Method for flow control and for reliable communication in a collaborative environment

Information

  • Patent Grant
  • Patent Number
    10,474,514
  • Date Filed
    Monday, June 26, 2017
  • Date Issued
    Tuesday, November 12, 2019
Abstract
Flow control of messages to/from clients and services in a remote access system. Within the remote access system a layered architecture may be provided that includes a client layer at a client device, a service layer associated with a service, and a server layer that is a communications proxy for messages sent between the client device and the service. The service layer may have a receive queue for messages received from the client device and a send queue of messages destined for the client device. The server layer may include an outbound queue of messages destined for the client device. Full duplex communications between the client layer and the server layer may be implemented using a service protocol in a first channel for communication between the client layer and the server layer and a second channel for communication between the server layer and the client layer.
Description
BACKGROUND

Ubiquitous remote access to services, application programs and data has become commonplace as a result of the growth and availability of broadband and wireless network access. As such, users are accessing application programs and data using an ever-growing variety of client devices (e.g., mobile devices, tablet computing devices, laptop/notebook/desktop computers, etc.). Data may be communicated to the devices from a remote server over a variety of networks including 3G and 4G mobile data networks, wireless networks such as WiFi and WiMax, wired networks, etc. Clients may connect to a server offering the services, application programs and data across many disparate network bandwidths and latencies. However, the services are usually connected to the server via a high-bandwidth, low-latency dedicated connection.


To effectively use full duplex communications to connect many clients to, e.g., one service through a server, a flow control mechanism is needed. The great disparity in bandwidth between the service and the server, and the clients and the server, presents a challenge to the implementation of the service, which must be able to provide timely information to each client according to the limits and capability of each client's connection.


SUMMARY

Disclosed herein are systems and methods for providing flow control in a remote access system. The method may include providing a client layer at a client device; providing a service layer associated with a service, the service layer having a service receive queue for messages received from the client device and a service send queue of messages destined for the client device; providing a server layer as a communications proxy for messages sent between the client device and the service, the server layer containing an outbound client queue of messages destined for the client device; and communicating a control signal to the service layer to send the messages in the service send queue to the server layer for transmission to the client device.


In accordance with some implementations, there is provided a method of providing recoverable (reliable) communications, comprising providing a client layer that remotely accesses an application over a network connection; providing a server layer that proxies communications between the client layer and the application; facilitating full duplex communications between the client layer and the server layer by implementing a service protocol in a first channel for communication between the client layer and the server layer, and the service protocol in a second channel for communication between the server layer and the client layer; detecting a failure of communication on the first channel or the second channel; and upon detecting the failure on the first channel or the second channel, reestablishing communication using the service protocol.


In accordance with yet other implementations, there is provided a computer readable medium having computer executable instructions thereon that when executed by a processor of a computing device, performs a method comprising providing a service layer associated with a service, the service layer having a service receive queue for messages received from a client device and a service send queue of messages destined for the client device; providing a server layer as a communications proxy for messages sent between the client device and the service, the server layer containing an outbound client queue of messages destined for the client device; and communicating a control signal to the service layer to send the messages in the service send queue to the server layer for transmission to the client device.


Other systems, methods, features and/or advantages will be or may become apparent to one with skill in the art upon examination of the following drawings and detailed description. It is intended that all such additional systems, methods, features and/or advantages be included within this description and be protected by the accompanying claims.





BRIEF DESCRIPTION OF THE DRAWINGS

The components in the drawings are not necessarily to scale relative to each other. Like reference numerals designate corresponding parts throughout the several views.



FIG. 1 is a simplified block diagram illustrating a system for providing remote access to an application at a remote device via a computer network;



FIG. 2 is a simplified block diagram illustrating operation of the remote access program;



FIG. 3A illustrates a layered architecture in accordance with the present disclosure;



FIG. 3B illustrates another layered architecture in accordance with the present disclosure that illustrates a many-to-one relationship of clients communicating to a service;



FIG. 4 illustrates a flow diagram of example operations performed within the system of FIGS. 1-2, 3A and 3B;



FIG. 5 illustrates a flow diagram of example operations performed within the system of FIGS. 1-2, 3A and 3B; and



FIG. 6 illustrates an exemplary computing device.





DETAILED DESCRIPTION

Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art. Methods and materials similar or equivalent to those described herein can be used in the practice or testing of the present disclosure. While implementations will be described for remotely accessing applications, it will become evident to those skilled in the art that the implementations are not limited thereto, but are applicable for remotely accessing any type of data or service via a remote device.


Referring to FIG. 1, illustrated is a system 100 for providing remote access to an application, data or other service via a computer network. The system comprises a client computer 112A or 112B, such as a wireless handheld device (for example, an IPHONE 112A or a BLACKBERRY 112B), connected via a computer network 110 such as, for example, the Internet, to a server 102B. Similarly, the client computing devices may also include a desktop/notebook personal computer 112C or a tablet device 112N that are connected by the communication network 110 to the server 102B. It is noted that the connections to the communication network 110 may be any type of connection, for example, Wi-Fi (IEEE 802.11x), WiMax (IEEE 802.16), Ethernet, 3G, 4G, etc.


The server 102B is connected, for example, via the computer network 110 to a Local Area Network (LAN) 109 or may be directly connected to the computer network 110. For example, the LAN 109 is an internal computer network of an institution such as a hospital, a bank, a large business, or a government department. Typically, such institutions still use a mainframe computer 102A and a database 108 connected to the LAN 109. Numerous application programs 107A may be stored in memory 106A of the mainframe computer 102A and executed on a processor 104A. Similarly, numerous application programs 107B may be stored in memory 106B of the server 102B and executed on a processor 104B. As used herein, the application programs 107A and 107B may be “services” offered for remote access. The mainframe computer 102A, the server 102B and the client computers 112A, 112B, 112C or 112N may be implemented using hardware such as that shown in the general purpose computing device of FIG. 6.


A client remote access program 121A, 121B, 121C, 121N may be designed for providing user interaction for displaying data and/or imagery in a human comprehensible fashion and for determining user input data in dependence upon received user instructions for interacting with the application program using, for example, a graphical display with touch-screen 114A or a graphical display 114B/114N and a keyboard 116B/116C of the client computers 112A, 112B, 112C, 112N, respectively. For example, the client remote access program is performed by executing executable commands on processor 118A, 118B, 118C, 118N with the commands being stored in memory 120A, 120B, 120C, 120N of the client computer 112A, 112B, 112C, 112N, respectively. Alternatively, a user interface program is executed on the server 102B (as one of application programs 107B) which is then accessed via a URL by a generic client application such as, for example, a web browser executed on the client computer 112A, 112B. The user interface is implemented using, for example, Hypertext Markup Language (HTML) 5.


The operation of a server remote access program 111B with the client remote access program 121A, 121B, 121C, 121N can be performed in cooperation with a state model, as illustrated in FIG. 2. When executed, the client remote access program 121A, 121B, 121C, 121N updates the state model in accordance with user input data received from a user interface program. The remote access program 121A, 121B, 121C, 121N may generate control data in accordance with the updated state model, and provide the same to the server remote access program 111B running on the server 102B.


Upon receipt of application data from an application program 107A or 107B, the server remote access program 111B updates the state model in accordance with the screen or application data, generates presentation data in accordance with the updated state model, and provides the same to the client remote access program 121A, 121B, 121C, 121N on the client computing device. The state model comprises an association of logical elements of the application program with corresponding states of the application program, with the logical elements being in a hierarchical order. For example, the logical elements may be a screen, a menu, a submenu, a button, etc. that make up the application program user interface. This enables the client device, for example, to natively display the logical elements. As such, a menu of the application program that is presented on a mobile phone will look like a native menu of the mobile phone. Similarly, the menu of the application program that is presented on a desktop computer will look like a native menu of the desktop computer operating system.
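The hierarchical state model described above can be sketched as a tree of logical UI elements, each associated with an application state. The following is a minimal illustration only; the class, element, and state names are hypothetical and are not taken from the patent.

```python
# Illustrative sketch of a hierarchical state model: logical UI elements
# (screen, menu, submenu, button) each associated with an application state.
# All names here are hypothetical, for illustration only.

class LogicalElement:
    def __init__(self, name, state=None):
        self.name = name          # e.g. "screen", "menu"
        self.state = state        # corresponding application state
        self.children = []        # hierarchical order of elements

    def add(self, child):
        self.children.append(child)
        return child

    def find(self, name):
        """Depth-first lookup of a logical element by name."""
        if self.name == name:
            return self
        for c in self.children:
            hit = c.find(name)
            if hit:
                return hit
        return None

# Build a tiny state model: screen -> menu -> submenu/button.
root = LogicalElement("screen", state="visible")
menu = root.add(LogicalElement("menu", state="collapsed"))
menu.add(LogicalElement("submenu", state="hidden"))
menu.add(LogicalElement("button", state="enabled"))

# A client updates the model from user input; the server generates
# presentation data from the updated model.
root.find("menu").state = "expanded"
```

Because only states and element identities are exchanged, each client can render the same logical "menu" element using its own native widget.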


The state model is determined such that each of the logical elements is associated with a corresponding state of the application program 107A or 107B. The state model may be determined such that the logical elements are associated with user interactions. For example, the logical elements of the application program are determined such that the logical elements comprise transition elements with each transition element relating a change of the state model to one of control data and application representation data associated therewith.


In some implementations, two or more of the client computing devices 112A, 112B, 112C . . . 112N may collaboratively interact with the application program 107A or 107B. As such, by communicating state information between each of the client computing devices 112A, 112B, 112C . . . 112N participating in a collaborative session and the mainframe computer 102A or server 102B, each of the participating client computing devices 112A, 112B, 112C . . . 112N may present a synchronized view of the display of the application program 107A or 107B.



FIG. 3A illustrates a layered architecture in accordance with the present disclosure. The layered architecture implements a flow control process which allows, e.g., the server 102B to effectively use full duplex communications to connect one or more clients (112A, 112B, 112C, 112N, etc.) to one service 107A or 107B through the server 102B. For example, full duplex communications may be achieved by using two half-duplex communications connections 305 between the clients 112A, 112B, 112C, 112N and the server remote access program 111B. As such, there is provided a mechanism for the service to tailor the production of information to match a client's network connection characteristics independent of other clients' network connection characteristics.


As illustrated in FIG. 3A, there are three layers in the system: a client layer 302, a server layer 304 and a service layer 306. In accordance with the present disclosure, the layered architecture accounts for the disparity in bandwidth between service/server and server/client(s). In particular, because of the disparities, the service layer 306 may asynchronously produce information at a rate for each client 112A, 112B, 112C, 112N such that the outbound message queues (described below) maintained on the server 102B for one or more clients grow much faster than they can be emptied. This may lead to undesirable performance characteristics, because the older information in the queues must be transmitted before newer information. This becomes more problematic as time passes and queue lengths expand, because the client view falls further and further behind the current state of the service.


The client layer 302 resides in each client 112A, 112B, 112C, 112N and is used to connect to the server layer 304. Typically, this includes the client application 121A, 121B, 121C, 121N, e.g., a web browser, dedicated application, etc. The server layer 304 serves to proxy communications from each client 112A, 112B, 112C, 112N to the service layer 306, as described below. In accordance with the present disclosure, each client 112A, 112B, 112C, 112N has a unique identifier (e.g., a session id). Each client layer 302 contains a client receive queue 310 of messages received from the server 102B, which are sent on behalf of a service (application 107A or 107B) to which the client is logically connected. Each client layer 302 also contains a client send queue 312 of messages it sends to the server 102B, which are destined for one particular service (e.g., 107B).


An instance of the service layer 306 (e.g., associated with application 107A or 107B) connects to the server layer 304, which proxies communications from the service to each client logically associated with that service. The service layer 306 contains a service receive queue 318 of messages it receives from the server on behalf of all clients the service is logically connected to. Each message is labeled with the unique identifier of the client the server received the message from. The service layer 306 also contains a service send queue 320 of messages it sends to the server 102B. Each message is destined for only one of the clients logically connected to the server 102B. If the service needs to send certain content to each client, it creates a message containing that content for each logically connected client. Each message is labeled with the unique identifier of the client the server should send the message to.
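The service-layer queues and per-client labeling described above can be sketched as follows. This is a minimal illustration assuming simple dict-based messages; the class and field names are hypothetical, not taken from the patent.

```python
# Sketch of the service-layer queues: a receive queue of messages labeled
# with the originating client's unique id, and a send queue of messages
# each labeled with a single destination client id.
# Class and field names are illustrative only.
from collections import deque

class ServiceLayer:
    def __init__(self):
        self.receive_queue = deque()   # messages from server, labeled by client id
        self.send_queue = deque()      # messages to server, one destination each

    def on_message_from_server(self, client_id, payload):
        # Inbound messages arrive already labeled with the sender's id.
        self.receive_queue.append({"client_id": client_id, "payload": payload})

    def broadcast(self, content, connected_client_ids):
        # To send the same content to every client, the service creates
        # one labeled message per logically connected client.
        for cid in connected_client_ids:
            self.send_queue.append({"client_id": cid, "payload": content})

svc = ServiceLayer()
svc.on_message_from_server("client-1", "hello")
svc.broadcast("refresh", ["client-1", "client-2"])
```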



FIG. 3B illustrates another layered architecture in accordance with the present disclosure that illustrates a many-to-one relationship of clients to a service. Like elements to the architecture of FIG. 3A are not described again hereinbelow. FIG. 3B illustrates two clients (112A and 112N) communicating to one service (107A or 107B). The server layer 304 logically connects clients to the service layer 306 in a many-to-one relationship by providing a service multiplexor 322 and a client de-mux 326 that serve to receive communications from the clients and from the service, respectively, as described below.


In FIG. 3B, the server layer 304 acts as a communications proxy for all messages sent between the clients and the service. Messages from each connected client are received at the service multiplexor 322 in the server layer 304, which labels the inbound messages using a unique identifier associated with the particular client 121A or 121N that sent the message to the server layer 304. The unique identifier may be, for example, the session id. The service multiplexor 322 then combines (multiplexes) the messages into a single stream of messages destined for the service layer 306. The single stream of messages may first be queued in the receive queue 313.


When the service layer 306 communicates to the clients, outbound messages are queued in the service send queue 320 and are communicated to the client de-mux 326 in the server layer 304. The client de-mux 326 examines the unique identifier of the message and parses the messages into an appropriate client queue 314 or 316 in accordance with the unique identifier of the message and the destination client device. The messages are communicated from the client queue 314 or 316 to the client 121A or 121N over its respective half-duplex communication connection 305.
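The multiplex/de-multiplex behavior of the server layer can be sketched as below. This is an illustration under the assumption of dict-based labeled messages; the names are hypothetical and not from the patent.

```python
# Sketch of the server layer's service multiplexor and client de-mux:
# inbound client messages are labeled and merged into one stream for the
# service; outbound service messages are routed into per-client queues
# according to their label. Names are illustrative only.
from collections import deque

class ServerLayer:
    def __init__(self):
        self.receive_queue = deque()   # single inbound stream for the service
        self.client_queues = {}        # per-client outbound queues

    def multiplex(self, client_id, payload):
        # Service multiplexor: label with the sender's unique id (e.g. a
        # session id) and merge into the stream destined for the service.
        self.receive_queue.append({"client_id": client_id, "payload": payload})

    def demux(self, message):
        # Client de-mux: examine the label and route the message into the
        # outbound queue of the destination client.
        q = self.client_queues.setdefault(message["client_id"], deque())
        q.append(message["payload"])

server = ServerLayer()
server.multiplex("client-1", "input-event")
server.demux({"client_id": "client-2", "payload": "screen-update"})
```

With a single connected client, both functions degenerate to pass-through, matching the single-client case noted below.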


While the architecture of FIG. 3B is described as connecting plural clients to a service in a many-to-one fashion, the architecture could also connect a single client to the service, where the service multiplexor 322 and client de-mux 326 would merely pass message streams between the client and the service without alteration thereof.



FIG. 4 illustrates an operational flow diagram 400 of providing a control signal that controls the message flow associated with a service for each logically connected client in the architectures of FIGS. 3A, 3B. Initially, the service layer 306 is in a “transmission off” state with respect to each logically connected client (S402; FIG. 4). That is, the service layer 306 will not produce any messages for any logically connected client. When the service layer 306 receives a “transmission on” message from the server 102B containing the unique identifier of one of the clients, it enters the “transmission on” state with respect to that client (S404; FIG. 4). In accordance with some implementations (path “A” in FIG. 4), when in the “transmission on” state for a given client, the service layer 306 will send all outstanding messages for that client to the server layer 304 via a low-latency, high-bandwidth link 308, which may also provide for full-duplex communications using two half-duplex links (S406; FIG. 4), and then re-enter the “transmission off” state with respect to that client (S408; FIG. 4).


In some implementations, the service layer 306 may also allow its subcomponents to asynchronously generate message fulfillment requests for any client. In this implementation (path “B” in FIG. 4), when the service layer 306 enters the “transmission on” state for a given client, it will generate new messages associated with pending message fulfillment requests for the client (S407; FIG. 4), send any outstanding and newly generated messages to the server (S409; FIG. 4), and then re-enter the “transmission off” state with respect to that client (S411; FIG. 4).


When the server layer 304 receives a message from a logically connected client, the server layer 304 labels the message with the client's unique identifier and sends it to the service layer 306. In some implementations, the messages are labeled and combined by the service multiplexor 322 into a single stream of messages. The inbound messages may be queued in a receive queue 313.


When the server layer 304 receives a message from the service layer 306, the server layer 304 places the message in a client queue 314 or 316 of outbound messages to be sent to the client for which the message is labeled. In some implementations, the outstanding messages may be processed by the client de-mux 326 before being placed into the client queue 314 or 316. When the outbound message client queue 314 or 316 for a client is empty, the server layer 304 sends a “transmission on” message containing the unique identifier of the client to the associated service layer (S403; FIG. 4). In this manner, the size of the outbound message client queues 314 and 316 on the server does not grow unbounded, and any information placed in the outbound queues will always be relatively “fresh.”
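The per-client "transmission on"/"transmission off" handshake (path "A" of FIG. 4) can be sketched as follows. This is an illustration only; the class and variable names are hypothetical, and plain deques stand in for the server's outbound client queues.

```python
# Sketch of the per-client flow-control handshake: the service starts in
# "transmission off" for each client, flushes its outstanding messages for
# a client only upon a "transmission on" signal, and the server issues
# that signal when the client's outbound queue drains. Names illustrative.
from collections import deque

class FlowControlledService:
    def __init__(self):
        self.pending = {}          # client_id -> deque of outstanding messages

    def queue_message(self, client_id, payload):
        # Messages accumulate while in the "transmission off" state.
        self.pending.setdefault(client_id, deque()).append(payload)

    def transmission_on(self, client_id, server_queue):
        # On "transmission on": send all outstanding messages for that
        # client to the server, then re-enter "transmission off".
        while self.pending.get(client_id):
            server_queue.append(self.pending[client_id].popleft())

service = FlowControlledService()
service.queue_message("client-1", "update-1")
service.queue_message("client-1", "update-2")

outbound = deque()                 # server's outbound queue for client-1
if not outbound:                   # queue empty: server signals "transmission on"
    service.transmission_on("client-1", outbound)
```

Because the service only produces a burst when the server's queue for that client is empty, the queue length is bounded by one burst per client, which is what keeps the queued information "fresh".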


Thus, as described above, message queues may be managed to account for latency and bandwidth disparities. The result is that an outbound message queue on the server is only empty under two conditions: (1) there is no information to be sent to a client; (2) enough time and bandwidth have allowed the server to send all of the messages; or some combination of (1) and (2).


With reference to FIGS. 3A and 3B, in accordance with some implementations, HTTP/HTTPS protocols may be used as a service protocol to simulate the full duplex connection between client and server by using two concurrent HTTP/HTTPS connections over the communication connections 305. With HTTP/HTTPS, it is possible to send an “infinite request” from the client to the server. In this case, the server is instructed that the length of the request payload is indeterminate, and in this fashion, information may be streamed continually from the client to the server. It is also possible to receive an “infinite response” over HTTP/HTTPS. In this case, the client is instructed that the length of the response payload is indeterminate, and in this fashion, information may be continually streamed from the server to the client. Thus, a full duplex communication circuit may be fashioned from two half duplex HTTP/HTTPS channels. As will be described, the client initiates both connections, listening on the “infinite response” connection while transmitting on the “infinite request” connection.
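The full-duplex circuit built from an "infinite request" and an "infinite response" channel can be sketched conceptually as two independent unidirectional streams. In this illustration, queues stand in for the two streaming HTTP/HTTPS connections; the names are hypothetical.

```python
# Conceptual sketch: a full-duplex circuit assembled from two half-duplex
# channels, analogous to the "infinite request" (client -> server) and
# "infinite response" (server -> client) HTTP/HTTPS connections.
# Queues stand in for the streaming connections; names are illustrative.
import queue

class DuplexCircuit:
    def __init__(self):
        self.infinite_request = queue.Queue()    # client transmits here
        self.infinite_response = queue.Queue()   # client listens here

    def client_send(self, msg):
        # Streamed upstream as part of the indeterminate-length request body.
        self.infinite_request.put(msg)

    def server_send(self, msg):
        # Streamed downstream as part of the indeterminate-length response body.
        self.infinite_response.put(msg)

circuit = DuplexCircuit()
circuit.client_send("keystroke")
circuit.server_send("screen-delta")
```

Note that, as the text states, the client initiates both connections; the server only ever transmits on the response side of a connection the client opened.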


As noted above, each client layer 302 contains a client receive queue 310 of messages received from the server 102B, which are sent on behalf of a service (application program 107A, 107B) to which the client is logically connected. The client layer 302 receives these messages using an “infinite response” over one HTTP/HTTPS connection (although any other half duplex connection will suffice). Each client layer 302 also contains a client send queue 312 of messages it sends to the server 102B, which are destined for one particular service (e.g., application program 107A, 107B). The client layer 302 sends these messages using an “infinite request” over a second HTTP/HTTPS connection (although any other half duplex connection will suffice).


When the server layer 304 receives a message from a client, the server layer 304 labels the message with the client's unique identifier and sends it to the service layer 306. The message is typically received by the server layer 304 via an “infinite request” made over one HTTP/HTTPS connection. When the server layer 304 receives a message from the service layer 306, the server layer 304 places the message in the client queue 314 or 316 of outbound messages to be sent to the client for which the message is labeled. The messages are then sent from the client queue 314 or 316 via an “infinite response” made over a second HTTP/HTTPS connection.


A failure of the HTTP/HTTPS communications channel (or any half duplex communications channel) utilized in the layered architecture may result in undesirable performance characteristics. Distinguishing between silence on the listening end of a half duplex channel and failure of that channel is sometimes difficult, whereas detecting a failure of a half duplex channel from the transmitting end is generally not a problem. Also, one or more messages may be “in flight” between the client and server when a half duplex communications channel fails. In this circumstance, there is no way for the transmitting end of a half duplex channel to know which messages have been successfully received.


Thus, in accordance with implementations of the present disclosure, there are provided methods for detection of communications failure from the listening side of a connection and recovery of information that may have been “in flight” over one or both connections at the time of failure. Further, while the information has been described as being communicated using HTTP/HTTPS protocols, other protocols that provide connection-based communications may be utilized, such as FTP, UDP, RTSP, RTP, etc.



FIG. 5 illustrates an operational flow diagram 500 of providing reliable messaging associated with a service for each logically connected client as follows. For example, in some implementations, frequent “keep alive” messages may be sent from service to client and from client to service (S502; FIG. 5). This allows the listening end of a communication channel to distinguish silence from failure. If the listening end of a half duplex channel does not receive a “keep-alive” message within an expected period of time, it may assume that the connection has failed, and take steps to initiate a new connection (client side), or wait for new connection (server side) (S504; FIG. 5).
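Listener-side failure detection with keep-alives can be sketched as a timestamp check, as below. The timeout values and names are assumptions for illustration; a deterministic fake clock replaces real time.

```python
# Sketch of listener-side failure detection using keep-alive messages:
# if nothing (keep-alive or otherwise) arrives within the expected window,
# the listener assumes the half-duplex channel has failed.
# Timeout values and names are illustrative assumptions.
import time

KEEPALIVE_INTERVAL = 5.0                 # assumed sender emission period
FAILURE_TIMEOUT = 3 * KEEPALIVE_INTERVAL # tolerate a few missed keep-alives

class Listener:
    def __init__(self, now=time.monotonic):
        self.now = now
        self.last_heard = self.now()

    def on_message(self, msg):
        # Any message, including a bare keep-alive, proves the channel is live.
        self.last_heard = self.now()

    def channel_failed(self):
        return (self.now() - self.last_heard) > FAILURE_TIMEOUT

clock = [0.0]                            # fake clock for a deterministic example
listener = Listener(now=lambda: clock[0])
listener.on_message("keep-alive")
clock[0] = 10.0                          # silence, but still within the timeout
alive = not listener.channel_failed()
clock[0] = 100.0                         # well past the timeout: assume failure
failed = listener.channel_failed()
```

On failure the client side would then initiate a new connection, while the server side waits for one, as the text describes.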


In some implementations, independent message sequence numbers may be used on the client and server. Every message sent from the client send queue 312 to the server may be annotated with a sequence number indicating the order in which each message is sent relative to all other messages sent to the server (S506; FIG. 5). When a message is sent from the client, it is not immediately removed from the client's outbound queue. The next message in the client's outbound message queue becomes the next message to be sent, under normal operation. Similarly, every message sent from the server outbound message client queue 314 or 316 to a client is annotated with a sequence number indicating the order in which each message is sent relative to all other messages sent to the client (S506; FIG. 5). When a message is sent from the server, it is not immediately removed from the server's outbound queue. The next message in the server's outbound message queue becomes the next message to be sent, under normal operation.


In some implementations, building upon the previous implementations, “sequence acknowledgement” message annotations may be used between the client and server so that each side knows what messages have been successfully received by the other. Every time the client sends a message to the server, the client annotates the message with the sequence number of the last message it successfully received from the server on the “listening” connection (S508; FIG. 5). If the server receives this message, the server can safely remove the message associated with the sequence number from its outbound queue, and all preceding it (S510; FIG. 5). Similarly, every time the server sends a message to the client, the server annotates the message with the sequence number of the last message it successfully received from the client on the “listening” connection (S508; FIG. 5). If the client receives this message, the client can safely remove the message associated with the sequence number from its outbound queue, and all preceding it (S510; FIG. 5).
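The sequence-number and piggybacked-acknowledgement scheme of the two preceding paragraphs can be sketched symmetrically for both endpoints, as below. The class and field names are hypothetical, not from the patent.

```python
# Sketch of sequence-numbered sends with piggybacked acknowledgements:
# every outbound message carries a sequence number plus the last sequence
# number received from the peer, letting the peer prune acknowledged
# messages (and all preceding) from its outbound queue. Names illustrative.
from collections import deque

class ReliableEndpoint:
    def __init__(self):
        self.outbound = deque()       # (seq, ack, payload): kept until acked
        self.next_seq = 1
        self.last_received_seq = 0

    def send(self, payload):
        # Annotate with our sequence number and a "sequence acknowledgement"
        # of the last message we received. The message stays queued until
        # the peer acknowledges it.
        msg = {"seq": self.next_seq, "ack": self.last_received_seq,
               "payload": payload}
        self.outbound.append(msg)
        self.next_seq += 1
        return msg

    def receive(self, msg):
        self.last_received_seq = msg["seq"]
        # The peer's ack lets us drop that message and all preceding it.
        while self.outbound and self.outbound[0]["seq"] <= msg["ack"]:
            self.outbound.popleft()

client, server = ReliableEndpoint(), ReliableEndpoint()
server.receive(client.send("m1"))     # server learns of client seq 1
client.receive(server.send("r1"))     # server's reply acks client seq 1
```

After this exchange the client's outbound queue is empty (its message was acknowledged), while the server's reply remains queued until the client's next send acknowledges it.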


In some implementations, when a new connection is initiated because of failure, any messages previously sent that remain in the associated outbound queue may not have been received by the other side, and are re-sent. When the client detects a failure on the “infinite response” channel (i.e., the channel it is listening on), it abandons that connection and initiates a new HTTP/HTTPS “infinite response” connection (S512; FIG. 5). If this succeeds, the server recognizes the new connection and starts sending from the start of its outbound message queue, instead of from after the last message it previously sent (S514; FIG. 5). In this fashion, any in-transit messages that the client may not have received will be resent. The client may detect some duplicate messages that it had not been able to successfully acknowledge to the server, and it can safely throw such duplicate messages away (S516; FIG. 5). Such messages will have a sequence number less than or equal to the last successful sequence number received before failure detection. Similarly, when the client detects a failure on the “infinite request” channel (i.e., the channel it is transmitting on), it abandons that connection and initiates a new HTTP/HTTPS “infinite request” connection (S512; FIG. 5). If this succeeds, it starts sending messages from the start of its outbound queue, instead of from after the last message it previously sent (S514; FIG. 5). The server recognizes the new connection and safely ignores any duplicate messages it receives that it may not have had a chance to successfully acknowledge (S516; FIG. 5). Such messages will have a sequence number less than or equal to the last successful sequence number received before the new connection was initiated. Thus, as long as the client is able to successfully initiate new HTTP/HTTPS connections, all communications failures can be recovered from successfully. 
In the case that a new connection cannot be made within an acceptable period of time, timeouts on both the server and client sides can be used to terminate the session, recover resources, and take other appropriate actions. If a connection is attempted by the client after the session is terminated on the server, then the server will reject the connection attempt, and the client can inform the user and/or take other appropriate actions.
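The recovery behavior described above, resending from the start of the outbound queue and discarding duplicates by sequence number, can be sketched as follows. This is an illustration only; the helper names are hypothetical.

```python
# Sketch of recovery after a channel failure: the transmitting side resends
# from the START of its outbound queue over the new connection, and the
# receiving side discards duplicates whose sequence numbers it has already
# seen before the failure. Names are illustrative.
from collections import deque

def resend_all(outbound):
    # On a new connection, send from the start of the queue, not from after
    # the last message previously sent: messages that were in flight at the
    # time of failure are thereby retransmitted.
    return list(outbound)

def accept(messages, last_received_seq):
    # Drop any message whose sequence number is less than or equal to the
    # last sequence number successfully received before failure detection.
    fresh = []
    for msg in messages:
        if msg["seq"] <= last_received_seq:
            continue               # duplicate: safely thrown away
        fresh.append(msg)
        last_received_seq = msg["seq"]
    return fresh, last_received_seq

outbound = deque([{"seq": 1, "payload": "a"},
                  {"seq": 2, "payload": "b"},
                  {"seq": 3, "payload": "c"}])
# The receiver had already seen seq 1 before the failure.
fresh, last = accept(resend_all(outbound), last_received_seq=1)
```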


It is noted that combinations of the above techniques may be used to provide reliable (recoverable) communications. Thus, the above techniques provide for reliable communication between a client and a server while providing full-duplex communication capabilities.



FIG. 6 shows an exemplary computing environment in which example embodiments and aspects may be implemented. The computing system environment is only one example of a suitable computing environment and is not intended to suggest any limitation as to the scope of use or functionality.


Numerous other general purpose or special purpose computing system environments or configurations may be used. Examples of well known computing systems, environments, and/or configurations that may be suitable for use include, but are not limited to, personal computers, servers, handheld or laptop devices, multiprocessor systems, microprocessor-based systems, network personal computers (PCs), minicomputers, mainframe computers, embedded systems, distributed computing environments that include any of the above systems or devices, and the like.


Computer-executable instructions, such as program modules, being executed by a computer may be used. Generally, program modules include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types. Distributed computing environments may be used where tasks are performed by remote processing devices that are linked through a communications network or other data transmission medium. In a distributed computing environment, program modules and other data may be located in both local and remote computer storage media including memory storage devices.


With reference to FIG. 6, an exemplary system for implementing aspects described herein includes a computing device, such as computing device 600. In its most basic configuration, computing device 600 typically includes at least one processing unit 602 and memory 604. Depending on the exact configuration and type of computing device, memory 604 may be volatile (such as random access memory (RAM)), non-volatile (such as read-only memory (ROM), flash memory, etc.), or some combination of the two. This most basic configuration is illustrated in FIG. 6 by dashed line 606.


Computing device 600 may have additional features/functionality. For example, computing device 600 may include additional storage (removable and/or non-removable) including, but not limited to, magnetic or optical disks or tape. Such additional storage is illustrated in FIG. 6 by removable storage 608 and non-removable storage 610.


Computing device 600 typically includes a variety of tangible computer readable media. Computer readable media can be any available tangible media that can be accessed by device 600 and includes both volatile and non-volatile media, removable and non-removable media.


Tangible computer storage media include volatile and non-volatile, and removable and non-removable media implemented in any method or technology for storage of information such as computer readable instructions, data structures, program modules or other data. Memory 604, removable storage 608, and non-removable storage 610 are all examples of computer storage media. Tangible computer storage media include, but are not limited to, RAM, ROM, electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, CD-ROM, digital versatile disks (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store the desired information and which can be accessed by computing device 600. Any such computer storage media may be part of computing device 600.


Computing device 600 may contain communications connection(s) 612 that allow the device to communicate with other devices. Computing device 600 may also have input device(s) 614 such as a keyboard, mouse, pen, voice input device, touch input device, etc. Output device(s) 616 such as a display, speakers, printer, etc. may also be included. All these devices are well known in the art and need not be discussed at length here.


It should be understood that the various techniques described herein may be implemented in connection with hardware or software or, where appropriate, with a combination of both. Thus, the methods and apparatus of the presently disclosed subject matter, or certain aspects or portions thereof, may take the form of program code (i.e., instructions) embodied in tangible media, such as floppy diskettes, CD-ROMs, hard drives, or any other machine-readable storage medium wherein, when the program code is loaded into and executed by a machine, such as a computer, the machine becomes an apparatus for practicing the presently disclosed subject matter. In the case of program code execution on programmable computers, the computing device generally includes a processor, a storage medium readable by the processor (including volatile and non-volatile memory and/or storage elements), at least one input device, and at least one output device. One or more programs may implement or utilize the processes described in connection with the presently disclosed subject matter, e.g., through the use of an application programming interface (API), reusable controls, or the like. Such programs may be implemented in a high level procedural or object-oriented programming language to communicate with a computer system. However, the program(s) can be implemented in assembly or machine language, if desired. In any case, the language may be a compiled or interpreted language and it may be combined with hardware implementations.


Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are disclosed as example forms of implementing the claims.

Claims
  • 1. A method of providing reliable communications, comprising: providing a client layer that remotely accesses a service over a network connection; providing a server layer that proxies communications between the client layer and the service; facilitating full duplex communications between the client layer and the server layer by implementing a service protocol in a first channel for communication between the client layer and the server layer, and the service protocol in a second channel for communication between the server layer and the client layer; communicating first sequence acknowledgement messages over the first channel that include a first sequence number of a last message received from the server; communicating, in response to the first sequence acknowledgement messages, second sequence acknowledgement messages over the second channel that include a second sequence number of a last message received from the client; detecting a failure of communication on the first channel or the second channel by sending keep alive messages from the service to the client layer and from the client layer to the service, and if one of the client layer or the service does not receive the keep alive messages within a predetermined period of time, determining that the first channel or the second channel on which the keep alive messages have not been received has failed; and upon detecting the failure on the first channel or the second channel, reestablishing communication using the service protocol and either a last first sequence number received by the server layer or a last second sequence number received by the client layer.
  • 2. The method of claim 1, wherein the service protocol is Hypertext Transfer Protocol (HTTP) or Hyper Text Transfer Protocol Secure (HTTPS).
  • 3. The method of claim 2, further comprising: sending an infinite request from the client layer to the server layer on the first channel; and sending an infinite response from the server to the client on the second channel.
  • 4. The method of claim 1, further comprising: applying independent message sequence numbers to each message sent between the client layer and the server layer to indicate an order in which each message is sent; sending a message from the client layer or the server layer; and leaving the message in an outbound client queue at the client layer or in a service send queue in the server layer, respectively.
  • 5. The method of claim 4, further comprising: receiving a subsequent message from the server layer or the client layer, the subsequent message annotated with a sequence number of a last message successfully received from the client layer or the server layer; and removing messages having smaller sequence numbers than the sequence number from the outbound client queue and the service send queue.
  • 6. The method of claim 1, further comprising: resending any messages previously sent that remain in an outbound client queue of the server layer or in a service send queue in the client layer when the failure is detected.
  • 7. A method for reliable messaging between a service and a client, comprising: listening, at the client, for periodic keep alive messages that are communicated by the service on a first half-duplex channel; listening, at the service, for periodic keep alive messages that are communicated by the client on a second half-duplex channel; communicating messages between the client and the server over the first half-duplex channel and the second half-duplex channel, wherein each message contains a unique message sequence number; communicating a sequence acknowledgement message between the client and the service in response to the messages over the first half-duplex channel and the second half-duplex channel, the sequence acknowledgement message containing the unique message sequence number that was last successfully received by the service and the client; detecting a communication failure on the first half-duplex channel or the second half-duplex channel if the periodic keep alive messages are not received at the client after a predetermined period of time in order to determine that the first half-duplex channel or the second half-duplex channel on which the keep alive messages have not been received has failed; reestablishing, by the client, communication with the service over a new first half-duplex channel or a new second half-duplex channel in accordance with the detecting; and communicating, by the service, messages over the first half-duplex channel or the new first half-duplex channel beginning with a unique sequence number that is after a unique sequence number associated with a last sequence acknowledgement message received by the service.
  • 8. The method of claim 7, further comprising: waiting, at the service, for the client to reestablish communication over the new first half-duplex channel or the new second half-duplex channel.
  • 9. The method of claim 7, further comprising using the unique sequence number to indicate an order in which each message is sent relative to all other messages.
  • 10. The method of claim 7, further comprising: queuing messages from the client to the service in a client outbound queue; and removing a message from the client outbound queue after the sequence acknowledgement message containing the unique message sequence number of the message is received at the client.
  • 11. The method of claim 7, further comprising: queuing messages from the service to the client in a service outbound queue; and removing a message from the service outbound queue after the sequence acknowledgement message containing the unique message sequence number of the message is received at the service.
  • 12. The method of claim 7, wherein the first half-duplex channel is an infinite response channel established using Hypertext Transfer Protocol (HTTP) or Hyper Text Transfer Protocol Secure (HTTPS).
  • 13. A method for reliable messaging between a service and a client, comprising: listening, at the client, for periodic keep alive messages that are communicated by the service on a first half-duplex channel; listening, at the service, for periodic keep alive messages that are communicated by the client on a second half-duplex channel; communicating messages from the client to the service over the second half-duplex channel in response to associated previous messages communicated from the service to the client, wherein each message contains a unique message sequence number; detecting a communication failure on the first half-duplex channel or the second half-duplex channel if the periodic keep alive messages are not received at the client after a predetermined period of time in order to determine that the first half-duplex channel or the second half-duplex channel on which the keep alive messages have not been received has failed; reestablishing, by the client, communication with the service over a new first half-duplex channel or a new second half-duplex channel in accordance with the detecting; and resuming communicating messages from the client to the service over the second half-duplex channel or the new second half-duplex channel in accordance with a last unique message sequence number received by the client.
  • 14. The method of claim 13, further comprising using the unique sequence number to indicate an order in which each message is sent relative to all other messages.
  • 15. The method of claim 13, further comprising: queuing messages from the client to the service in a client outbound queue; and removing a message from the client outbound queue after the sequence acknowledgement message containing the unique message sequence number of the message is received at the client.
  • 16. The method of claim 13, wherein the first half-duplex channel is an infinite response channel established using Hypertext Transfer Protocol (HTTP) or Hyper Text Transfer Protocol Secure (HTTPS).
  • 17. The method of claim 13, further comprising: communicating a sequence acknowledgement message from the service to the client over the first half-duplex channel, wherein the sequence acknowledgement message contains a unique message sequence number that was last successfully received by the service.
  • 18. The method of claim 17, further comprising: resuming communicating messages from the client to the service beginning with a unique sequence number that is after a unique sequence number associated with a last sequence acknowledgement message received from the service.
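The claims above describe a reliable-messaging scheme built from per-message sequence numbers, sequence acknowledgements that prune an outbound queue, and keep-alive timeouts that trigger channel reestablishment and resend. The following is a minimal Python sketch of one endpoint of such a scheme, not the patented implementation; the class and method names are illustrative, and transport details (the HTTP/HTTPS half-duplex channels) are omitted.

```python
import time
from collections import deque


class ReliableEndpoint:
    """One endpoint (client layer or service layer) of the sketched protocol.

    Outbound messages are annotated with monotonically increasing sequence
    numbers and held in a queue until the peer acknowledges them; a missing
    keep-alive within the predetermined period marks the channel as failed.
    """

    def __init__(self, keep_alive_timeout=30.0):
        self.next_seq = 1                       # next sequence number to assign
        self.outbound = deque()                 # (seq, payload) awaiting acknowledgement
        self.keep_alive_timeout = keep_alive_timeout
        self.last_keep_alive = time.monotonic()

    def send(self, payload):
        """Annotate the message with a sequence number and keep a copy
        queued until the peer acknowledges receipt."""
        seq = self.next_seq
        self.next_seq += 1
        self.outbound.append((seq, payload))
        return seq

    def on_sequence_ack(self, acked_seq):
        """Remove every queued message whose sequence number is at or
        below the peer's last successfully received sequence number."""
        while self.outbound and self.outbound[0][0] <= acked_seq:
            self.outbound.popleft()

    def on_keep_alive(self):
        """Record receipt of a periodic keep-alive from the peer."""
        self.last_keep_alive = time.monotonic()

    def channel_failed(self):
        """The channel is deemed failed when no keep-alive has arrived
        within the predetermined period of time."""
        return time.monotonic() - self.last_keep_alive > self.keep_alive_timeout

    def messages_to_resend(self):
        """On reestablishing the channel, resend everything still
        unacknowledged, in original send order."""
        return list(self.outbound)
```

In use, each side would call `on_sequence_ack` with the sequence number carried in the peer's acknowledgement messages, so that after a detected failure only messages newer than the last acknowledged one are replayed over the new channel.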
CROSS-REFERENCE TO RELATED APPLICATIONS

This application is a division of U.S. patent application Ser. No. 13/586,085, filed Aug. 15, 2012, entitled “Method for Flow Control and for Reliable Communication in a Collaborative Environment.” This application claims priority to U.S. Provisional Patent Application No. 61/523,662, filed Aug. 15, 2011, entitled “Method for Recoverable and Reliable Communications in a Collaborative Environment.” This application also claims priority to U.S. Provisional Patent Application No. 61/523,670, filed Aug. 15, 2011, entitled “Method for Flow Control in a Collaborative Environment.” The disclosures of the above-referenced applications are incorporated herein by reference in their entireties.

US Referenced Citations (190)
Number Name Date Kind
4975690 Torres Dec 1990 A
5345550 Bloomfield Sep 1994 A
5555003 Montgomery et al. Sep 1996 A
5742778 Hao et al. Apr 1998 A
5844553 Hao et al. Dec 1998 A
5870559 Leshem et al. Feb 1999 A
5870759 Bauer et al. Feb 1999 A
5920311 Anthias Jul 1999 A
5978842 Noble et al. Nov 1999 A
6045048 Wilz et al. Apr 2000 A
6061689 Chang et al. May 2000 A
6075531 DeStefano Jun 2000 A
6141698 Krishnan et al. Oct 2000 A
6145098 Nouri et al. Nov 2000 A
6253228 Ferris et al. Jun 2001 B1
6343313 Salesky et al. Jan 2002 B1
6453334 Vinson et al. Sep 2002 B1
6453356 Sheard et al. Sep 2002 B1
6570563 Honda May 2003 B1
6601233 Underwood Jul 2003 B1
6633688 Nixon Oct 2003 B1
6792607 Burd et al. Sep 2004 B1
6918113 Patel et al. Jul 2005 B2
6938096 Greschler et al. Aug 2005 B1
6970459 Meier Nov 2005 B1
6976077 Lehew et al. Dec 2005 B1
7065568 Bracewell et al. Jun 2006 B2
7069227 Lintel, III et al. Jun 2006 B1
7073059 Worely et al. Jul 2006 B2
7149761 Cooke et al. Dec 2006 B2
7167893 Malone et al. Jan 2007 B1
7174504 Tsao Feb 2007 B2
7181686 Bahrs Feb 2007 B1
7193985 Lewis et al. Mar 2007 B1
7240162 de Vries Jul 2007 B2
7246063 James et al. Jul 2007 B2
7320131 O'Toole, Jr. Jan 2008 B1
7343310 Stender Mar 2008 B1
7350151 Nakajima Mar 2008 B1
7418711 Lee Aug 2008 B1
7451196 de Vries et al. Nov 2008 B1
7484208 Nelson Jan 2009 B1
7577751 Vinson et al. Aug 2009 B2
7620901 Carpenter et al. Nov 2009 B2
7647370 Liu et al. Jan 2010 B1
7650444 Dirstine et al. Jan 2010 B2
7656799 Samuels et al. Feb 2010 B2
7706399 Janczak Apr 2010 B2
7725331 Schurenberg et al. May 2010 B2
7802183 Essin Sep 2010 B1
7831919 Viljoen et al. Nov 2010 B1
7921078 McCuller Apr 2011 B2
7941488 Goodman et al. May 2011 B2
7966572 Matthews et al. Jun 2011 B2
8024523 de Vries et al. Sep 2011 B2
8261345 Hitomi et al. Sep 2012 B2
8356252 Raman et al. Jan 2013 B2
8359591 de Vries et al. Jan 2013 B2
8509230 Vinson et al. Aug 2013 B2
8527706 de Vries et al. Sep 2013 B2
8572178 Frazzini et al. Oct 2013 B1
8606952 Pasetto et al. Dec 2013 B2
8607158 Molander et al. Dec 2013 B2
8615579 Vincent et al. Dec 2013 B1
8627081 Grimen et al. Jan 2014 B2
8667054 Tahan Mar 2014 B2
8832260 Raja et al. Sep 2014 B2
8924512 Stoyanov et al. Dec 2014 B2
20010047393 Arner et al. Nov 2001 A1
20020032751 Bharadwaj Mar 2002 A1
20020032804 Hunt Mar 2002 A1
20020092029 Smith Jul 2002 A1
20030014735 Achlioptas et al. Jan 2003 A1
20030023670 Walrath Jan 2003 A1
20030055893 Sato et al. Mar 2003 A1
20030065738 Yang et al. Apr 2003 A1
20030120324 Osborn et al. Jun 2003 A1
20030120762 Yepishin et al. Jun 2003 A1
20030149941 Tsao Aug 2003 A1
20030163514 Waldschmidt Aug 2003 A1
20030184584 Vachuska et al. Oct 2003 A1
20030208472 Pham Nov 2003 A1
20030233594 Earl Dec 2003 A1
20040015842 Nanivadekar et al. Jan 2004 A1
20040029638 Hytcheson et al. Feb 2004 A1
20040039742 Barsness et al. Feb 2004 A1
20040068516 Lee et al. Apr 2004 A1
20040106916 Quaid et al. Jun 2004 A1
20040117804 Scahill et al. Jun 2004 A1
20040153525 Borella Aug 2004 A1
20040162876 Kohavi Aug 2004 A1
20040183827 Putterman et al. Sep 2004 A1
20040236633 Knauerhase et al. Nov 2004 A1
20040243919 Muresan et al. Dec 2004 A1
20040249885 Petropoulakis et al. Dec 2004 A1
20050005024 Samuels et al. Jan 2005 A1
20050010871 Ruthfield et al. Jan 2005 A1
20050021687 Anastassopoulos et al. Jan 2005 A1
20050050229 Comeau et al. Mar 2005 A1
20050138631 Bellotti et al. Jun 2005 A1
20050188046 Hickman et al. Aug 2005 A1
20050188313 Matthews et al. Aug 2005 A1
20050198578 Agrawala et al. Sep 2005 A1
20050240906 Kinderknecht et al. Oct 2005 A1
20060004874 Hutcheson et al. Jan 2006 A1
20060026006 Hindle Feb 2006 A1
20060031377 Ng et al. Feb 2006 A1
20060036770 Hosn et al. Feb 2006 A1
20060041673 Sturrock Feb 2006 A1
20060085835 Istvan et al. Apr 2006 A1
20060101397 Mercer et al. May 2006 A1
20060130069 Srinivasan et al. Jun 2006 A1
20060231175 Vondracek et al. Oct 2006 A1
20060236328 DeWitt Oct 2006 A1
20060258462 Cheng et al. Nov 2006 A1
20060265689 Kuznetsov et al. Nov 2006 A1
20060271563 Angelo et al. Nov 2006 A1
20060288171 Tsien Dec 2006 A1
20060294418 Fuchs Dec 2006 A1
20070024645 Purcell et al. Feb 2007 A1
20070047535 Varma Mar 2007 A1
20070067754 Chen et al. Mar 2007 A1
20070079244 Brugiolo Apr 2007 A1
20070112880 Yang et al. May 2007 A1
20070120763 De Paepe et al. May 2007 A1
20070130292 Tzruya et al. Jun 2007 A1
20070136677 Agarwal Jun 2007 A1
20070203944 Batra et al. Aug 2007 A1
20070208718 Javid et al. Sep 2007 A1
20070226636 Carpenter et al. Sep 2007 A1
20070244990 Wells Oct 2007 A1
20070245409 Harris Oct 2007 A1
20070256073 Truong et al. Nov 2007 A1
20070282951 Selimis et al. Dec 2007 A1
20080016155 Khalatian Jan 2008 A1
20080134211 Cui Jun 2008 A1
20080146194 Yang et al. Jun 2008 A1
20080183190 Adcox et al. Jul 2008 A1
20080313282 Warila et al. Dec 2008 A1
20090080523 McDowell Mar 2009 A1
20090089742 Nagulu et al. Apr 2009 A1
20090119644 de Vries et al. May 2009 A1
20090209239 Montesdeoca Aug 2009 A1
20090217177 DeGrazia Aug 2009 A1
20090044171 Avadhanula Dec 2009 A1
20100061238 Godbole et al. Mar 2010 A1
20100070588 Sinn Mar 2010 A1
20100077058 Messer Mar 2010 A1
20100131591 Thomas et al. May 2010 A1
20100150031 Allen et al. Jun 2010 A1
20100174773 Penner et al. Jul 2010 A1
20100205147 Lee Aug 2010 A1
20100223566 Holmes et al. Sep 2010 A1
20100268813 Pahlavan et al. Oct 2010 A1
20110138283 Marston Jun 2011 A1
20110157196 Nave et al. Jun 2011 A1
20110162062 Kumar et al. Jun 2011 A1
20110184993 Chawla et al. Jul 2011 A1
20110213830 Lopez et al. Sep 2011 A1
20110222442 Cole et al. Sep 2011 A1
20110252152 Sherry et al. Oct 2011 A1
20120030275 Boller et al. Feb 2012 A1
20120084713 Desai et al. Apr 2012 A1
20120090004 Jeong Apr 2012 A1
20120133675 McDowell May 2012 A1
20120154633 Rodriguez Jun 2012 A1
20120221792 de Vries et al. Aug 2012 A1
20120226742 Momchilov et al. Sep 2012 A1
20120245918 Overton et al. Sep 2012 A1
20120246225 Lemire et al. Sep 2012 A1
20120271875 Cai Oct 2012 A1
20120324032 Chan Dec 2012 A1
20120324358 Jooste Dec 2012 A1
20130007227 Hitomi et al. Jan 2013 A1
20130031618 Momchilov Jan 2013 A1
20130046815 Thomas et al. Feb 2013 A1
20130054679 Jooste Feb 2013 A1
20130070740 Yovin Mar 2013 A1
20130117474 Ajanovic et al. May 2013 A1
20130138791 Thomas et al. May 2013 A1
20130147845 Xie et al. Jun 2013 A1
20130159062 Stiehl Jun 2013 A1
20130179962 Arai et al. Jul 2013 A1
20130262566 Stephure et al. Oct 2013 A1
20140240524 Julia et al. Aug 2014 A1
20140241229 Bertorelle et al. Aug 2014 A1
20140298420 Barton et al. Oct 2014 A1
20150067769 Barton et al. Mar 2015 A1
20150156133 Leitch et al. Jun 2015 A1
20150319252 Momchilov et al. Nov 2015 A1
Foreign Referenced Citations (29)
Number Date Country
101539932 Sep 2009 CN
101883097 Nov 2010 CN
102129632 Jul 2011 CN
102821413 Dec 2012 CN
0349463 Jan 1990 EP
1422901 May 2004 EP
2007084744 Mar 1995 JP
2002055870 Feb 2002 JP
2004287758 Oct 2004 JP
2005031807 Feb 2005 JP
2005521946 Jul 2005 JP
2008099055 Apr 2008 JP
2010256972 Nov 2010 JP
2295752 Mar 2007 RU
2298287 Apr 2007 RU
2305860 Sep 2007 RU
1998058478 Dec 1998 WO
2001016724 Mar 2001 WO
2002009106 Jan 2002 WO
2003032569 Apr 2003 WO
2003083684 Oct 2003 WO
2010060206 Jun 2010 WO
2010088768 Aug 2010 WO
2010127327 Nov 2010 WO
2012127308 Sep 2012 WO
2013024343 Feb 2013 WO
2013109984 Jul 2013 WO
2013128284 Sep 2013 WO
2013153439 Oct 2013 WO
Non-Patent Literature Citations (29)
Entry
Clark, C., et al., “Live Migration of Virtual Machines,” Proceedings of the 2nd conference on Symposium on Networked Systems Design & Implementation (NSDI'05), vol. 2, 2005, pp. 273-286.
Coffman, Daniel, et al., “A Client-Server Architecture for State-Dependent Dynamic Visualizations on the Web,” IBM T.J. Watson Research Center, 2010, 10 pages.
Fraser, N., “Differential Synchronization,” Google, Mountain View, CA, Jan. 2009, 8 pages.
Jourdain, Sebastien, et al., “ParaViewWeb: A Web Framework for 3D Visualization and Data Processing,” International Journal of Computer Information Systems and Industrial Management Applications, vol. 3, 2011, pp. 870-877.
Kozuch, M., Internet Suspend/Resume, Proceedings of the Fourth IEEE Workshop on Mobile Computing Systems and Applications (WMCSA'02), 2002, 7 pages.
Microsoft Computer Dictionary, Microsoft Press, 5th Edition, Mar. 15, 2002, p. 624.
Mitchell, J. Ross, et al., A Smartphone Client-Server Teleradiology System for Primary Diagnosis of Acute Stroke, Journal of Medical Internet Research, vol. 13, Issue 2, 2011, 12 pages.
ParaViewWeb, KitwarePublic, retrieved on Jan. 27, 2014 from http://www.paraview.org/Wiki/ParaViewWeb, 1 page.
Remote Desktop Protocol (RDP), retrieved on May 4, 2014 from http://en.wikipedia.org/wiki/Remote_Desktop_Protocol, 7 pages.
Remote Desktop Services (RDS), Remote App, retrieved on May 4, 2014 from http://en.wikipedia.org/wiki/Remote_Desktop_Services, 9 pages.
Remote Desktop Services (RDS), Windows Desktop Sharing, retrieved on May 4, 2014 from http://en.wikipedia.org/wiki/Remote_Desktop_Services, 9 pages.
Search Report, dated Dec. 30, 2016, in connection with CN Application No. 2012800398130. (and English Translation).
International Search Report and Written Opinion, dated Jun. 30, 2016, in connection International Patent Application No. PCT/IB2016/000277.
International Preliminary Report on Patentability and Written Opinion, dated May 31, 2016, in connection International Patent Application No. PCT/US2014/064243.
Search Report and Written Opinion, dated Nov. 16, 2015, in connection with SG Application No. 2013087150.
International Search Report and Written Opinion, dated Mar. 19, 2015, in connection with International Application No. PCT/US2014/064243.
Extended European Search Report, dated Mar. 3, 2015, in connection with related European Application No. 09828497.9.
Extended European Search Report, dated Jun. 2, 2014, in connection with European Application No. 12760310.8.
International Search Report, dated Feb. 19, 2010, in connection with International Application No. PCT/CA2009/001704.
International Preliminary Report on Patentability and Written Opinion, dated May 31, 2011, in connection with International Application No. PCT/CA2009/001704.
International Search Report, dated May 12, 2010, in connection with International Application No. PCT/CA2010/000154.
International Preliminary Report on Patentability and Written Opinion, dated Aug. 9, 2011, in connection with International Application No. PCT/CA2010/000154.
International Search Report and Written Opinion, dated Jul. 31, 2012, in connection with International Application No. PCT/IB2012/000562.
International Search Report, dated Dec. 20, 2012, in connection with International Application No. PCT/IB2012/001589.
International Preliminary Report on Patentability and Written Opinion, dated Feb. 18, 2014, in connection with International Application No. PCT/IB2012/001589.
International Search Report, dated Dec. 28, 2012, in connection with International Application No. PCT/IB2012/001590.
International Preliminary Report on Patentability and Written Opinion, dated Feb. 18, 2014, in connection with International Application No. PCT/IB2012/001590.
International Search Report and Written Opinion, dated Aug. 21, 2013, in connection with International Application No. PCT/IB2013/000676.
International Search Report and Written Opinion, dated Jul. 31, 2013, in connection with International Application No. PCT/IB2013/000720.
Related Publications (1)
Number Date Country
20170357541 A1 Dec 2017 US
Provisional Applications (2)
Number Date Country
61523662 Aug 2011 US
61523670 Aug 2011 US
Divisions (1)
Number Date Country
Parent 13586085 Aug 2012 US
Child 15632539 US