METHOD AND SYSTEM FOR DYNAMIC CLIENT/SERVER NETWORK MANAGEMENT USING PROXY SERVERS

Information

  • Patent Application
  • Publication Number
    20090049173
  • Date Filed
    August 16, 2007
  • Date Published
    February 19, 2009
Abstract
The invention discloses a programming method and system for dynamic client/server network management using proxy servers, by allowing each active proxy server in an arrayed cluster to maintain an updated list of all other operating proxy servers in the cluster. When a client message requesting access to an application server is received by a clustered proxy server, the message may be forwarded to another proxy server (within the cluster) so that message (re)transmissions can pass through the same proxy server as the original message, allowing a proxy server to make consistent routing decisions (and other decisions) pertaining to that message.
Description
TECHNICAL FIELD

The invention relates to computer programming using proxy servers for client/server network management.


BACKGROUND

The client/server model of distributed computing operates to fulfill user needs by splitting functions between “client” tasks and “server” tasks performed by various computer hardware and software resources that are organized into a “network” for communication with each other, such as a local area network (“LAN”) or a wide area network (“WAN”) or the Internet. Using this model, a “client” program sends message requests to a “server” program in order to obtain data and/or processing action according to some communication “protocol” (i.e., a set of standard rules that determine how information is transmitted across a network) and the server completes the processing transaction by carrying out the request or deferring it to another time or by indicating that it cannot be fulfilled. This model allows clients and servers to be located (and to operate) independently of each other in a computer network, often using different hardware and operating systems appropriate to the function of each.


A “proxy server” (or gateway) is often used in handling client requests for transactions to be completed by other network “application servers,” which are capable of performing the data processing actions required for the transaction but are not accessed directly by the client. If a processing transaction is not successfully completed upon initial transmission of a message, the client may send retransmissions of the message to an application server using an “arrayed cluster” (or group) of proxy servers. In that case, the cluster of proxy servers must route the retransmission(s) to the same application server as the original transmission, so that a retransmission is handled identically to the original transmission as required by the standards of RFC 3261 (the Session Initiation Protocol specification). Current proxy server technology (such as that used with IBM WebSphere®) provides a partial solution to this problem by addressing (i.e., “hashing”) the message to an array of application servers. This solution works as long as the number (and the relative processing load or “weight”) of the clustered application servers does not change. However, a retransmitted proxy message may be routed to a different application server than originally intended if a server starts (or stops) functioning between retransmissions of a message (or if the “weight” of one of the servers changes).


SUMMARY OF THE INVENTION

The invention provides for dynamic client/server network management using proxy servers. Specifically, a programming method and system is used for allowing each active proxy server in an arrayed cluster to maintain an updated list of all other operating proxy servers in the cluster (referred to as a “ProxyClusterArray”). When a client message (requesting access to a networked application server) is received by a clustered proxy server, the message may be forwarded to another proxy server (within the cluster) so that message (re)transmissions can pass through the same proxy server as the original message, allowing a proxy server to make consistent routing (and other) decisions pertaining to that message.


When a proxy server receives a message from a user client requesting access to an application server in order to carry out a processing transaction, the proxy server “hashes” the message (by addressing it) to a “destination” proxy server. If the destination proxy server is not the one that initially received the message, the message is forwarded to that destination proxy server, which (locally) maintains processing “state” information for a sufficient period of time to determine whether the message is a retransmission (and, if so, the destination proxy server can make the same processing decisions as were made for the original message). For example, the destination proxy server can identify (or “remember”) the application server to which the original message was addressed (or “routed”) in order to route retransmitted message(s) to the same application server. Each proxy server maintains such status information to indicate the processing decisions made for all messages it has handled (including identification of the application server to which a message has been routed) for a designated (i.e., maximum possible) interval of time between message retransmissions (referred to as MAX_STATE), which is adjusted to account for potential network transmission delays. This approach allows a proxy server to make consistent decisions for message (re)transmissions in a dynamically changing client/server network processing environment.
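The hash-to-destination and state-retention behavior described above can be sketched as follows. This is an illustrative Python sketch under stated assumptions, not the patented implementation: the hash function, the MAX_STATE value, and the names RouteState and destination_proxy are all hypothetical.

```python
import hashlib
import time

# Assumed maximum interval (seconds) between message retransmissions,
# corresponding to the MAX_STATE interval described above.
MAX_STATE = 32.0

class RouteState:
    """State a proxy keeps so a retransmission reuses the original routing decision."""
    def __init__(self, app_server):
        self.app_server = app_server   # application server chosen for the original message
        self.created = time.time()     # used to discard state after MAX_STATE elapses

    def expired(self):
        return time.time() - self.created > MAX_STATE

def destination_proxy(call_id, proxy_cluster_array):
    """Hash the message call ID and take it modulo the cluster size, so that
    every (re)transmission of the same call ID maps to the same proxy server."""
    h = int(hashlib.md5(call_id.encode()).hexdigest(), 16)
    return proxy_cluster_array[h % len(proxy_cluster_array)]
```

Because the mapping depends only on the call ID and the array contents, any proxy in the cluster can compute the same destination for a given message.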


The present invention provides for dynamic client/server network management using proxy servers, by allowing each active proxy server in an arrayed cluster to maintain an updated list of all other operating proxy servers in the cluster.


The present invention provides a method and system that allows a user client to send a message requesting access to an application server via a clustered proxy server, in which a hash of the message identifies the proxy server maintaining state information for that message.


The subject matter which is regarded as the invention is particularly pointed out and distinctly claimed in the concluding portion of the specification. The invention, however, together with further objects and advantages thereof, may best be understood by reference to the following description taken in conjunction with the accompanying drawings.





BRIEF DESCRIPTION OF THE DRAWINGS


FIG. 1 illustrates a diagram outlining operation of a client/server network according to the invention.



FIG. 2 illustrates a flowchart outlining operation of a client/server network according to the invention.





DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS


FIG. 1 illustrates a diagram outlining the preferred operation of a client/server computer network utilizing SIP/UDP (Session Initiation Protocol over User Datagram Protocol) as the transmission protocol, so that the illustrated connection topology can be used for transmitting proxied messages (containing data and/or instructions for a transaction to be processed) between a client and an application server located on the network; i.e., where a user client 1/2/3 sends a message to one of a cluster of proxy servers 11/22/33 that route the message to one of a cluster of application servers 111/222/333. In the example shown in FIG. 1: (a) message M1 is directly received from client 1 by proxy server 22; and (b) message RM1 (a retransmission of M1) is received by proxy server 11 and forwarded to proxy server 22; in either case the message is then routed (i.e., load balanced) to application server 111.


As shown in the flowchart of FIG. 2 at step (i), when a (dialog-initiating) client message M1/RM1 arrives at a proxy server 11 using SIP/UDP (or a similar) transmission protocol, the <message call ID> is preferably “hashed” and “modulized” (i.e., the remainder of dividing the hash value by the length of an array of choices is used) with each of the active ProxyClusterArray instances to determine the potential list of proxy servers 22 and/or 33 (each referred to as a “PotentialProxy”) that may have processing state information relating to message(s) with that call ID, as shown in FIG. 2 step (ii). If the currently active proxy server 11 is in the list of PotentialProxies, its local storage cache is checked for processing state information related to the message to determine if it is being retransmitted, and (if found) the message is processed locally by forwarding it directly to the application server 111 to which the original message was routed. Otherwise, the retransmitted message RM1 is forwarded to a PotentialProxy server that has not yet been queried (as determined from a list of “already visited” PotentialProxies kept in a private message header) until the proxy server 22 responsible for originally processing (and thus maintaining) the state information for that message is located and used for (re)routing the message to the application server 111 to which the original message M1 was routed (whereupon the private header is removed), as shown in FIG. 2 step (iii). If no state information for the message is held by any of the proxy servers in the list of PotentialProxies, the currently active proxy server 11 (as determined by the latest ProxyClusterArray) creates state information and directly processes the message using the call ID (as described previously and as shown in FIG. 2 step (iv)). This creates the unlikely possibility that a message may pass through the same proxy server twice.
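Steps (ii) through (iv) above can be sketched in Python as follows. This is an assumed illustration, not the patent's code: the function names, the md5-based hash, and the placeholder load balancer pick_app_server are all hypothetical, and real proxies would carry the “already visited” list in a private message header rather than a function argument.

```python
import hashlib

def potential_proxies(call_id, active_arrays):
    """Step (ii): hash-and-mod the call ID against each active ProxyClusterArray
    to collect every proxy that may hold state for this call ID."""
    h = int(hashlib.md5(call_id.encode()).hexdigest(), 16)
    seen, result = set(), []
    for array in active_arrays:
        proxy = array[h % len(array)]
        if proxy not in seen:
            seen.add(proxy)
            result.append(proxy)
    return result

def pick_app_server(call_id):
    """Placeholder load-balancing decision (assumption, not from the patent)."""
    servers = ["app111", "app222", "app333"]
    return servers[int(hashlib.md5(call_id.encode()).hexdigest(), 16) % len(servers)]

def route(self_id, call_id, active_arrays, state_cache, visited):
    """Steps (ii)-(iv): route locally if this proxy holds state for the call ID,
    otherwise forward to an unvisited PotentialProxy, otherwise create new state."""
    candidates = potential_proxies(call_id, active_arrays)
    if self_id in candidates and call_id in state_cache:
        # Retransmission: reuse the routing decision made for the original message.
        return ("route_to", state_cache[call_id])
    for proxy in candidates:
        if proxy != self_id and proxy not in visited:
            # Query the next PotentialProxy that has not yet been visited.
            return ("forward_to", proxy)
    # No proxy holds state: create it here and process the message directly.
    app = pick_app_server(call_id)
    state_cache[call_id] = app
    return ("route_to", app)
```

In the common case of a single active array, the candidate list has one entry, so a message is either processed at its destination proxy or forwarded there exactly once.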


Whenever a new proxy server 33 is activated (or deactivated), a new ProxyClusterArray is activated (and becomes the “latest”), and the (previously latest) array is then considered to “expire” (after the next MAX_STATE interval). Thus a message may be forwarded more than once between proxy servers for examination to determine if it is being retransmitted. However, in the normal case there is only a single active ProxyClusterArray to consider, and (N−1)/N (where N is the number of proxy servers) of the client messages will be forwarded once to another proxy server. In a preferred example of use of the invention with WebSphere, an “HAGroup” (High Availability Group) processing mechanism (based on “Virtual Synchrony” technology) is used to track the list(s) of active and expiring ProxyClusterArrays. When activated, each proxy server joins an HAGroup that corresponds to the cluster of which it is a member, so that Virtual Synchrony can ensure that each active proxy server is provided with the same updated list(s) of ProxyClusterArrays. If a proxy server purposefully stops its processing activity, it broadcasts its state information to the other proxy servers listed in the (currently active) ProxyClusterArray; however, if a proxy server terminates operation abnormally, the state information it possesses is lost. (It is possible to handle such a condition by replicating this state information, although this entails a processing performance cost.)
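The ProxyClusterArray lifecycle described above can be sketched as follows. This is a minimal sketch under assumed names (ProxyClusterArrays, membership_changed, and a fixed MAX_STATE value); the patent's preferred embodiment distributes these lists via WebSphere HAGroups rather than a local object.

```python
import time

# Assumed maximum retransmission interval (seconds); see MAX_STATE above.
MAX_STATE = 32.0

class ProxyClusterArrays:
    """Tracks the latest ProxyClusterArray plus older arrays that are expiring."""
    def __init__(self, members):
        # Each entry is (array, expiry_time); expiry_time is None for the latest.
        self.arrays = [(list(members), None)]

    def membership_changed(self, members, now=None):
        """A proxy was activated or deactivated: the new array becomes the
        latest, and the previously-latest array expires after MAX_STATE."""
        now = time.time() if now is None else now
        arr, _ = self.arrays[-1]
        self.arrays[-1] = (arr, now + MAX_STATE)
        self.arrays.append((list(members), None))

    def active(self, now=None):
        """Return the arrays still consulted when computing PotentialProxies,
        dropping any whose expiry interval has elapsed."""
        now = time.time() if now is None else now
        self.arrays = [(a, exp) for a, exp in self.arrays if exp is None or exp > now]
        return [a for a, _ in self.arrays]
```

During the MAX_STATE window after a membership change, both the old and new arrays are active, which is why a message may be forwarded to more than one PotentialProxy in that interval.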


While certain preferred features of the invention have been shown by way of illustration, many modifications and changes can be made that fall within the true spirit of the invention as embodied in the following claims, which are to be interpreted as broadly as the law permits to cover the full scope of the invention, including all equivalents thereto.

Claims
  • 1. A computer system comprised of at least the following components: (a) two or more proxy servers configured in an arrayed cluster for routing a transmitted message over a network from a client to an application server selected by the proxy server cluster; wherein each active proxy server maintains an updated list of all other active proxy servers in the proxy cluster array to allow a message to be (re)transmitted across the proxy cluster array for delivery to the correct application server.
  • 2. The computer system of claim 1 wherein a routing proxy server temporarily stores processing state information relating to a routed message.
  • 3. The computer system of claim 2 wherein the message is hashed to determine the proxy server(s) storing temporary state information for the message.
  • 4. The computer system of claim 2 wherein a (re)transmitted message is forwarded to one or more proxy servers in the cluster to identify a proxy server holding state information for the message.
  • 5. The computer system of claim 4 wherein a (re)transmitted message is routed to a proxy server holding state information for the message.
  • 6. The computer system of claim 2 wherein message state information is maintained for a designated interval of time between message retransmissions.
  • 7. The computer system of claim 6 wherein a proxy cluster array list is updated when a new proxy server is activated or deactivated and the previous array list expires during the next designated interval.
  • 8. The computer system of claim 2 wherein a deactivating proxy server broadcasts state information to the other active proxy servers in the proxy cluster array.
  • 9. The computer system of claim 1 wherein the network uses Session Initiation Protocol over User Datagram Protocol (SIP/UDP) for transmitting messages.
  • 10. A method of using a computer system comprised of at least the following steps carried out by the following components: (a) configuring two or more proxy servers in an arrayed cluster for routing a transmitted message over a network from a client to an application server selected by the proxy server cluster; wherein each active proxy server maintains an updated list of all other active proxy servers in the proxy cluster array to allow a message to be (re)transmitted across the proxy cluster array for delivery to the correct application server.
  • 11. The method of claim 10 wherein a routing proxy server temporarily stores processing state information relating to a routed message.
  • 12. The method of claim 11 wherein the message is hashed to determine the proxy server(s) storing temporary state information for the message.
  • 13. The method of claim 11 wherein a (re)transmitted message is forwarded to one or more proxy servers in the cluster to identify a proxy server holding state information for the message.
  • 14. The method of claim 13 wherein a (re)transmitted message is routed to a proxy server holding state information for the message.
  • 15. The method of claim 11 wherein message state information is maintained for a designated interval of time between message retransmissions.
  • 16. The method of claim 15 wherein a proxy cluster array list is updated when a new proxy server is activated or deactivated and the previous array list expires during the next designated interval.
  • 17. The method of claim 11 wherein a deactivating proxy server broadcasts state information to the other active proxy servers in the proxy cluster array.
  • 18. The method of claim 10 wherein the network uses Session Initiation Protocol over User Datagram Protocol (SIP/UDP) for transmitting messages.