System and method for a pluggable protocol handler

Information

  • Patent Grant
  • Patent Number
    7,966,412
  • Date Filed
    Tuesday, July 19, 2005
  • Date Issued
    Tuesday, June 21, 2011
Abstract
Embodiments of the invention are generally directed to a system and method for pluggable protocol handlers to route message traffic between communication partners. In an embodiment, a protocol independent connection manager receives a message from a communication partner over a network connection. The connection manager calls a dynamically loadable protocol handler to process and route the received message to a subsequent communication partner. In one embodiment, the network connection is a multiplexed network connection.
Description
FIELD OF INVENTION

The field of invention pertains generally to the software arts and, more specifically, to a system and method for a pluggable protocol handler.


BACKGROUND

Even though standards-based application software (e.g., Java™ based application software) has the potential to offer true competition at the software supplier level, legacy proprietary software has proven reliability, functionality and integration into customer information systems (IS) infrastructures. Customers are therefore placing operational dependency on standards-based software technologies with caution. Not surprisingly, present day application software servers tend to include instances of both standard and proprietary software suites, and, often, “problems” emerge in the operation of the newer standards-based software, or interoperation and integration of the same with legacy software applications.


The prior art application server 100 depicted in FIG. 1 provides a good example. FIG. 1 shows a prior art application server 100 having both an Advanced Business Application Programming™ (ABAP) legacy/proprietary software suite 103 and a Java 2 Enterprise Edition™ (J2EE) standards-based software suite 104. A connection manager 102 routes requests (e.g., HyperText Transfer Protocol (HTTP) requests and HTTP with secure socket layer (HTTPS) requests) associated with “sessions” between server 100 and numerous clients (not shown in FIG. 1) conducted over a network 101. A “session” can be viewed as the back and forth communication over a network 101 between computing systems (e.g., a particular client and the server).


The back and forth communication typically involves a client (“client”) sending a server 100 (“server”) a “request” that the server 100 interprets into some action to be performed by the server 100. The server 100 then performs the action and if appropriate returns a “response” to the client (e.g., a result of the action). Often, a session will involve multiple, perhaps many, requests and responses. A single session through its multiple requests may invoke different application software programs.


For each client request that is received by the application server's connection manager 102, the connection manager 102 decides to which software suite 103, 104 the request is to be forwarded. If the request is to be forwarded to the proprietary software suite 103, notification of the request is sent to a proprietary dispatcher 105 and the request itself is forwarded into a request/response shared memory 106. The proprietary dispatcher 105 acts as a load balancer that decides which one of multiple proprietary worker nodes 107-1 through 107-L is to actually handle the request.


A worker node is a focal point for the performance of work. In the context of an application server that responds to client-server session requests, a worker node is a focal point for executing application software and/or issuing application software code for downloading to the client. The term “working process” generally means an operating system (OS) process that is used for the performance of work and is also understood to be a type of worker node. For convenience, the term “worker node” is used throughout the present discussion.


When the dispatcher 105 identifies a particular proprietary worker node for handling the aforementioned request, the request is transferred from the request/response shared memory 106 to the identified worker node. The identified worker node processes the request and writes the response to the request into the request/response shared memory 106. The response is then transferred from the request/response shared memory 106 to the connection manager 102. The connection manager 102 sends the response to the client via network 101.


Note that the request/response shared memory 106 is a memory resource that each of worker nodes 107-1 through 107-L has access to (as such, it is a "shared" memory resource). For any request written into the request/response shared memory 106 by the connection manager 102, the same request can be retrieved by any of worker nodes 107-1 through 107-L. Likewise, any of worker nodes 107-1 through 107-L can write a response into the request/response shared memory 106 that can later be retrieved by the connection manager 102. Thus, the request/response shared memory 106 provides for the efficient transfer of request/response data between the connection manager 102 and the multiple proprietary worker nodes 107-1 through 107-L.


If the request is to be forwarded to the standards-based software suite 104, notification of the request is sent to the dispatcher 108 that is associated with the standards-based software suite 104. As observed in FIG. 1, the standards-based software suite 104 is a Java based software suite (in particular, a J2EE suite) that includes multiple worker nodes 109-1 through 109-N.


A Java Virtual Machine is associated with each worker node for executing the worker node's abstract application software code. For each request, dispatcher 108 decides which one of the N worker nodes is best able to handle the request (e.g., through a load balancing algorithm). Because no shared memory structure exists within the standards-based software suite 104 for transferring client session information between the connection manager 102 and the worker nodes 109-1 through 109-N, separate internal connections have to be established from the connection manager 102 to the dispatcher 108, for each worker node, to send both notification of the request and the request itself. The dispatcher 108 then forwards each request to its proper worker node.


Various problems exist with respect to the prior art application server 100 of FIG. 1. For example, the establishment of connections between the connection manager and the J2EE dispatcher to process a client session adds overhead/inefficiency within the standards-based software suite 104. In particular, establishing connections between the connection manager and the J2EE dispatcher typically includes copying data to and from the network stack. In addition, the J2EE dispatcher opens a separate select thread for each connection. Opening a number of separate threads can add overhead to the system because each thread uses resources such as memory. This increased overhead reduces the efficiency and scalability of application server 100.


SUMMARY

Embodiments of the invention are generally directed to a system and method for pluggable protocol handlers to route message traffic between communication partners. In an embodiment, a protocol independent connection manager receives a message from a communication partner over a network connection. In another embodiment messages are received from a communication partner via a connection oriented shared memory region. The connection manager calls a dynamically loadable protocol handler to process and route the received message to a subsequent communication partner. In one embodiment, the network connection is a multiplexed network connection.





FIGURES

The present invention is illustrated by way of example and not limitation in the figures of the accompanying drawings, in which like references indicate similar elements and in which:



FIG. 1 shows a prior art application server;



FIG. 2 shows an improved application server;



FIG. 3 is a high-level block diagram of an application server illustrating the use of pluggable protocol handlers according to an embodiment of the invention;



FIG. 4 shows selected portions of one example of an IIOP protocol header;



FIG. 5 shows selected portions of one example of a P4 protocol header;



FIG. 6 is a block diagram illustrating selected aspects of an embodiment in which internal communication partners use a shared memory to communicate with a connection manager;



FIG. 7 is a conceptual diagram illustrating a pluggable protocol handler implemented according to an embodiment of the invention;



FIG. 8 is a flow diagram illustrating selected aspects of a pluggable protocol handler according to an embodiment of the invention; and



FIG. 9 shows a depiction of a computing system.





DETAILED DESCRIPTION
Overview


FIG. 2 shows the architecture of an improved application server in accordance with embodiments of the invention.


Comparing FIGS. 1 and 2, first, note that the role of the connection manager 202 has been enhanced to at least perform dispatching 208 for the standards-based software suite 204 (so as to remove the additional connection overhead associated with the prior art system's standards-based software suite dispatching procedures).


Also, the connection manager is protocol independent. A protocol handler can be plugged into the connection manager to support any one of a number of protocols by which a request can be conveyed to the connection manager. For example, handlers for protocols such as the hypertext transfer protocol (HTTP), secure HTTP (HTTPS), the simple mail transfer protocol (SMTP), the network news transfer protocol (NNTP), the TELNET protocol, the P4 protocol of SAP AG, and the Internet Inter-Object Request Broker Protocol (IIOP) may be provided at the connection manager so that it can receive a request conveyed from a client in accordance with any of these protocols. The advantages of pluggable protocol handlers include easy extensibility of the connection manager with additional protocols, and small, independent software components that can be developed, tested, maintained, and replaced independently; in the case of an error in one protocol handler, for example, only that handler needs to be replaced, not the complete program.
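
Purely as an illustrative sketch (not the patented implementation), the pluggable arrangement can be pictured as a connection manager that keeps a registry of handlers keyed by protocol name and delegates all protocol-specific work to whichever handler is currently plugged in; the ProtocolHandler interface and ConnectionManager class below are names introduced here for illustration only.

    import java.nio.ByteBuffer;
    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    // Hypothetical handler contract; each protocol (HTTP, IIOP, P4, ...) would supply one implementation.
    interface ProtocolHandler {
        String protocolName();                  // e.g. "HTTP", "IIOP", "P4"
        void onMessage(ByteBuffer message);     // process and route one received message
    }

    // Sketch of a protocol independent connection manager: it only keeps a registry of handlers
    // and delegates all protocol-specific work to whichever handler is plugged in.
    class ConnectionManager {
        private final Map<String, ProtocolHandler> handlers = new ConcurrentHashMap<>();

        void plugIn(ProtocolHandler handler) { handlers.put(handler.protocolName(), handler); }
        void unplug(String protocolName)     { handlers.remove(protocolName); }  // swap out a faulty handler

        void dispatch(String protocolName, ByteBuffer message) {
            ProtocolHandler h = handlers.get(protocolName);
            if (h == null) throw new IllegalStateException("no handler plugged in for " + protocolName);
            h.onMessage(message);
        }
    }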


In addition, in one embodiment, the role of a shared memory has been expanded to at least include: a) a first shared memory region 250 that supports request/response data transfers not only for the proprietary suite 203 but also for the standards-based software suite 204; b) a second shared memory region 260 that stores session objects having "low level" session state information (e.g., information that pertains to a request's substantive response such as the identity of a specific servlet invoked through a particular web page); and, c) a third shared memory region 270 that stores "high level" session state information (e.g., information that pertains to the flow management of a request/response pair within the application server, such as the number of outstanding active requests for a session).


Regarding the request notification queues 212 Q1 through QM, one queue has been implemented for each of the worker nodes 209-1 through 209-M within the standards-based software suite 204. In an embodiment, the shared memory structures 250, 260, 270 and request notification queues 212 help implement a fast session fail over protection mechanism in which a session that is assigned to a first worker node can be readily transferred to a second worker node upon the failure of the first worker node.


Shared memory is memory whose stored content can be reached by multiple worker nodes. Here, the contents of the shared memory region 250 can be reached by each of the worker nodes 207 and 209. Additionally, the contents of shared memory regions 260 and 270 can be reached by each of worker nodes 209-1 through 209-M. In one embodiment, shared memory region 260 supports shared cache 206 that can be reached by each of worker nodes 209-1 through 209-M.


Different types of shared memory technologies may be utilized within the application server 200 and yet still be deemed as being a shared memory structure. For example, shared memory region 250 may be implemented within a “connection” oriented shared memory technology while shared memory region 260 may be implemented with a “shared closure” oriented shared memory technology.


The connection oriented request/response shared memory region 250 effectively implements a transport mechanism for request/response data between the connection manager and the worker nodes. That is, because the connection manager is communicatively coupled to the shared memory, and because the shared memory is accessible to each worker node, the request/response shared memory 250—at perhaps its broadest level of abstraction—is a mechanism for transporting request/response data between the connection manager and the applicable worker node(s) for normal operation of sessions (e.g., no worker node failure) as well as those sessions affected by a worker node crash.


Although the enhancements of the application server 200 of FIG. 2 have been directed to improving the reliability of a combined ABAP/J2EE application server, it is believed that architectural features and methodologies described in more detail further below can be more generally applied to various forms of computing systems that manage communicative sessions, whether or not such computing systems contain different types of application software suites, and whether any such application software suites are standards-based or proprietary. Moreover, it is believed that such architectural features and methodologies are generally applicable regardless of any particular type of shared memory technology employed.


In operation, the connection manager 202 forwards actual request data to the first shared memory region 250 (request/response shared memory 250) regardless of whether the request is to be processed by one of the proprietary worker nodes 207 or one of the standards-based worker nodes 209. Likewise, the connection manager 202 receives response data for a request from the request/response shared memory 250 whether a proprietary worker node or a standards-based worker node generates the response.


With the exception of having to share the request/response shared memory 250 with the worker nodes 209 of the standards-based software suite 204, the operation of the proprietary software suite 203 is essentially the same as that described in the background, in one embodiment of the invention. That is, the connection manager 202 forwards request notifications to the proprietary dispatcher 205 and forwards the actual requests to the request/response shared memory 250. The proprietary dispatcher 205 then identifies which one of the proprietary worker nodes 207 is to handle the request. The identified worker node subsequently retrieves the request from the request/response shared memory 250, processes the request and writes the response into the request/response shared memory 250. The response is then forwarded from the request/response shared memory 250 to the connection manager 202, which forwards the response to the client via network 201.


In an alternative embodiment, the ABAP dispatcher 205 is integrated into the connection manager, just as the J2EE dispatcher 208 is. Indeed, it is contemplated that a single dispatcher may encompass the functionality of both dispatchers 205 and 208. In the case where the dispatcher 205 is integrated into the connection manager 202, the connection manager identifies which one of the proprietary worker nodes 207 is to handle a request and, via its integrated dispatcher capabilities, forwards the request to the request/response shared memory 250. The identified worker node subsequently retrieves the request from the request/response shared memory 250, processes the request and writes the response into the request/response shared memory 250. The response is then forwarded from the request/response shared memory 250 to the connection manager 202, which forwards the response to the client via network 201.


Pluggable Protocol Handlers


FIG. 3 is a high-level block diagram of application server 300 illustrating the use of pluggable protocol handlers according to an embodiment of the invention. Application server 300 includes connection manager 310 and worker nodes 340-1 through 340-M. Connection manager 310 exchanges messages (e.g., messages 324-326) with standards-based protocol client 320 and proprietary protocol client 330 over network connections 322 and 332, respectively.


In one embodiment, network connections 322 and 332 are multiplexed network connections. A "multiplexed network connection" refers to a network connection in which the messages from more than one client are multiplexed over the same network connection. Since the connections are multiplexed, there is no need to set up separate network connections for each client, in an embodiment of the invention. This reduces the network connection overhead for connection manager 310.


Connection manager 310 receives messages (e.g., 324 and 334) from one or more clients (clients 320 and 330) on a network connection endpoint (e.g., network connection endpoints 312 and 314). Network connection endpoints 312-314 (and also 316) are identifiable endpoints for a network connection such as the combination of a network layer address (e.g., an Internet Protocol address) and port number.


In one embodiment, connection manager 310 is protocol independent. In such an embodiment, connection manager 310 accesses one or more dynamically pluggable protocol handlers 360-362 through, for example, an Application Programming Interface (API) (e.g., APIs 364-368) to process the received messages. The term “dynamically” refers to pluggable protocol handlers that can be loaded (and removed) at runtime.
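
As a hedged illustration of the "dynamically loadable" aspect, a handler implementation might be located and instantiated at runtime by name, for example via reflection; the HandlerLoader class, the loadHandler method, and the package/class name in the usage comment are assumptions introduced here, and ProtocolHandler refers to the illustrative interface sketched earlier.

    // Hypothetical runtime loading of a protocol handler plug-in by fully qualified class name.
    class HandlerLoader {
        static ProtocolHandler loadHandler(String className) throws Exception {
            Class<?> clazz = Class.forName(className);   // resolve the plug-in class at runtime
            return (ProtocolHandler) clazz.getDeclaredConstructor().newInstance();
        }
    }

    // Usage sketch: plug an IIOP handler into a running connection manager without restarting it.
    // connectionManager.plugIn(HandlerLoader.loadHandler("com.example.handlers.IiopProtocolHandler"));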


Pluggable protocol handlers 360-362 process the received messages 324 and 334 in accordance with the appropriate protocol to determine an appropriate communication partner. Determining an appropriate communication partner for a received message typically includes selecting one of worker nodes 340-1 through 340-M to process the message. Selecting the appropriate worker node can be based on a number of factors including session management information and/or load distribution schemes. The pluggable protocol handler forwards the received message (e.g., message 342) to the selected worker node via, for example, network connections 348-1 through 348-M or connection oriented shared memory.
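
The selection step might look roughly like the following sketch, which keeps an existing session on its assigned worker node and otherwise falls back to a simple round-robin load distribution; the class name, field names, and the round-robin policy are illustrative assumptions rather than the patented scheme.

    import java.util.List;
    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;
    import java.util.concurrent.atomic.AtomicInteger;

    // Illustrative worker node selection: session stickiness first, round-robin otherwise.
    class WorkerNodeSelector {
        private final List<String> workerNodes;   // e.g. identifiers for worker nodes 340-1 through 340-M
        private final Map<String, String> sessionAssignments = new ConcurrentHashMap<>();
        private final AtomicInteger next = new AtomicInteger();

        WorkerNodeSelector(List<String> workerNodes) { this.workerNodes = workerNodes; }

        String select(String sessionId) {
            if (sessionId != null && sessionAssignments.containsKey(sessionId)) {
                return sessionAssignments.get(sessionId);   // keep the session on its assigned node
            }
            String node = workerNodes.get(Math.floorMod(next.getAndIncrement(), workerNodes.size()));
            if (sessionId != null) sessionAssignments.put(sessionId, node);
            return node;
        }
    }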


After processing the message 342, the worker node may provide a response message 344. Connection manager 310 receives message 344 on network connection endpoint 316. If a session communication protocol is being used within the server, then connection manager 310 first processes the session information and then calls an appropriate pluggable communication protocol handler 360-362. The pluggable communication protocol handler 360-362 selects the appropriate client (e.g., standards-based protocol client 320 or proprietary protocol client 330) and forwards the message to the selected client.


As shown in FIG. 3, connection manager 310 calls pluggable protocol handlers 360-362 to process messages from communication partners that are both external to application server 300 (e.g., clients 320-330) and internal to application server 300 (e.g., worker nodes 340-1 through 340-M). For ease of reference, the term "internal communication partners" refers to entities that are within the same application server (or cluster) as connection manager 310. Similarly, the term "external communication partners" refers to entities that are external to the application server in which connection manager 310 resides.


The messages sent by (and to) external communication partners (e.g., clients 320-330) are formatted according to either a standards-based protocol or a proprietary protocol. These messages typically have a protocol header and message data. For example, message 324 includes protocol header 370 and data 372. In an embodiment, pluggable communication protocol handlers 360-362 determine an appropriate communication partner for a message based, at least in part, on the protocol header (e.g., protocol header 370).


An example of a standards-based protocol used in an embodiment of the invention is IIOP. FIG. 4 shows selected portions of one example of an IIOP protocol header 400. IIOP protocol header 400 includes protocol magic 410, Global Inter-ORB Protocol (GIOP) version 420, flag 430, message type 440, and message size 450. Protocol magic 410 is a four-byte portion of the message header whose value is the four upper-case characters “GIOP.” GIOP version 420 is a two-byte element of the message header that specifies the version number of the GIOP protocol being used in the message. Flag 430 is an eight-bit octet to provide various flags for the message. Message type 440 specifies a message type for the message (e.g., request, reply, cancel request, locate request, locate reply, close connection, message error, fragment, etc.). Message size 450 specifies the size of the message that follows the message header.
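
By way of a hedged illustration, a handler might inspect such a header roughly as follows; the layout mirrors the fields described above (four-byte magic, two-byte version, one flag octet, one-byte message type, four-byte message size), and the GiopHeader class name is an assumption introduced here.

    import java.nio.ByteBuffer;
    import java.nio.ByteOrder;
    import java.nio.charset.StandardCharsets;

    // Illustrative parse of the IIOP/GIOP message header fields described above.
    final class GiopHeader {
        final int majorVersion, minorVersion, messageType;
        final long messageSize;

        GiopHeader(ByteBuffer buf) {
            byte[] magic = new byte[4];
            buf.get(magic);                                   // protocol magic, must be "GIOP"
            if (!"GIOP".equals(new String(magic, StandardCharsets.US_ASCII))) {
                throw new IllegalArgumentException("not a GIOP message");
            }
            majorVersion = buf.get() & 0xFF;                  // GIOP version, two bytes
            minorVersion = buf.get() & 0xFF;
            int flags = buf.get() & 0xFF;                     // flag octet; bit 0 carries the byte order
            buf.order((flags & 0x01) != 0 ? ByteOrder.LITTLE_ENDIAN : ByteOrder.BIG_ENDIAN);
            messageType = buf.get() & 0xFF;                   // request, reply, cancel request, ...
            messageSize = buf.getInt() & 0xFFFFFFFFL;         // size of the message body that follows
        }
    }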


An example of a proprietary protocol used in an embodiment of the invention is P4. FIG. 5 shows selected portions of one example of a P4 protocol header 500. P4 protocol header 500 includes version 510, size 520, server identifier 530, and broker identifier 540. Version 510 is a two-byte element of the message header that specifies the version of the P4 protocol for the message. Size 520 is a four-byte element that specifies the size of the message. Server identifier 530 is a four-byte element that specifies an identifier to uniquely identify a server, for example, within a cluster of servers. Broker identifier 540 is a four-byte field that can be used to identify a cluster in which the server resides.
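
A corresponding hedged sketch for the P4 header described above follows; the field order matches the description (two-byte version, four-byte size, four-byte server identifier, four-byte broker identifier), but the byte order and the P4Header class name are assumptions introduced for illustration rather than details taken from the P4 specification.

    import java.nio.ByteBuffer;

    // Illustrative parse of the P4 header fields described above.
    final class P4Header {
        static final int LENGTH = 14;    // 2 + 4 + 4 + 4 bytes
        final int version;
        final int size;
        final int serverId;
        final int brokerId;

        P4Header(ByteBuffer buf) {
            version  = buf.getShort() & 0xFFFF;   // version of the P4 protocol for the message
            size     = buf.getInt();              // size of the message
            serverId = buf.getInt();              // uniquely identifies a server within a cluster
            brokerId = buf.getInt();              // identifies the cluster in which the server resides
        }
    }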


Referring again to FIG. 3, in an embodiment, the communication between connection manager 310 and the internal communication partners is session based. In such an embodiment, session communication data 350 is prepended to messages between connection manager 310 and the internal communication partners. Session logic associated with the internal communication partners (346-1 through 346-M) and connection manager 310 (not shown) processes session communication data 350.



FIG. 6 is a block diagram illustrating selected aspects of an embodiment in which internal communication partners use a shared memory to communicate with a connection manager. Application server 600 includes connection manager 610, which is capable of exchanging messages with standards-based protocol client 620 and proprietary protocol client 626. Connection manager 610 uses pluggable protocol handlers to process these messages in substantially the same way as discussed above with reference to FIG. 3. Unlike the system shown in FIG. 3, however, connection manager 610 exchanges messages with worker nodes 640-1 through 640-M through shared memory 630.


Shared memory 630 is a memory resource that each of worker nodes 640-1 through 640-M can access to exchange messages and other information with connection manager 610. In the illustrated embodiment, shared memory 630 includes request/response shared memory 632 and request notification queues 634. Connection manager 610 writes request data into (and reads response data from) request/response shared memory 632. Any of worker nodes 640-1 through 640-M can retrieve the request data from request/response shared memory 632. Likewise, any of worker nodes 640-1 through 640-M can write a response into request/response shared memory 632 that can later be retrieved by connection manager 610.


Connection manager 610 uses request notification queues 634 to notify worker nodes 640-1 through 640-M that request data is available in request/response shared memory 632. In an embodiment, each of worker nodes 640-1 through 640-M has a separate request notification queue within request notification queues 634. In such an embodiment, connection manager 610 may store a handle in the appropriate queue of a worker node to notify the worker node that request data is available. The worker node can then retrieve the handle and use it to access request/response shared memory 632 and obtain the request data.
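
As an illustrative sketch only (the classes below are assumptions, not the actual shared memory implementation), the notification mechanism can be pictured as a per-worker-node queue of handles that index request data held in the shared region:

    import java.nio.ByteBuffer;
    import java.util.Map;
    import java.util.concurrent.BlockingQueue;
    import java.util.concurrent.ConcurrentHashMap;
    import java.util.concurrent.LinkedBlockingQueue;

    // Illustrative request notification: the connection manager stores request data under a handle
    // and pushes the handle onto the selected worker node's queue; the worker node pops the handle
    // and uses it to fetch the request data from the shared region.
    class RequestResponseMemory {
        private final Map<Long, ByteBuffer> slots = new ConcurrentHashMap<>();
        private long nextHandle = 0;

        synchronized long putRequest(ByteBuffer request) { slots.put(++nextHandle, request); return nextHandle; }
        ByteBuffer getRequest(long handle)               { return slots.remove(handle); }
    }

    class RequestNotificationQueue {
        private final BlockingQueue<Long> handles = new LinkedBlockingQueue<>();

        void notifyRequest(long handle)                 { handles.add(handle); }   // connection manager side
        long awaitRequest() throws InterruptedException { return handles.take(); } // worker node side
    }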



FIG. 7 is a conceptual diagram illustrating a pluggable protocol handler implemented according to an embodiment of the invention. In an embodiment, connection manager 702 uses multiplex select thread 704 to monitor a number of network connection endpoints (e.g., sockets such as Transmission Control Protocol/Internet Protocol sockets). A “multiplex select thread” refers to a function that determines the status of one or more sockets (and waits if necessary) to perform input/output. For each socket, a caller can request information on, for example, read or write status.
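
In Java, for example, comparable behavior could be sketched with the NIO Selector API; this is only an illustrative analogue of a multiplex select thread (the class name and the empty callback placeholders are assumptions), not the connection manager's actual code.

    import java.io.IOException;
    import java.net.InetSocketAddress;
    import java.nio.channels.SelectionKey;
    import java.nio.channels.Selector;
    import java.nio.channels.ServerSocketChannel;
    import java.util.Iterator;

    // Minimal select-style loop: one thread monitors many non-blocking sockets and
    // reports which of them are ready for accept/read/write.
    public class MultiplexSelectThread implements Runnable {
        private final Selector selector;

        public MultiplexSelectThread(int port) throws IOException {
            selector = Selector.open();
            ServerSocketChannel server = ServerSocketChannel.open();
            server.bind(new InetSocketAddress(port));
            server.configureBlocking(false);
            server.register(selector, SelectionKey.OP_ACCEPT);
        }

        @Override public void run() {
            try {
                while (true) {
                    selector.select();   // wait until at least one socket is ready
                    Iterator<SelectionKey> it = selector.selectedKeys().iterator();
                    while (it.hasNext()) {
                        SelectionKey key = it.next();
                        it.remove();
                        if (key.isAcceptable())    { /* call the handler's registered accept callback */ }
                        else if (key.isReadable()) { /* call the handler's registered read callback  */ }
                        else if (key.isWritable()) { /* call the handler's registered write callback */ }
                    }
                }
            } catch (IOException e) {
                throw new RuntimeException(e);
            }
        }
    }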


Over time, connection manager 702 calls pluggable protocol handler 706 to process messages implemented according to either a standards-based protocol (e.g., IIOP) or a proprietary protocol (e.g., P4). In an embodiment, pluggable protocol handler 706 (or, for ease of reference, handler 706) handles a connection oriented protocol on a multiplexed connection (e.g., multiplexed connections 322, 332, and 348, shown in FIG. 3). Handler 706 registers a number of callback functions at connection manager 702 to respond to various events including: new client connection available, connection to communication partner established, data available, write to connection possible, and connection closed. The registered function is called if the associated event is occurring on a connection. Table 1 illustrates selected functions for handler 706 according to an embodiment of the invention.












TABLE 1

Function   Event

Connect    Connect to a communication partner on a multiplexed connection.
Accept     Accept incoming connection from external or internal communication partner (e.g., 620, 626).
Read       Read from a multiplexed connection. In an embodiment network fragmentation is supported. That is, for each read operation, either a complete message or a portion of the message can be read.
Write      Write to a multiplexed connection. For each write operation, either a complete message or a portion of the message can be written.
AllocBuf   Allocate a buffer for writing on a multiplexed connection.
Close      Close a multiplexed connection.
GetInfo    Retrieve information about the connection.
FindConn   Determine whether there is already a connection on a specified network connection endpoint (e.g., as specified by a protocol, hostname, and port number).










In an embodiment, PlugInInit function 708 is called after handler 706 is loaded into connection manager 702. PlugInInit function 708 provides handler specific initialization for handler 706. Initialization may include providing version information of connection manager 702, supported protocol information, hostname, port number, and the like.
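
For illustration only, the callback registration pattern of Table 1 together with the PlugInInit step might look roughly like the following sketch; the PlugInInitInfo and PluggableHandlerCallbacks names, parameter types, and signatures are assumptions introduced here rather than the handler's actual API.

    import java.nio.ByteBuffer;
    import java.nio.channels.SocketChannel;

    // Illustrative initialization data handed to a handler when it is loaded (cf. PlugInInit 708).
    final class PlugInInitInfo {
        final String connectionManagerVersion;
        final String protocol;
        final String hostname;
        final int port;

        PlugInInitInfo(String connectionManagerVersion, String protocol, String hostname, int port) {
            this.connectionManagerVersion = connectionManagerVersion;
            this.protocol = protocol;
            this.hostname = hostname;
            this.port = port;
        }
    }

    // Illustrative callback set a handler registers with the connection manager / select thread,
    // mirroring the functions of Table 1 (Connect, Accept, Read, Write, AllocBuf, Close, GetInfo, FindConn).
    interface PluggableHandlerCallbacks {
        void plugInInit(PlugInInitInfo info);                       // handler-specific initialization after loading
        void connect(String host, int port);                        // connect to a communication partner
        void accept(SocketChannel connection);                      // accept an incoming connection
        void read(ByteBuffer fragment);                             // read a complete message or a fragment
        void write(ByteBuffer fragment);                            // write a complete message or a fragment
        ByteBuffer allocBuf(int size);                              // allocate a buffer for writing
        void close(SocketChannel connection);                       // close a multiplexed connection
        String getInfo(SocketChannel connection);                   // retrieve information about the connection
        boolean findConn(String protocol, String host, int port);   // is there already such a connection?
    }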


Handler 706 registers one or more input/output functions with multiplex select thread 704. Multiplex select thread 704 calls these functions if an associated event occurs on a network connection. In one embodiment, the functions registered for handler 706 include read 710, accept 711, write 712, and connect 714.


In an embodiment, read 710 allows handler 706 to read from a multiplexed connection. Reading from the multiplexed connection may include reading an entire message or only reading a fragment of the message. If only a fragment of the message is read, then read 710 may be repeatedly called until the entire message has been read. Pseudo-code listing 1 illustrates read data processing, according to an embodiment of the invention. Pseudo-code listing 1 is directed to read data processing for data implemented according to the P4 protocol (implementing a state machine). It is to be appreciated that similar read data processing may be used for other protocols (either standards-based protocols or proprietary protocols).












LISTING 1

state == read request:
    Read( ) -> buffer, buffer length
    if buffer length >= P4 header length (fixed size)
        process P4 header
        state = P4 body
    else
        store buffer locally
        store buffer length locally
        state = header fragmented
    end

state == header fragmented:
    Read( ) -> buffer, buffer length
    if buffer length + stored length >= P4 header length
        append buffer to stored buffer
        process P4 header
        state = P4 body
    else
        append buffer to stored buffer
        add buffer length to stored buffer length
    end

state == P4 body:
    Read( ) -> buffer, buffer length
    if buffer length >= P4 message length
        message is complete
        forward message to server node
        state = read request
    end
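
A hedged Java rendering of the same state machine follows; the fixed header length, the inline header parse, and the handling of leftover bytes are simplifying assumptions (body fragmentation is omitted, as in the listing), so this is a sketch rather than the handler's actual read logic.

    import java.nio.ByteBuffer;

    // Illustrative state machine for reading possibly fragmented P4 messages from a
    // multiplexed connection, following Listing 1.
    class P4ReadStateMachine {
        private enum State { READ_REQUEST, HEADER_FRAGMENTED, P4_BODY }

        private static final int P4_HEADER_LENGTH = 14;                 // assumed fixed header size
        private final ByteBuffer stored = ByteBuffer.allocate(64 * 1024);
        private State state = State.READ_REQUEST;
        private int messageLength;

        // Called by the read callback each time data arrives on the connection.
        void onRead(ByteBuffer buffer) {
            switch (state) {
                case READ_REQUEST:
                    if (buffer.remaining() >= P4_HEADER_LENGTH) {
                        messageLength = processHeader(buffer);          // process P4 header
                        state = State.P4_BODY;
                    } else {
                        stored.put(buffer);                             // store fragment locally
                        state = State.HEADER_FRAGMENTED;
                    }
                    break;
                case HEADER_FRAGMENTED:
                    stored.put(buffer);                                 // append fragment to stored buffer
                    if (stored.position() >= P4_HEADER_LENGTH) {
                        stored.flip();
                        messageLength = processHeader(stored);          // header is now complete
                        stored.clear();                                 // leftover body bytes dropped for brevity
                        state = State.P4_BODY;
                    }
                    break;
                case P4_BODY:
                    if (buffer.remaining() >= messageLength) {
                        forwardToWorkerNode(buffer);                    // message is complete
                        state = State.READ_REQUEST;
                    }
                    break;
            }
        }

        private int processHeader(ByteBuffer buf) {
            buf.getShort();                                             // version
            int size = buf.getInt();                                    // message size
            buf.getInt();                                               // server identifier
            buf.getInt();                                               // broker identifier
            return size;
        }

        private void forwardToWorkerNode(ByteBuffer body) { /* hand off via shared memory or connection */ }
    }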










In an embodiment, write 712 allows handler 706 to write data to a multiplexed connection. Writing to the multiplexed connection may include writing an entire message or only writing a fragment of the message. If only a fragment of the message is written, then write 712 may be repeatedly called until the entire message has been written.


Connect 714 allows handler 706 to connect to a communication partner on a multiplexed connection. The communication partner may be either an internal communication partner (e.g., worker nodes 340-1 through 340-M, shown in FIG. 3) or an external communication partner (e.g., clients 320 and 330, shown in FIG. 3). Accept 711 allows the connection manager 702 to accept new incoming connections from an external or internal client. Multiplex select thread 704 can either accept or reject the request to connect to a communication partner based, at least in part, on whether sufficient resources exist to form the connection.


Reference number 716 illustrates handler 706 requesting a connection with connect function 714. Similarly, reference number 718 illustrates handler 706 reading data from a multiplexed connection. Other handlers for other protocols (not shown) can access multiplex select thread 704 to perform similar input/output functions as shown by reference number 720.


Turning now to FIG. 8, the particular methods associated with embodiments of the invention are described in terms of computer software and hardware with reference to a flowchart. The methods to be performed by a computing device (e.g., an application server) may constitute state machines or computer programs made up of computer-executable instructions. The computer-executable instructions may be written in a computer programming language or may be embodied in firmware logic. If written in a programming language conforming to a recognized standard, such instructions can be executed on a variety of hardware platforms and for interface to a variety of operating systems. In addition, embodiments of the invention are not described with reference to any particular programming language. It will be appreciated that a variety of programming languages may be used to implement embodiments of the invention as described herein. Furthermore, it is common in the art to speak of software, in one form or another (e.g., program, procedure, process, application, etc.), as taking an action or causing a result. Such expressions are merely a shorthand way of saying that execution of the software by a computing device causes the device to perform an action or produce a result.



FIG. 8 is a flow diagram illustrating selected aspects of a pluggable protocol handler according to an embodiment of the invention. Referring to process block 810, one or more input/output functions of a dynamically loadable communication protocol handler (e.g., handler 706, shown in FIG. 7) are registered with a connection manager (e.g., connection manager 702, shown in FIG. 7). The input/output functions allow the connection manager to process messages between communication partners by calling the dynamically loadable communication protocol handler. In one embodiment, the input/output functions include a read function (e.g., read 710, shown in FIG. 7), a write function (e.g., write 712, shown in FIG. 7), an accept function (e.g., accept 711), and a connect function (e.g., connect 714, shown in FIG. 7). In an alternative embodiment, more functions, fewer functions, and/or different functions may be used.


Referring to process block 820, the dynamically loadable protocol handler receives an indication that a network connection endpoint is available for an input/output function. The received indication may be, for example, a return from a select thread (e.g., multiplex select thread 704, shown in FIG. 7) indicating that a network connection is ready for an input/output function, or may be an indication on the connection oriented shared memory indicating that data is ready for processing.


Referring to process block 830, an input/output function of the dynamically loadable protocol handler is performed. Examples of input/output functions include read functions, write functions, and connect functions. In an embodiment, the fragmentation of network messages is supported. In such an embodiment, read/write functions may read/write either an entire message or only a portion of the message (e.g., a message fragment). The read/write function may be repeatedly called until the entire message is read from the connection or written to the connection. The process can be repeated as shown by 840.


Additional Comments

The architectures and methodologies discussed above may be implemented with various types of computing systems such as an application server that includes a Java 2 Enterprise Edition (“J2EE”) server that supports Enterprise Java Bean (“EJB”) components and EJB containers (at the business layer) and/or Servlets and Java Server Pages (“JSP”) (at the presentation layer). Of course, other embodiments may be implemented in the context of various different software platforms including, by way of example, Microsoft .NET, Windows/NT, Microsoft Transaction Server (MTS), the Advanced Business Application Programming (“ABAP”) platforms developed by SAP AG and comparable platforms.


Processes taught by the discussion above may be performed with program code such as machine-executable instructions which cause a machine (such as a “virtual machine”, a general-purpose processor disposed on a semiconductor chip or special-purpose processor disposed on a semiconductor chip) to perform certain functions. Alternatively, these functions may be performed by specific hardware components that contain hardwired logic for performing the functions, or by any combination of programmed computer components and custom hardware components.


An article of manufacture may be used to store program code. An article of manufacture that stores program code may be embodied as, but is not limited to, one or more memories (e.g., one or more flash memories, random access memories (static, dynamic or other)), optical disks, compact disks-read only memory (CD-ROMs), digital versatile/video disks (DVD ROMs), erasable programmable read-only memory (EPROMs), electrically erasable programmable read-only memory (EEPROMs), magnetic or optical cards or other type of computer-readable media suitable for storing electronic instructions.



FIG. 9 is a block diagram of a computing system 900 that can execute program code stored by an article of manufacture. It is important to recognize that the computing system block diagram of FIG. 9 is just one of various computing system architectures. The applicable article of manufacture may include one or more fixed components (such as a hard disk drive 902 or memory 905) and/or various movable components such as a CD ROM 903, a compact disc, a magnetic tape, etc. In order to execute the program code, typically instructions of the program code are loaded into the Random Access Memory (RAM) 905; and, the processing core 906 then executes the instructions. The processing core may include one or more processors and a memory controller function. A virtual machine or “interpreter” (e.g., a Java Virtual Machine) may run on top of the processing core (architecturally speaking) in order to convert abstract code (e.g., Java bytecode) into instructions that are understandable to the specific processor(s) of the processing core 906.


It is believed that processes taught by the discussion above can be practiced within various software environments such as, for example, object-oriented and non-object-oriented programming environments, Java based environments (such as a Java 2 Enterprise Edition (J2EE) environment or environments defined by other releases of the Java standard), or other environments (e.g., a .NET environment, a Windows/NT environment each provided by Microsoft Corporation).


In the foregoing specification, the invention has been described with reference to specific exemplary embodiments thereof. It will, however, be evident that various modifications and changes may be made thereto without departing from the broader spirit and scope of the invention as set forth in the appended claims. The specification and drawings are, accordingly, to be regarded in an illustrative rather than a restrictive sense.

Claims
  • 1. An application server comprising: a protocol independent connection manager, the connection manager to receive messages in accordance with a plurality of protocols including a first protocol and a second protocol, the connection manager to receive a message from a first client on a first network connection of the connection manager, the connection manager utilizing the first protocol or the second protocol for reception of the received message; a plurality of dynamically loadable communication protocol handlers to handle messages, the plurality of dynamically loadable communication protocol handlers including a first dynamically loadable communication protocol handler to handle messages received via the first protocol and a second dynamically loadable communication protocol handler to handle messages received via the second protocol, wherein in response to receiving the message the connection manager is to choose the first dynamically loadable communication protocol handler or the second dynamically loadable communication protocol handler based on one of the plurality of protocols utilized for reception of the received message; a computer memory, the computer memory including a shared memory region that can be accessed by each of a first plurality of worker nodes associated with the first protocol or a second plurality of worker nodes associated with the second protocol for a transfer of received messages; and a plurality of notification queues each to notify one of the first and second plurality of worker nodes that data is available in the shared memory region, wherein the chosen dynamically loadable communication protocol handler is to select a first worker node of the plurality of worker nodes associated with one of the plurality of protocols of the chosen dynamically loadable communication protocol handler to handle the received message and is to forward the received message to the first worker node via the shared memory, the first worker node to access the received message in the shared memory, wherein the chosen dynamically loadable communication protocol handler is loaded at runtime.
  • 2. The application server of claim 1, wherein the first network connection is a multiplexed network connection, the multiplexed network connection multiplexing messages for a plurality of clients, and wherein the connection manager is to monitor a status of a number of network connection endpoints.
  • 3. The application server of claim 2, wherein the chosen dynamically loadable communication protocol handler implements, at least in part, a standards-based communication protocol.
  • 4. The application server of claim 3, wherein the standards-based communication protocol is the Internet Inter-Object Request Broker Protocol.
  • 5. The application server of claim 2, wherein the chosen dynamically loadable communication protocol handler implements, at least in part, a proprietary communication protocol.
  • 6. The application server of claim 5, wherein the proprietary communication protocol is the proprietary P4 protocol of SAP AG.
  • 7. The application server of claim 2, wherein the connection manager is to receive a response message from the first worker node, and wherein the chosen dynamically loadable communication protocol handler is to select the first client as an appropriate client of the plurality of clients to receive the response.
  • 8. The application server of claim 7, wherein the chosen dynamically loadable communication protocol handler is to forward the response message to the first client, wherein the dynamically loadable communication protocol handler is an application programming interface to process and route messages.
  • 9. The application server of claim 2, wherein the multiplexed network connection is implemented with a connection oriented shared memory, wherein the connection oriented shared memory includes: a first shared memory region to store request and response data transfers for the first protocol; a second shared memory region to store session objects including data fields and procedures for the request and response data transfers; and a third shared memory region to store session state information relating to a flow management of the request and response data transfers.
  • 10. The application server of claim 1, wherein the plurality of notification queues enables the application server to transfer work among the first and second plurality of worker nodes for fault tolerance and load balancing.
  • 11. The application server of claim 10, wherein the application server is to use the plurality of notification queues to implement a fail over protection mechanism by transferring a session from a first worker node to a second worker node on failure of the first worker node.
  • 12. The application server of claim 1, wherein at least one of the plurality of dynamically loadable communication protocol handlers is to register an input/output function with a multiplex select thread for a multiplexed connection to call the input/output function on occurrence of an event.
  • 13. The application server of claim 12, wherein the function implements a multiplex select thread, and wherein the at least one of the number of network connection endpoints is a socket to a worker node of the first and second plurality of worker nodes.
  • 14. The application server of claim 13, wherein the multiplex select thread determines to accept a connection request when resources are available for a connection.
  • 15. The application server of claim 14, wherein at least two of the plurality of dynamically loadable communication protocol handlers have access to the multiplex select thread.
  • 16. The application server of claim 1, wherein each of the plurality of dynamically loadable communication protocol handlers is registered with the protocol independent connection manager.
  • 17. A method comprising: receiving a message at a connection manager of an application server from a first client, the connection manager being independent of message protocols, the message being received in accordance with a first message protocol; dynamically loading a first dynamically loadable communication protocol handler in the connection manager, the first communication protocol handler and a second communication protocol handler being included in a plurality of communication protocol handlers, the first communication protocol handler being compatible with the first message protocol and the second communication protocol handler being compatible with a second message protocol; receiving at the first communication protocol handler an indication that a network connection endpoint is available for an input/output function for the received message; performing the input/output function of the first communication protocol handler responsive, at least in part, to receiving the indication that the network connection endpoint is available, wherein performing the input/output function includes selecting a first worker node of a first plurality of worker nodes associated with the first message protocol to handle the received message and forwarding the received message to the first worker node via a shared memory, the shared memory being accessible to the first plurality of worker nodes and to a second plurality of worker nodes associated with the second message protocol; and generating notifications at a plurality of notification queues, wherein each notification is to notify one of the first and second plurality of worker nodes that data is available in a shared memory region, wherein dynamically loading a first dynamically loadable communication protocol handler in the connection manager occurs at runtime.
  • 18. The method of claim 17, further comprising: registering one or more input/output functions of the first communication protocol handler with the connection manager, the one or more input/output functions to allow the connection manager to call the first communication protocol handler.
  • 19. The method of claim 17, wherein the network connection endpoint is a multiplexed network connection endpoint, and wherein the method further comprises monitoring a status of a number of network connection endpoints.
  • 20. The method of claim 17, wherein receiving at the first communication protocol handler the indication that the network connection endpoint is available for an input/output function comprises: receiving a handle from a select thread indicating that the network connection endpoint is available for an input/output function, and monitoring a status of a number of network connection endpoints using a multiplex of select threads.
  • 21. The method of claim 20, wherein receiving the handle includes receiving a handle for a read function and further comprising: reading at least part of a message from the network connection endpoint.
  • 22. The method of claim 21, wherein reading at least part of the message from the network connection comprises: reading at least a part of a message header associated with the message; and determining the first worker node as a destination for the message based, at least in part, on the message header.
  • 23. The method of claim 22, wherein the message header is based, at least in part, on one of: a standards-based communication protocol; and a proprietary communication protocol.
  • 24. The method of claim 23, wherein the standards-based communication protocol is the Internet Inter-Object Request Broker Protocol (IIOP).
  • 25. The method of claim 20, further comprising: receiving a request from a worker node of the first and second plurality of worker nodes; and accessing one of the plurality of notification queues associated with the worker node.
  • 26. The method of claim 20, wherein monitoring the status further comprises accessing a function to determine the status of at least one of the number of network connection endpoints.
  • 27. The method of claim 17, further comprising: receiving a response message from the first worker node; performing one or more input/output functions of the first communication protocol handler, the functions including selecting the first client as a destination for the response message and forwarding the response message to the first client.
  • 28. An article of manufacture comprising a non-transitory computer readable medium providing instructions that, when executed by a processor, cause the processor to: receive a message at a connection manager of an application server from a first client, the connection manager being independent of message protocols, the message being received in accordance with a first message protocol; dynamically load a first dynamically loadable communication protocol handler in the connection manager, the first communication protocol handler and a second communication protocol handler being included in a plurality of communication protocol handlers, the first communication protocol handler being compatible with the first message protocol and the second communication protocol handler being compatible with a second message protocol; receive at the first communication protocol handler an indication that a network connection endpoint is available for an input/output function for the received message; perform the input/output function of the first communication protocol handler responsive, at least in part, to receiving the indication that the network connection endpoint is available, wherein performing the input/output function includes selecting a first worker node of a first plurality of worker nodes associated with the first message protocol to handle the received message and forwarding the received message to the first worker node via a shared memory, the shared memory being accessible to the first plurality of worker nodes and to a second plurality of worker nodes associated with the second message protocol; and generate notifications at a plurality of notification queues, wherein each notification is to notify one of the first and second plurality of worker nodes that data is available in a shared memory region, wherein dynamically loading a first dynamically loadable communication protocol handler in the connection manager occurs at runtime.
  • 29. The article of manufacture of claim 28, wherein the computer readable medium provides further instructions that, when executed by the processor, cause the processor to: register one or more input/output functions of the first communication protocol handler with the connection manager, the one or more input/output functions to allow the connection manager to call the first communication protocol handler.
  • 30. The article of manufacture of claim 28, wherein the instructions that, when executed by the processor, cause the processor to receive at the first communication protocol handler the indication that the network connection endpoint is available for an input/output function cause the processor to: receive a handle from a select thread indicating that the network connection endpoint is available for an input/output function.
  • 31. The article of manufacture of claim 28, wherein the computer readable medium provides further instructions that, when executed by the processor, cause the processor to: receive a response message from the first worker node; perform one or more input/output functions of the first communication protocol handler, the functions including selecting the first client as a destination for the response message and forwarding the response message to the first client.
Related Publications (1)
Number Date Country
20070067469 A1 Mar 2007 US