A portion of the disclosure of this patent document contains material which is subject to copyright protection. The copyright owner has no objection to the facsimile reproduction by anyone of the patent document or the patent disclosure, as it appears in the Patent and Trademark Office patent file or records, but otherwise reserves all copyright rights whatsoever.
The present invention is generally related to computer systems and software such as middleware, and is particularly related to transactional middleware.
A transactional middleware system, or transaction-oriented middleware, includes enterprise application servers that can process various transactions within an organization. With the development of new technologies such as high-performance networks and multiprocessor computers, there is a need to further improve the performance of transactional middleware. These are generally the areas that embodiments of the invention are intended to address.
Systems and methods are provided for supporting intra-node communication based on a shared memory queue. A transactional middleware machine can provide a complex structure with a plurality of blocks in the shared memory, wherein the shared memory is associated with one or more communication peers, and wherein the communication peers include a sender and a receiver of a message that includes the complex structure. Furthermore, the sender can link a head block of the complex structure to a shared memory queue associated with the receiver, wherein the head block is selected from the plurality of blocks in the complex structure. Then, the receiver can access the complex structure based on the head block of the complex structure.
Other objects and advantages of the present invention will become apparent to those skilled in the art from the following detailed description of the various embodiments, when read in light of the accompanying drawings.
The invention is illustrated, by way of example and not by way of limitation, in the figures of the accompanying drawings in which like references indicate similar elements. It should be noted that references to “an” or “one” or “some” embodiment(s) in this disclosure are not necessarily to the same embodiment, and such references mean at least one.
The following description of the invention uses the Tuxedo environment as an example of a transactional middleware machine environment. It will be apparent to those skilled in the art that other types of transactional middleware machine environments can be used without limitation.
Described herein are systems and methods that can support intra-node communication based on a shared memory.
Inter-Process Communication Message Queue (IPCQ)
For example, the application client 101 can be associated with a reply (RP) IPCQ 103, while the application server 102 can be associated with a request (RQ) IPCQ 104. In Tuxedo, the inter-process communication message queue (IPCQ) can be either a System V IPC message queue or a remote direct memory access (RDMA) message queue (MSGQ).
Furthermore, in order to transmit a single message, the inter-process communication message queue (IPCQ) may need to use at least two copies of the single message: a first copy of the message from the sender's local memory into the message queue, and a second copy of the message from the message queue into the receiver's local memory.
Similarly, a message round trip between the application client 101 and the application server 102 may involve at least four copies of the message: two copies to transmit the request message to the application server 102 via the request (RQ) IPCQ 104, and two more copies to transmit the reply message back to the application client 101 via the reply (RP) IPCQ 103.
Thus, the performance of intra-node messaging of the system may be restricted, both in terms of resource usage and in terms of message processing time, due to the need for handling multiple copies of the same message, especially when the message involves large message buffers.
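The copy overhead can be illustrated with a conventional System V message queue, one of the queue types mentioned above. The following is a minimal sketch of such a transfer (it is not Tuxedo's actual IPCQ code, and the structure and payload are only examples); the comments mark where the two copies of a single message occur.

```c
/*
 * Minimal sketch (not Tuxedo's actual IPCQ code): a conventional System V
 * message queue transfer.  msgsnd() copies the message from the sender's
 * local buffer into the kernel queue, and msgrcv() copies it back out into
 * the receiver's local buffer, so one transfer already costs two copies.
 */
#include <stdio.h>
#include <string.h>
#include <sys/ipc.h>
#include <sys/msg.h>

struct reqmsg {
    long mtype;          /* required message type field    */
    char mtext[256];     /* payload is copied in and out   */
};

int main(void)
{
    int qid = msgget(IPC_PRIVATE, IPC_CREAT | 0600);
    struct reqmsg out = { 1, "request payload" };
    struct reqmsg in;

    /* copy #1: sender's local memory -> kernel message queue */
    msgsnd(qid, &out, sizeof(out.mtext), 0);

    /* copy #2: kernel message queue -> receiver's local memory */
    msgrcv(qid, &in, sizeof(in.mtext), 1, 0);

    printf("received: %s\n", in.mtext);
    msgctl(qid, IPC_RMID, NULL);   /* remove the queue */
    return 0;
}
```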
Shared Memory Queue (SHMQ)
In accordance with an embodiment of the invention, a shared memory queue (SHMQ) can be used for local messaging in a transactional middleware machine environment, e.g. enhancing native tpcall performance in Tuxedo.
The transactional middleware machine 210 can include communication peers, such as an application client 201 and an application server 202, each of which can use a shared memory queue (SHMQ). For example, the application client 201 can be associated with a reply (RP) SHMQ 203, and the application server 202 can be associated with a request (RQ) SHMQ 204. Both the reply (RP) SHMQ 203 and the request (RQ) SHMQ 204 can reside in the shared memory 220, to which the communicating peers 201 and 202 are both attached.
A message can be allocated in the shared memory 220 using a message buffer 205. Furthermore, the sending of the message can be implemented by linking the message buffer 205 to a shared memory queue (SHMQ), e.g. the request (RQ) SHMQ 204, and the receiving of the message can be implemented by delinking the message buffer 205 from the shared memory queue (SHMQ), e.g. the request (RQ) SHMQ 204. Thus, the transmission of the message between the communicating peers 201-202 can require no physical copy.
After receiving the message in the message buffer 205, the application server 202 can modify it and send it to the application client 201 by linking the message buffer 205 to the reply (RP) SHMQ 203. Again, the receiving of the message can be implemented by delinking the message buffer 205 from the RP SHMQ 203. Thus, the transmission requires no physical copy of the message.
As shown in
Thus, using the shared memory queue (SHMQ), a message round trip between the communicating peers, e.g. the application client 201 and the application server 202, can involve zero copy of the message.
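The round trip described above can be outlined in code. The following is a flow-level sketch only: the helper primitives (shmmsg_alloc, shmq_link, shmq_delink, shmq_notify) are assumed for illustration and are not actual Tuxedo APIs; synchronization and error handling are omitted.

```c
/*
 * Flow-level sketch of the zero-copy round trip described above.  The
 * helper functions below are assumed primitives used for illustration,
 * not actual Tuxedo APIs.
 */
#include <stddef.h>
#include <string.h>

typedef struct shmmsg shmmsg_t;   /* message resident in shared memory */
typedef struct shmq   shmq_t;     /* shared memory queue head          */

shmmsg_t *shmmsg_alloc(size_t size);            /* allocate in shared memory */
void      shmq_link(shmq_t *q, shmmsg_t *m);    /* "send": link to queue     */
shmmsg_t *shmq_delink(shmq_t *q);               /* "receive": delink         */
void      shmq_notify(shmq_t *q);               /* wake the peer (via IPCQ)  */
char     *shmmsg_body(shmmsg_t *m);             /* body located in shm       */

/* client side: build the request in place and link it to the server's RQ */
void client_call(shmq_t *server_rq)
{
    shmmsg_t *msg = shmmsg_alloc(1024);
    strcpy(shmmsg_body(msg), "request");   /* written directly in shm */
    shmq_link(server_rq, msg);             /* send: link, no copy     */
    shmq_notify(server_rq);
}

/* server side: delink the request, modify it in place, link the reply back */
void server_step(shmq_t *server_rq, shmq_t *client_rp)
{
    shmmsg_t *msg = shmq_delink(server_rq);   /* receive: delink, no copy */
    strcpy(shmmsg_body(msg), "reply");        /* modified in place        */
    shmq_link(client_rp, msg);                /* reply: link, no copy     */
    shmq_notify(client_rp);
}
```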
In accordance with an embodiment of the invention, each SHMQ can be bound to an inter-process communication message queue (IPCQ). The IPCQ can accept both shared memory messages (SHMMSGs) and local memory messages, while the SHMQ may only accept shared memory messages (SHMMSGs).
Using the shared memory queue (SHMQ) feature, all message buffers can be centralized in the shared memory 220 instead of in the local memory of each process. In order to ensure the stability and high performance of the system, the shared memory queue (SHMQ) can recycle message buffers from dead (or terminated) applications, and can fail over to local memory buffers when the SHMMSG buffers are exhausted.
Furthermore, the IPCQ can work as a notification channel for the SHMQ. For example, a sender can use the IPCQ to send a short message between the communication peers for coordinating the transmission of the shared memory messages (SHMMSGs). Additionally, the IPCQ can act as a backup queue when the SHMQ fails. Also, by binding an IPCQ to a SHMQ, other features such as the message queue (MSGQ), multi-threaded servers and restartable servers, which may be available for the IPCQ, can easily be applied to the SHMQ.
In the example of Tuxedo, each Tuxedo server can have a request (RQ) SHMQ, and each Tuxedo client can have at least one reply (RP) SHMQ. These shared memory queues (SHMQs) can be assigned to the Tuxedo application after the bulletin board (BB) is attached.
The SHMQ section 302 can store an array of shared memory queue (SHMQ) head structures, e.g.
The SHMMSG section 303 includes one or more message lists, e.g.
Additionally, the SHMQHASH section 304 contains an array of indices that can be used for quickly finding a shared memory queue (SHMQ) in the shared memory 301. Each index can be used as a key in a hash table for finding the address of a queue head structure for a shared memory queue (SHMQ) in the shared memory 301.
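A rough C rendering of this layout is given below. It is only a sketch: the section names follow the description above, but the structure names, fields, and counts are illustrative assumptions rather than the actual Tuxedo layout.

```c
/*
 * Rough sketch of the layout described above; the names, fields and
 * counts are illustrative assumptions rather than Tuxedo's actual layout.
 */
#include <stddef.h>

#define NQUEUES   128            /* assumed number of queue head slots   */
#define NLISTS    4              /* assumed number of message lists      */
#define HASHSIZE  257            /* assumed hash table size              */

typedef struct shmq_head {
    size_t head_off;             /* offset of first (dummy) message      */
    size_t tail_off;             /* offset of last message               */
    int    in_use;               /* slot assigned to an application?     */
} shmq_head_t;

typedef struct shmmsg_list {
    size_t body_size;            /* fixed body size for this list        */
    int    msg_count;            /* number of pre-allocated messages     */
    size_t first_msg_off;        /* offset of the first message slot     */
} shmmsg_list_t;

typedef struct shm_layout {
    shmq_head_t   shmq_section[NQUEUES];      /* SHMQ section            */
    shmmsg_list_t shmmsg_section[NLISTS];     /* SHMMSG section          */
    int           shmqhash_section[HASHSIZE]; /* SHMQHASH: queue indices */
    /* the pre-allocated message slots themselves follow this header     */
} shm_layout_t;
```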
The
Additionally, each message within the message list, e.g. SHMMSGLIST 401, can include a unified message header and a message body with a fixed size. Furthermore, there can be guard pages wrapping each message body in order to prevent accidental write accesses from corrupting the entire shared memory. For example,
Each shared memory queue (SHMQ) 501-502 can include a queue head structure in the
For example, the shared memory queue (SHMQ) 501 can include a queue head structure,
Additionally, the shared memory queue (SHMQ), e.g. 501 or 502, can include a linked-list of shared memory messages (SHMMSGs). Each message head holds at least two links: a self link that points back to the message itself, and a next link that points to the next message in the shared memory queue (SHMQ) 501.
For example, the
Furthermore, the first message in the shared memory queue (SHMQ) 501 can be a dummy (or blank) message, and the tail link can point to the dummy message when the shared memory queue (SHMQ) 501 is empty. This dummy (or blank) head can make queuing logic simple, since adding a new message to the shared memory queue (SHMQ) 501 can be implemented by linking the new message to the next link in the message head of the last message, to which the tail link points, regardless of whether the shared memory queue (SHMQ) 501 is empty or not.
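The effect of the dummy head on the queuing logic can be sketched as follows. The field and function names are illustrative assumptions, and locking is omitted; the point is that the tail link always refers to a valid message (the dummy head when the queue is empty), so enqueueing takes the same code path whether or not the queue is empty.

```c
/*
 * Sketch of the dummy-head queuing logic described above (field and
 * function names are illustrative assumptions; synchronization omitted).
 */
#include <stddef.h>

typedef struct shmmsg_head {
    size_t self_off;     /* offset of this message itself         */
    size_t next_off;     /* offset of the next message, 0 if end  */
} shmmsg_head_t;

typedef struct shmq_head {
    size_t dummy_off;    /* offset of the permanent dummy message */
    size_t tail_off;     /* offset of the last message            */
} shmq_head_t;

#define MSG(base, off) ((shmmsg_head_t *)((char *)(base) + (off)))

/* enqueue: identical code path whether or not the queue is empty */
static void shmq_enqueue(void *base, shmq_head_t *q, size_t new_off)
{
    MSG(base, new_off)->next_off = 0;
    MSG(base, q->tail_off)->next_off = new_off;  /* dummy or last message */
    q->tail_off = new_off;
}

/* dequeue: take the message after the dummy head, if any */
static size_t shmq_dequeue(void *base, shmq_head_t *q)
{
    size_t first = MSG(base, q->dummy_off)->next_off;
    if (first == 0)
        return 0;                                /* queue is empty */
    MSG(base, q->dummy_off)->next_off = MSG(base, first)->next_off;
    if (q->tail_off == first)
        q->tail_off = q->dummy_off;              /* became empty again */
    return first;
}
```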
Each shared memory message (SHMMSG) 503-506 can be a pre-allocated buffer in the shared memory 500. Each process attached to the shared memory may map it at a different address in its own address space. Thus, pointers to the shared memory messages (SHMMSGs) 503-506 may not be used directly in an inter-process situation.
In order to access a shared memory message (SHMMSG), a process can hold an array of pointers to the various shared memory message lists. Each shared memory message (SHMMSG) can be addressed using a shared memory message (SHMMSG) list address and an index within the shared memory message (SHMMSG) list. Thus, an address for a shared memory message (SHMMSG) can include two levels of indices: one identifying a shared memory message list and another identifying the shared memory message (SHMMSG) within that list.
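This two-level addressing can be sketched as follows; the names and layout are illustrative assumptions. Each process resolves a (list index, message index) pair against its own table of message-list base pointers, so no raw cross-process pointers are needed.

```c
/*
 * Sketch of the two-level addressing described above; names and layout
 * are illustrative assumptions.
 */
#include <stddef.h>

typedef struct shmmsg_addr {
    int list_idx;        /* which message list                      */
    int msg_idx;         /* which slot within that list             */
} shmmsg_addr_t;

typedef struct shmmsg_list_view {
    char  *base;         /* this process's mapping of the list      */
    size_t slot_size;    /* header + fixed body size of one message */
} shmmsg_list_view_t;

/* per-process table of message list base pointers, filled at attach time */
static shmmsg_list_view_t msg_lists[4];

static void *shmmsg_resolve(shmmsg_addr_t addr)
{
    shmmsg_list_view_t *lst = &msg_lists[addr.list_idx];
    return lst->base + (size_t)addr.msg_idx * lst->slot_size;
}
```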
In the example of Tuxedo, during system initialization, a number of message lists can be created in a local shared memory (e.g. the local BB), with each message list specified with a particular buffer size and message count. Additionally, a number of SHMQ heads can be created in the local BB according to the user configuration. Tuxedo applications can allocate message buffers by calling tpalloc( )/tprealloc( ) after being attached to the local BB. Additionally, the tpalloc( )/tprealloc( ) calls can return buffers allocated in local memory before the BB is attached or when the shared memory buffer resources are exhausted.
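From the application's point of view, the allocation calls look the same regardless of whether the buffer ends up in shared or local memory. The sketch below uses the standard Tuxedo ATMI calls named above (tpalloc, tpcall, tpfree); the buffer sizes and the service name "TOUPPER" are only examples.

```c
/*
 * Sketch of application-level buffer allocation with the Tuxedo ATMI
 * calls mentioned above.  Whether the returned buffer lives in shared
 * memory or falls back to local process memory is transparent to the
 * caller; the service name "TOUPPER" is only an example.
 */
#include <stdio.h>
#include <string.h>
#include <atmi.h>

int call_service(void)
{
    long olen;
    char *sendbuf = tpalloc("STRING", NULL, 1024);  /* may come from shm */
    char *recvbuf = tpalloc("STRING", NULL, 1024);

    if (sendbuf == NULL || recvbuf == NULL) {
        fprintf(stderr, "tpalloc failed: %s\n", tpstrerror(tperrno));
        return -1;
    }

    strcpy(sendbuf, "hello");
    if (tpcall("TOUPPER", sendbuf, 0, &recvbuf, &olen, 0) == -1) {
        fprintf(stderr, "tpcall failed: %s\n", tpstrerror(tperrno));
        tpfree(sendbuf);
        tpfree(recvbuf);
        return -1;
    }

    printf("reply: %s\n", recvbuf);
    tpfree(sendbuf);
    tpfree(recvbuf);
    return 0;
}
```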
Transmitting Complex Structures Using Traditional Message Queues
Traditional message queues (e.g. System V IPC queues) can only support the transmission of compact data blocks. Complex structures, such as linked lists, cannot be transmitted directly as a message, but must first be converted into a contiguous block. Furthermore, using the traditional message queues, the pointers in a sender's address space may not be valid in the receiver's address space, which also requires copying the pointed-to blocks into the compact block.
A complex structure 705, e.g. a Tuxedo message header, can include a plurality of blocks, e.g. blocks 711-713 in the memory 703. When used within a single process, the complex structure 705 can be kept in an edit mode for efficient access and editing. For example, the Tuxedo message header in edit mode can be a linked list of distributed blocks. On the other hand, when communicated between processes, the complex structure 705 can be compacted into contiguous blocks 721-723, which can be referred to as a storage mode 706.
Furthermore, in order to transmit a complex structure 705 from the application client 701 to the application server 702, the system can perform the following steps:
Thus, transmitting a complex structure 705, such as a Tuxedo message header, may require substantial data copying, which can add considerable messaging latency.
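The flattening step and its copies can be sketched with a conventional System V queue. The structures below are illustrative only (they are not Tuxedo's message header format); each linked block is first copied into one compact block, which msgsnd() then copies again into the kernel queue.

```c
/*
 * Sketch (illustrative structures, not Tuxedo's) of why a linked structure
 * must be flattened before it can travel over a traditional System V queue:
 * the queue only transports one contiguous block, and pointers from the
 * sender's address space are meaningless to the receiver.
 */
#include <string.h>
#include <sys/msg.h>

struct node {                      /* "edit mode": linked blocks        */
    struct node *next;
    char         data[64];
};

struct flatmsg {                   /* "storage mode": one compact block */
    long mtype;
    int  count;
    char data[16][64];             /* copies of up to 16 node payloads  */
};

/* copy every linked block into one contiguous message, then send it */
int send_list(int qid, const struct node *head)
{
    struct flatmsg m = { .mtype = 1, .count = 0 };

    for (const struct node *n = head; n != NULL && m.count < 16; n = n->next) {
        memcpy(m.data[m.count], n->data, sizeof(n->data));   /* copy #1 */
        m.count++;
    }
    /* copy #2: msgsnd() copies the compact block into the kernel queue */
    return msgsnd(qid, &m, sizeof(m) - sizeof(long), 0);
}
```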
Transmitting Complex Structures Using Shared Memory Queues (SHMQs)
In accordance with an embodiment of the invention, shared memory queues (SHMQs) can be used to transmit a complex structure in a transactional middleware environment. A complex structure, such as a linked list, can be treated as a single message, as long as all blocks in the complex structure reside in the shared memory segment attached by the communication peers.
Furthermore, each of the application client 801 and the application server 802 can have its own address space 831 or 832 for accessing the shared memory 820. The application client 801 can use a shared memory queue, e.g. SHMQ 811, and the application server 802 can use a shared memory queue, e.g. SHMQ 812, in order to support intra-node messaging.
As shown in
Initially, the complex structure 821 can be addressed by a sender, e.g. the application client 801, in its address space 831. In order to send the complex structure 821 as a single message, the sender, e.g. the application client 801, can hook the head block 822 to the SHMQ 812 of the receiver, e.g. the application server 802.
For example, the application client 801 can send a notification message to the application server 802, which informs the application server 802 that a shared memory message has been placed on the shared memory queue 812. The notification message can be implemented using the inter-process communication (IPC) queues. Additionally, the address of the head block 822 in the shared memory 820 can be sent via a short message using the IPC queue.
After receiving the link to the head block 822 on the shared memory queue 812, the application server 802 can automatically adjust the pointers in the complex structure 821 to its own address space 832, based on the fact that a shared memory pointer has the same offset from the beginning of the shared memory for all processes attached to the shared memory segment. Thus, the application server 802 can efficiently access and write to the complex structure 821 without needing to change the internal structure of the complex structure 821.
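This offset-based adjustment can be sketched as follows; the structure and helper names are illustrative assumptions. Blocks of the complex structure reference one another by their offsets from the beginning of the shared memory, so once the receiver learns the head block's offset from the short IPC notification, it can resolve every block against its own mapping.

```c
/*
 * Sketch of the offset-based adjustment described above; structures and
 * helper names are illustrative assumptions.
 */
#include <stddef.h>

typedef struct hdr_block {
    size_t next_block_off;     /* offset of the next block, 0 if none */
    char   payload[64];
} hdr_block_t;

/* translate a shared memory offset into a pointer valid for this process */
static hdr_block_t *shm_ptr(void *my_shm_base, size_t off)
{
    return off ? (hdr_block_t *)((char *)my_shm_base + off) : NULL;
}

/* receiver side: walk the complex structure starting from the head block
 * offset that arrived in the short IPC notification message              */
static int count_blocks(void *my_shm_base, size_t head_off)
{
    int n = 0;
    for (hdr_block_t *b = shm_ptr(my_shm_base, head_off);
         b != NULL;
         b = shm_ptr(my_shm_base, b->next_block_off))
        n++;
    return n;
}
```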
Then, the application server 802 may proceed to process the message, modify it, and send a reply message to the application client 801, either reusing the same complex structure 821 or using another complex structure. In either case, the application server 802 can link the return message or complex structure to the shared memory queue 811 on the application client 801 side.
Thus, using the SHMQ, a complex structure 821, such as a Tuxedo message header in edit mode, can be communicated directly between processes with no transformation needed, saving the considerable cost of copy actions.
The present invention may be conveniently implemented using one or more conventional general purpose or specialized digital computer, computing device, machine, or microprocessor, including one or more processors, memory and/or computer readable storage media programmed according to the teachings of the present disclosure. Appropriate software coding can readily be prepared by skilled programmers based on the teachings of the present disclosure, as will be apparent to those skilled in the software art.
In some embodiments, the present invention includes a computer program product which is a storage medium or computer readable medium (media) having instructions stored thereon/in which can be used to program a computer to perform any of the processes of the present invention. The storage medium can include, but is not limited to, any type of disk including floppy disks, optical discs, DVD, CD-ROMs, microdrive, and magneto-optical disks, ROMs, RAMs, EPROMs, EEPROMs, DRAMs, VRAMs, flash memory devices, magnetic or optical cards, nanosystems (including molecular memory ICs), or any type of media or device suitable for storing instructions and/or data.
The foregoing description of the present invention has been provided for the purposes of illustration and description. It is not intended to be exhaustive or to limit the invention to the precise forms disclosed. Many modifications and variations will be apparent to the practitioner skilled in the art. The embodiments were chosen and described in order to best explain the principles of the invention and its practical application, thereby enabling others skilled in the art to understand the invention for various embodiments and with various modifications that are suited to the particular use contemplated. It is intended that the scope of the invention be defined by the following claims and their equivalents.
This application claims priority to U.S. Provisional Patent Application No. 61/612,144, entitled “SYSTEM AND METHOD FOR PROVIDING DISTRIBUTED TRANSACTION PROCESSOR DATABASE AFFINITY AND DISTRIBUTED TRANSACTION PROCESS OPTIMIZATION,” by Little, et al., filed Mar. 16, 2012, which application is herein incorporated by reference. This application is related to the following patent applications, each of which is hereby incorporated by reference in its entirety: U.S. patent application Ser. No. 13/804,414, entitled “SYSTEM AND METHOD FOR SUPPORTING BUFFER ALLOCATION IN A SHARED MEMORY QUEUE”, by Lv, et al., filed Mar. 14, 2013; and U.S. patent application Ser. No. 13/804,687, entitled “SYSTEM AND METHOD FOR SUPPORTING INTRA-NODE COMMUNICATION BASED ON A SHARED MEMORY QUEUE”, by Lv, et al., filed Mar. 14, 2013.