A portion of the disclosure of this patent document contains material which is subject to copyright protection. The copyright owner has no objection to the facsimile reproduction by anyone of the patent document or the patent disclosure, as it appears in the Patent and Trademark Office patent file or records, but otherwise reserves all copyright rights whatsoever.
This application is related to the following patent applications, each of which is hereby incorporated by reference in its entirety:
U.S. Patent Application entitled “SYSTEM AND METHOD FOR SUPPORTING COOPERATIVE CONCURRENCY IN A MIDDLEWARE MACHINE ENVIRONMENT”, application Ser. No. 13/781,475, filed Feb. 28, 2013, by inventor Oleksandr Otenko;
U.S. Patent Application entitled “SYSTEM AND METHOD FOR USING A SEQUENCER IN A CONCURRENT PRIORITY QUEUE”, application Ser. No. 13/781,493, filed Feb. 28, 2013, by inventor Oleksandr Otenko.
The present invention is generally related to computer systems and software such as middleware, and is particularly related to systems and methods for supporting queues in a middleware machine environment.
Within any large organization, over the span of many years, the organization often finds itself with a sprawling IT infrastructure that encompasses a variety of different computer hardware, operating systems, and application software. Although each individual component of such infrastructure might itself be well-engineered and well-maintained, attempts to interconnect such components, or to share common resources, are often difficult administrative tasks. In recent years, organizations have turned their attention to technologies such as virtualization and centralized storage, and even more recently cloud computing, which can provide the basis for a shared infrastructure. However, there are few all-in-one platforms that are particularly suited for use in such environments. These are the general areas that embodiments of the invention are intended to address.
Systems and methods are provided for using continuation-passing to transform a queue from non-blocking to blocking. The non-blocking queue can maintain one or more idle workers in a thread pool that is not accessible from outside of the non-blocking queue. The continuation-passing can eliminate one or more serialization points in the non-blocking queue, and allows a caller to manage the one or more idle workers in the thread pool from outside of the non-blocking queue.
Other objects and advantages of the present invention will become apparent to those skilled in the art from the following detailed description of the various embodiments, when read in light of the accompanying drawings.
Described herein are systems and methods that can use continuation passing to transform a queue from non-blocking to blocking in a middleware machine environment.
Priority Queue
In accordance with various embodiments of the invention, a concurrent system can use a priority queue to prioritize incoming requests in order to provide service with an appropriate service level agreement (SLA).
The priority queue 301 can be designed to meet demanding concurrency criteria, so that the interaction between the contenders does not cause degradation in the throughput of the system as a whole. Additionally, the priority queue 301 can be implemented to have a fixed memory footprint, so that the JVM is able to better optimize its operations on fixed-size arrays of primitives, and can achieve substantial cache efficiency.
In accordance with various embodiments of the invention, the priority queue 301 can be implemented based on a calendar queue, e.g. the calendar queue provided in the WebLogic Application Server. The calendar queue can include a calendar with multiple buckets, each of which can store events that fall within a particular slice of time. For example, the multiple buckets can be sorted and arranged by comparing the target service time with a current time. If the difference in time is in the first byte, then the request can be stored in a bucket in the first 256 buckets. The specific bucket can be chosen using the actual value of the target time for executing the request. Furthermore, if the difference in time is in the second byte, then the request can be stored in a bucket in the second 256 buckets.
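The byte-based bucket selection described above can be sketched as follows. The class and method names are hypothetical, and the multi-level layout is one reading of the description rather than the actual WebLogic implementation:

```java
// Hypothetical sketch of byte-based bucket selection in a calendar queue.
public class CalendarBuckets {
    static final int BUCKETS_PER_LEVEL = 256;

    // Returns a bucket index for a request, assuming the layout described:
    // level 0 covers time differences that fit in the first byte, level 1
    // differences that fit in the second byte, and so on.
    static int bucketIndex(long targetTime, long currentTime) {
        long delta = targetTime - currentTime;
        if (delta < 0) delta = 0;                       // overdue requests stay at the first level
        int level = 0;
        while (delta >= BUCKETS_PER_LEVEL && level < 7) {
            delta >>= 8;                                // move to the next "byte" of the difference
            level++;
        }
        // Within a level, the slot is chosen from the corresponding byte
        // of the actual target time for executing the request.
        int slot = (int) ((targetTime >> (8 * level)) & 0xFF);
        return level * BUCKETS_PER_LEVEL + slot;
    }
}
```

For example, a request whose target time differs from the current time by less than 256 time units lands in one of the first 256 buckets, while a difference that fits only in the second byte lands in the second 256 buckets.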
When a consumer, e.g. via one of the worker threads A-C 321-323, tries to remove the next request that is configured to be executed the earliest, the system can scan the calendar for the first bucket that is not empty. If this bucket is not one of the first 256 buckets, then the calendar queue can use a loop and a promote method to move the requests to the buckets “one level down” toward the first 256 buckets. Eventually, some requests can be promoted to one or more buckets in the first 256 buckets, and the consumer can claim a request and proceed accordingly.
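The scan-and-promote behaviour can be illustrated with the following sketch, which for brevity models only two levels of buckets and uses our own naming rather than the patent's:

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Illustrative two-level sketch: delete_min scans for the first non-empty
// bucket and, if it lies beyond the first 256 buckets, promotes its
// requests one level down before rescanning.
public class PromoteSketch {
    static final int LEVEL = 256;
    @SuppressWarnings("unchecked")
    final Deque<Long>[] buckets = new Deque[LEVEL * 2];   // two levels only, for brevity

    PromoteSketch() {
        for (int i = 0; i < buckets.length; i++) buckets[i] = new ArrayDeque<>();
    }

    // Accepts target times below 65536 (two bytes) in this simplified sketch.
    void add(long targetTime) {
        int i = targetTime < LEVEL ? (int) targetTime
                                   : LEVEL + (int) ((targetTime >> 8) & 0xFF);
        buckets[i].add(targetTime);
    }

    Long deleteMin() {
        for (int i = 0; i < buckets.length; i++) {
            if (buckets[i].isEmpty()) continue;
            if (i < LEVEL) return buckets[i].poll();      // first level: claim the request
            // Promote: the first level must be empty here, and all entries in
            // this bucket share the same second byte, so redistributing them
            // by their low byte preserves the ordering.
            for (Long t : buckets[i]) buckets[(int) (t & 0xFF)].add(t);
            buckets[i].clear();
            i = -1;                                       // rescan from the start
        }
        return null;
    }
}
```

The rescan after each promotion is what gives the process its logarithmic flavour, as noted below.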
The above promotion process can involve logarithmic cost, which may have an impact on the overall performance of the system. Additionally, there can be other designs for the calendar queue, the performance of which may be limited to a choice between “O(1) add, O(log N) delete_min,” and “O(log N) add, O(1) delete_min.”
The request manager 402, which manages a thread pool 403, can have separate logic for associating different threads with different requests. For example, the request manager 402 can serialize all thread pool method calls by wrapping the calls to the priority queue 401 in a synchronized statement, or a synchronized block 410, using a lock mechanism.
Thus, the operations on the priority queue 401 may be limited by the single-threaded design since the serialization is done outside the non-blocking priority queue 401.
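The serialization described above — wrapping every call to the queue in a synchronized block outside the queue itself — can be sketched as follows; the class and method names are illustrative, not taken from any actual product:

```java
import java.util.PriorityQueue;
import java.util.Queue;

// Sketch of a request manager that serializes all access to a priority
// queue with an external lock, so the queue never sees concurrent calls.
public class SerializedRequestManager<T> {
    private final Queue<T> queue = new PriorityQueue<>();
    private final Object lock = new Object();

    void begin(T request) {
        synchronized (lock) {          // serialization point: one caller at a time
            queue.add(request);
        }
    }

    T claim() {
        synchronized (lock) {          // even concurrent consumers queue up here
            return queue.poll();
        }
    }
}
```

Because every caller funnels through the same lock, throughput is effectively single-threaded regardless of how the queue itself is implemented, which is the limitation the continuation-passing design below removes.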
Concurrent Priority Queue
The concurrent priority queue 501 can include a calendar, e.g. a calendar ring 502, which is capable of prioritizing and storing incoming requests. The calendar ring 502, the size of which is limited, can be configured to store requests that have a target response time within a preconfigured time limit. Within the calendar ring 502, a request can be stored, or placed, directly in the ring buffer at a position that matches Quality of Service (QoS) of the request, e.g. the target service time.
Thus, the system can achieve a much cheaper lookup for requests without changing the memory footprint of a calendar queue. Furthermore, the system can reduce the logarithmic complexity of the delete_min operation of the calendar queue to mostly a linear cache efficient search, while keeping the adding of elements to the calendar queue as O(1) operations.
Additionally, a request with a target service time higher than the preconfigured time limit can be added to a list of outliers, e.g. the outlier list 504. Since the scheduling of these requests may not be time critical, the system permits the slower addition to a sorted list of outliers 504. Furthermore, the concurrent priority queue 501 can use a sequencer, e.g. outliers_seq, to enforce a first-in-first-out (FIFO) order for the outlier list with the same QoS.
For example, the calendar ring 502 can be configured to store requests with a target response time (or QoS) below 2 seconds, since the requests with the QoS higher than 2 seconds can be considered rare. Furthermore, the requests with the QoS below 2 seconds can be placed in the calendar ring 502 that matches QoS, while the requests with the QoS higher than 2 seconds can be placed into the list of outliers 504.
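A minimal sketch of this placement rule, assuming a 2-second limit, millisecond-granularity ring slots, and a sorted map for the outlier list (the names and representations are our assumptions):

```java
import java.util.concurrent.ConcurrentSkipListMap;
import java.util.concurrent.atomic.AtomicReferenceArray;

// Sketch of the placement rule: requests whose QoS is below the
// preconfigured limit go straight into a ring slot derived from the
// target service time; slower requests go to a sorted outlier list.
public class CalendarRingSketch {
    static final long LIMIT_MS = 2000;                       // preconfigured time limit
    final AtomicReferenceArray<Object> ring =
            new AtomicReferenceArray<>((int) LIMIT_MS);      // one slot per millisecond
    final ConcurrentSkipListMap<Long, Object> outliers = new ConcurrentSkipListMap<>();

    void add(long qosMs, Object request) {
        if (qosMs < LIMIT_MS) {
            ring.set((int) (qosMs % LIMIT_MS), request);     // O(1) placement by QoS
        } else {
            outliers.put(qosMs, request);                    // slower, sorted insertion
        }
    }

    boolean isOutlier(long qosMs) { return qosMs >= LIMIT_MS; }
}
```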
Unlike the calendar queue as shown in
Using continuation-passing, the system can transform the calendar queue 501 from non-blocking to blocking. The continuation-passing 507 can enable the consumers A-C 511-513 to manage the idle workers, or Threads 530, in the thread pool 520, so that the threads 530, which may be waiting in the thread pool 520, can be reused.
Additionally, the concurrent priority queue 501 can include a sequencer 503 that enables the concurrent priority queue 501 to detect contention and can use a fast lane 505 to support cooperative concurrency. Thus, the concurrent priority queue 501 can be aware of and handle the contention properly, without a need for the locks to expose knowledge about contention.
Continuation Passing
Continuation-passing can be used to eliminate serialization points in the non-blocking queue 601. As shown in
The caller A 611 can use the callable object 631 to manage one or more idle workers, e.g. threads A-C 621-623 in the thread pool 602.
For example, there can be two different exemplary callable objects, AND_THEN and OR_ELSE, each of which can be used to manage the thread pool 602.
The callable object AND_THEN can be used to poll a thread, or idle worker, from the thread pool.
The callable object OR_ELSE can be used to park a thread in the thread pool.
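The two callable objects can be sketched as follows. The representation of the thread pool as a simple deque of parked workers is our assumption, not the patent's:

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.function.Supplier;

// Minimal sketch of the two continuations described above.
public class ContinuationSketch {
    final Deque<Thread> idleWorkers = new ArrayDeque<>();

    // AND_THEN: invoked by add( ) when the queue is empty -- polls an idle
    // worker from the pool so the producer can hand it the new request.
    final Supplier<Thread> AND_THEN = idleWorkers::poll;

    // OR_ELSE: invoked by delete_min( ) when the queue is empty -- places
    // the calling thread back into the pool before it parks.
    final Supplier<Thread> OR_ELSE = () -> {
        Thread self = Thread.currentThread();
        idleWorkers.add(self);
        return self;
    };
}
```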
Furthermore, the non-blocking queue 601 can use a synchronization block 603 to control the operations on the threads, or idle workers, in the thread pool 602, so that there are no conflicts or missed wake-ups in thread management. The synchronization block 603 can be implemented within the scope of the non-blocking queue to ensure that multiple calls can be made by different callers concurrently.
In accordance with an embodiment of the invention, the non-blocking queue 601 can provide a queue interface 605, which enables a caller, e.g. callers A-C 611-613, to pass a function, or a continuation 604, that can be called when the queue 601 is empty. For example, the contract can be that the call of the function passed to add( ) is serialized with respect to the call of the function passed to delete_min( ). Even though a serialization point is still present, it may only have impact when the queue is empty, or nearly empty. This design enables elimination of serialization in cases when the queue is not empty, which is the most important performance case. If the callers of add( ) and delete_min( ) can detect that the queue is not empty, they may not need to call the continuations and they will not need to be serialized by the queue.
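The contract described above can be sketched as follows. The signatures are our reading of the text, not the actual WebLogic API, and for brevity this sketch serializes both paths with one lock; in the design described, that serialization only has an effect when the queue is empty or nearly empty:

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.function.Supplier;

// Sketch of the queue interface contract: the continuation passed to add()
// runs only when the queue is empty, it is serialized against the
// continuation passed to deleteMin(), and its return value is passed back
// so the caller can detect that it ran.
public class BlockingByContinuation<R, W> {
    private final Deque<R> requests = new ArrayDeque<>();
    private final Object emptyCase = new Object();   // serializes the empty-queue path

    W add(R r, Supplier<W> andThen) {
        synchronized (emptyCase) {
            boolean wasEmpty = requests.isEmpty();
            requests.add(r);
            return wasEmpty ? andThen.get() : null;  // continuation runs only when empty
        }
    }

    Object deleteMin(Supplier<W> orElse) {
        synchronized (emptyCase) {
            R r = requests.poll();
            return r != null ? r : orElse.get();     // nothing to claim: run the continuation
        }
    }
}
```

Because the queue returns whatever the continuation returned, a caller can tell from the return value alone whether its continuation ran, without any external synchronization wrapper.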
Furthermore, the queue 601 can return whatever is returned by those functions, so that the callers A-C 611-613 can detect that the functions are called. Thus, there can be no need for the callers A-C 611-613 to have the synchronization wrapper from outside.
Using continuation-passing, the non-blocking queue 601 can be transformed into a blocking queue that can interact with a plurality of callers concurrently. The system enables the caller to manage the idle workers in the pool and reuse the threads waiting in the thread pool of the blocking queue.
Activating an Idle Worker in the Thread Pool
For example, the caller 811 can use a begin( ) method provided by a request manager 810 to add the request A 831 into the non-blocking queue 801.
In the begin( ) method, the request manager 810 can add the request A 831 into the non-blocking queue 801 via a function call of add( ).
When the queue 801 is empty, the request manager allows the caller to pass the request A 831, r, and a callable object, AND_THEN, in the function call. The add( ) method can return an idle worker, e.g. thread A 821, to the request manager 810. When the request manager 810 receives a reference to the idle worker A 821, the request manager 810 can proceed to activate the idle worker 821, e.g. by unparking the thread A 821 associated with the idle worker 821.
In the example as shown in
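The activation path can be sketched as follows; the class name and the pool representation are ours, and the AND_THEN continuation is inlined for clarity:

```java
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.locks.LockSupport;

// Sketch of the activation path: begin() adds the request; if the queue
// was empty, it polls an idle worker (the AND_THEN continuation) and the
// request manager unparks the returned thread.
public class ActivationSketch {
    final Queue<Runnable> queue = new ConcurrentLinkedQueue<>();
    final Queue<Thread> idleWorkers = new ConcurrentLinkedQueue<>();

    void begin(Runnable request) {
        boolean wasEmpty = queue.isEmpty();
        queue.add(request);
        if (wasEmpty) {
            Thread worker = idleWorkers.poll();   // AND_THEN: claim an idle worker
            if (worker != null) {
                LockSupport.unpark(worker);       // activate the idle worker
            }
        }
    }
}
```

Note that `LockSupport.unpark` grants a permit even if it runs before the worker parks, so the activation is not lost to a race.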
Place a Thread in the Waiter List
For example, the caller A 1011 can call an end( ) function provided by a request manager 1010 to release the thread A 1031.
Then, the request manager 1010 can make a function call of delete_min( ), which can either claim a request from the queue 1001 or place a thread back into the thread pool 1002, depending on whether the queue 1001 is empty.
As shown in
In the example as shown in
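The release path can be sketched as follows; as before, the names are ours and the OR_ELSE continuation is inlined:

```java
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.locks.LockSupport;

// Sketch of the release path: end() calls delete_min(); when the queue is
// empty, the OR_ELSE continuation places the calling thread back into the
// pool and the thread parks until a producer activates it.
public class ReleaseSketch {
    final Queue<Runnable> queue = new ConcurrentLinkedQueue<>();
    final Queue<Thread> idleWorkers = new ConcurrentLinkedQueue<>();

    Runnable end() {
        Runnable next = queue.poll();                // delete_min on a non-empty queue
        if (next == null) {
            idleWorkers.add(Thread.currentThread()); // OR_ELSE: rejoin the pool
            LockSupport.park();                      // wait until a producer unparks us
            next = queue.poll();                     // retry after being activated
        }
        return next;
    }
}
```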
The present invention may be conveniently implemented using one or more conventional general purpose or specialized digital computer, computing device, machine, or microprocessor, including one or more processors, memory and/or computer readable storage media programmed according to the teachings of the present disclosure. Appropriate software coding can readily be prepared by skilled programmers based on the teachings of the present disclosure, as will be apparent to those skilled in the software art.
In some embodiments, the present invention includes a computer program product which is a storage medium or computer readable medium (media) having instructions stored thereon/in which can be used to program a computer to perform any of the processes of the present invention. The storage medium can include, but is not limited to, any type of disk including floppy disks, optical discs, DVD, CD-ROMs, microdrive, and magneto-optical disks, ROMs, RAMs, EPROMs, EEPROMs, DRAMs, VRAMs, flash memory devices, magnetic or optical cards, nanosystems (including molecular memory ICs), or any type of media or device suitable for storing instructions and/or data.
The foregoing description of the present invention has been provided for the purposes of illustration and description. It is not intended to be exhaustive or to limit the invention to the precise forms disclosed. Many modifications and variations will be apparent to the practitioner skilled in the art. The embodiments were chosen and described in order to best explain the principles of the invention and its practical application, thereby enabling others skilled in the art to understand the invention for various embodiments and with various modifications that are suited to the particular use contemplated. It is intended that the scope of the invention be defined by the following claims and their equivalents.
Number | Name | Date | Kind |
---|---|---|---|
5109384 | Tseung | Apr 1992 | A |
6449614 | Marcotte | Sep 2002 | B1 |
6874144 | Kush | Mar 2005 | B1 |
6895590 | Yadav | May 2005 | B2 |
6938085 | Belkin et al. | Aug 2005 | B1 |
7046676 | Goetzinger et al. | May 2006 | B2 |
7554993 | Modi et al. | Jun 2009 | B2 |
7685391 | Cholleti et al. | Mar 2010 | B1 |
7761617 | Seigneret et al. | Jul 2010 | B2 |
7876677 | Cheshire | Jan 2011 | B2 |
7991904 | Melnyk et al. | Aug 2011 | B2 |
8130776 | Sundararajan | Mar 2012 | B1 |
8131860 | Wong et al. | Mar 2012 | B1 |
8255914 | Joyce et al. | Aug 2012 | B1 |
8347302 | Vincent et al. | Jan 2013 | B1 |
8504691 | Tobler et al. | Aug 2013 | B1 |
8539486 | Cain et al. | Sep 2013 | B2 |
8578033 | Mallart | Nov 2013 | B2 |
8850441 | Allen | Sep 2014 | B2 |
8863136 | Allen | Oct 2014 | B2 |
8918791 | Chudgar et al. | Dec 2014 | B1 |
8930584 | Otenko et al. | Jan 2015 | B2 |
20010034753 | Hildebrand | Oct 2001 | A1 |
20020114338 | Craig | Aug 2002 | A1 |
20020143847 | Smith | Oct 2002 | A1 |
20020174136 | Cameron et al. | Nov 2002 | A1 |
20030014480 | Pullara et al. | Jan 2003 | A1 |
20030053469 | Wentink | Mar 2003 | A1 |
20030078958 | Pace et al. | Apr 2003 | A1 |
20030081544 | Goetzinger et al. | May 2003 | A1 |
20030110232 | Chen | Jun 2003 | A1 |
20030120822 | Langrind et al. | Jun 2003 | A1 |
20040154020 | Chen | Aug 2004 | A1 |
20040177126 | Maine | Sep 2004 | A1 |
20040205771 | Sudarshan et al. | Oct 2004 | A1 |
20050021354 | Brendle et al. | Jan 2005 | A1 |
20050038801 | Colrain et al. | Feb 2005 | A1 |
20050094577 | Ashwood-Smith | May 2005 | A1 |
20050102412 | Hirsimaki | May 2005 | A1 |
20050262215 | Kirov et al. | Nov 2005 | A1 |
20050283577 | Sivaram et al. | Dec 2005 | A1 |
20060015600 | Piper | Jan 2006 | A1 |
20060015700 | Burka | Jan 2006 | A1 |
20060031846 | Jacobs et al. | Feb 2006 | A1 |
20060143525 | Kilian | Jun 2006 | A1 |
20060176884 | Fair | Aug 2006 | A1 |
20060209899 | Cucchi et al. | Sep 2006 | A1 |
20060230411 | Richter et al. | Oct 2006 | A1 |
20060294417 | Awasthi et al. | Dec 2006 | A1 |
20070118601 | Pacheco | May 2007 | A1 |
20070156869 | Galchev et al. | Jul 2007 | A1 |
20070198684 | Mizushima | Aug 2007 | A1 |
20070203944 | Batra et al. | Aug 2007 | A1 |
20070263650 | Subramania et al. | Nov 2007 | A1 |
20080044141 | Willis et al. | Feb 2008 | A1 |
20080098458 | Smith | Apr 2008 | A2 |
20080140844 | Halpern | Jun 2008 | A1 |
20080286741 | Call | Nov 2008 | A1 |
20090034537 | Colrain et al. | Feb 2009 | A1 |
20090150647 | Mejdrich et al. | Jun 2009 | A1 |
20090172636 | Griffith | Jul 2009 | A1 |
20090182642 | Sundaresan | Jul 2009 | A1 |
20090327471 | Astete et al. | Dec 2009 | A1 |
20100082855 | Accapadi et al. | Apr 2010 | A1 |
20100100889 | Labrie et al. | Apr 2010 | A1 |
20100198920 | Wong et al. | Aug 2010 | A1 |
20100199259 | Quinn | Aug 2010 | A1 |
20100278190 | Yip et al. | Nov 2010 | A1 |
20110029812 | Lu et al. | Feb 2011 | A1 |
20110055510 | Fritz et al. | Mar 2011 | A1 |
20110071981 | Ghosh et al. | Mar 2011 | A1 |
20110119673 | Bloch et al. | May 2011 | A1 |
20110153992 | Srinivas et al. | Jun 2011 | A1 |
20110161457 | Sentinelli | Jun 2011 | A1 |
20110231702 | Allen et al. | Sep 2011 | A1 |
20120023557 | Bevan | Jan 2012 | A1 |
20120054472 | Altman et al. | Mar 2012 | A1 |
20120066400 | Reynolds | Mar 2012 | A1 |
20120066460 | Bihani | Mar 2012 | A1 |
20120158684 | Lowenstein et al. | Jun 2012 | A1 |
20120218891 | Sundararajan | Aug 2012 | A1 |
20120239730 | Revanuru | Sep 2012 | A1 |
20130004002 | Duchscher | Jan 2013 | A1 |
20130132970 | Miyoshi | May 2013 | A1 |
20130145373 | Noro | Jun 2013 | A1 |
20130304848 | Lyle et al. | Nov 2013 | A1 |
Number | Date | Country |
---|---|---|
10290251 | Jan 2013 | CN |
2005128952 | May 2005 | JP |
201229897 | Jul 2012 | TW |
2012084835 | Jun 2012 | WO |
Entry |
---|
European Patent Office, International Searching Authority, International Search Report and Written Opinion dated Mar. 6, 2014 for International Application No. PCT/US2013/067106, 11 pages. |
Baldwin, Richard G., “The ByteBuffer Class in Java”, Aug. 20, 2012, 14 pages. Retrieved from : <http://www.developer.com/author/Richard-G.-Baldwin-64720.htm>. |
European Patent Office, International Searching Authority, International Search Report and Written Opinion dated Mar. 14, 2014 for International Application No. PCT/US2013/067108, 12 pages. |
Office Action issued by United States Patent and Trademark Office for U.S. Appl. No. 14/167,792, dated May 12, 2016, 9 pages. |
Takeshi Motohashi, “An Activity-Based Parallel Execution Mechanism Using Distributed Activity Queues”, Journal of Information Processing, Japan, Information Processing Society of Japan, Oct. 15, 1994, vol. 35, No. 10, pp. 2128-2137, 10 pages. |
SIPO Search Report for Chinese Application No. 201380060771.3, dated Sep. 5, 2017, 10 pages. |
SIPO Search Report for Chinese Application No. 201380060771.3, dated Sep. 13, 2017, 11 pages. |
Office Action dated Sep. 26, 2017 for Japanese Patent Application No. 2015-560169, 4 pages. |
Number | Date | Country | |
---|---|---|---|
20140245309 A1 | Aug 2014 | US |