A database is a collection of logically related data arranged in a predetermined format, such as in tables that contain rows and columns. To access the content of a table in the database, queries according to a standard database query language (such as the Structured Query Language or SQL) are submitted to the database. A query can also be issued to insert new entries into a table of a database (such as to insert a row into the table), modify the content of the table, or to delete entries from the table. Examples of SQL statements include INSERT, SELECT, UPDATE, and DELETE.
As database systems have increased in size and complexity, it has become more challenging to efficiently implement operational and management tasks in the database systems.
In general, requests to be executed in the database system are received, where a plurality of the requests are provided in a queue for later execution. Priority indicators are calculated for assignment to corresponding ones of the plurality of requests in the queue, where the priority indicators are calculated based on delay times and predefined priority levels of the requests. The requests in the queue are executed in order according to the calculated priority indicators.
Other or alternative features will become apparent from the following description, from the drawings, and from the claims.
Some embodiments are described with respect to the following figures:
A workload management subsystem according to some embodiments provides for more effective concurrency control of request execution in a database management system (or more simply “database system”). The workload management subsystem includes a regulator and a dynamic queuing mechanism. The regulator is able to regulate execution of the requests such that respective performance goals of the requests can be achieved, while the dynamic queuing mechanism is able to determine whether any given request can be immediately scheduled for execution or whether the given request should be held for later execution.
The term “request” or “database request” can refer to a database query (e.g., Structured Query Language or SQL query) that is processed by the database system to produce an output result. Alternatively, a “request” or “database request” can refer to a utility, such as a load utility to perform loading of data from a source to a target. More generally, a “request” or “database request” refers to any command or group of commands that can be submitted to the database system for performing predefined data access (read or write) tasks, or to perform creation or modifications of database structures such as tables, views, etc. A request can belong to one of multiple possible workloads in the database system.
A “workload” (or alternatively “workload group”) is a set of requests that have common characteristics, such as an application that issued the requests, a source of the requests, type of query, priority, response time goals, throughput, and so forth. A workload group is defined by a workload definition, which defines characteristics of the workload group as well as various rules associated with the workload group. A “multi-class workload” is an environment with more than one workload.
The workload groups may be divided into workload groups of different priorities. A low priority workload group may include low priority requests such as background load requests or reporting requests. Another type of workload group includes requests that have short durations but high priorities. Yet another type of workload group includes continuous or batch requests, which run for a relatively long time.
In some database systems, concurrency limits can be set to limit the number of concurrent requests that are executing inside the database systems. Limiting the number of concurrent requests that can execute in a database system based on concurrency limits is referred to as throttling. In performing throttling, different concurrency limits can be specified for respective different workload groups. Thus, the number of concurrent requests for each workload group can be monitored, and if an incoming request (of a particular workload group) would cause a respective concurrency limit to be exceeded, then the incoming request can be provided to a delay queue for later execution. Providing or storing a request in the delay queue refers to storing the entire request, a portion of the request, or a representation of the request in the delay queue.
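The throttling behavior described above can be sketched as follows. This is an illustrative sketch only: the class, method names, and the simple FIFO promotion policy are assumptions for the example, not part of any described implementation.

```python
from collections import deque

# Illustrative sketch of per-workload-group throttling; names and the FIFO
# promotion policy are hypothetical, not from any particular database system.
class WorkloadThrottle:
    def __init__(self, concurrency_limit):
        self.concurrency_limit = concurrency_limit
        self.running = 0             # count of concurrently executing requests
        self.delay_queue = deque()   # delayed requests (or representations of them)

    def submit(self, request):
        """Execute immediately if under the concurrency limit; otherwise delay."""
        if self.running < self.concurrency_limit:
            self.running += 1
            return "execute"
        self.delay_queue.append(request)
        return "delayed"

    def on_complete(self):
        """When a request finishes, promote the oldest delayed request (FIFO)."""
        self.running -= 1
        if self.delay_queue:
            self.delay_queue.popleft()
            self.running += 1
```

With a limit of 2, a third concurrent submission is delayed, and a completion promotes it for execution.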
There are various issues associated with using throttling techniques based on concurrency limits for different workload groups. The different concurrency limits may have to be set by a database administrator, which can be difficult to accurately set without a lot of trial and error, and continual manual monitoring of database system operations. Moreover, requests provided in a delay queue typically use a simple first-in-first-out (FIFO) scheduling technique to determine which delayed request is to be executed next. Although the FIFO scheduling technique promotes a basic fairness principle, strict adherence to the FIFO scheduling technique can cause certain requests in the delay queue to violate respective service level goals (SLGs) of the requests.
An “SLG” or “service level goal” refers to a predefined set of one or more performance criteria that are to be satisfied during execution of a request. The SLG can be defined by a database administrator, for example. In some examples, an SLG can be any one or more of the following: a target response time; a target throughput; an enforcement policy (specifying that some percentage of queries are to finish within some predefined amount of time); and so forth. In a more specific example, the SLG for requests of a particular workload group can be “≦1 second @ 95,” which means that each such request should execute within one second 95% of the time. Another example SLG can be “1,000 queries per hour.”
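An enforcement-policy SLG of the “≦1 second @ 95” form can be checked against observed response times as in the following sketch; the function name and inputs are hypothetical, introduced only for illustration.

```python
# Hypothetical check of an enforcement-policy SLG such as "<= 1 second @ 95":
# the SLG is met when at least the given percentage of observed response
# times fall within the target response time.
def meets_slg(response_times_s, target_s=1.0, percent=95.0):
    within = sum(1 for t in response_times_s if t <= target_s)
    return 100.0 * within / len(response_times_s) >= percent
```

For example, 19 of 20 requests finishing within one second meets the 95% policy, while 18 of 20 does not.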
An SLG can be defined for a request individually. Alternatively, an SLG can be defined for a workload group that has a number of requests.
A further limitation of typical delay queue scheduling techniques is that they fail to consider which “like” requests can be scheduled together to enhance resource sharing. For example, two requests in the delay queue may substantially share resources of a database system during execution, where one of the two requests is the next request scheduled for execution, while the second of the two requests is farther back in the delay queue. If the second request is not executed concurrently with the first request, then an opportunity for sharing of database system resources is lost. For enhanced efficiency, it may be beneficial to schedule these two requests together for execution so that substantial resource sharing can be achieved.
In accordance with some embodiments, a dynamic queuing mechanism is provided to more efficiently manage execution of requests provided in a delay queue of the workload management subsystem. An example of a workload management subsystem is depicted in
In accordance with some embodiments, rather than use a simple FIFO scheduling technique for determining which request in the delay queue 153 to next execute, the selection of the next request in the delay queue 153 for execution is based on priority indicators calculated for respective requests in the delay queue 153. The priority indicators calculated for the requests in the delay queue 153 can be based on delay times and predefined priority levels set for respective requests in the delay queue 153.
A “delay time” of a request in the delay queue 153 includes the amount of time that the respective request has spent in the delay queue 153 waiting for execution. The predefined priority level for a particular request refers to some pre-assigned priority level of the particular request. For example, the predefined priority level can be based on the SLG of the request (if the request has an SLG). Alternatively, or additionally, the predefined priority level can be a user-assigned “pecking order,” which can be assigned by a user to specify an order of priority for the particular request. In some implementations, a pecking order and/or SLG can be provided for a first class of requests, referred to as preemptive requests in some examples (discussed further below).
For a second class of requests, referred to as timeshare requests in some examples, the predefined priority level can be a deterministic value that is assigned (either automatically or manually) to a particular request of the second class to control a respective share of database system resources that can be used by the particular request during execution. A timeshare request is a request that shares database system resources with other timeshare requests. Requests of this second class are not associated with SLGs. The database system resources that can be shared by timeshare requests are those database system resources that are not used by preemptive requests. Generally, preemptive requests have higher priority than timeshare requests, such that the timeshare requests will not receive any resources that are needed by preemptive requests. Stated differently, preemptive requests will preempt resources away from timeshare requests if necessary to meet the SLGs of the preemptive requests.
The distinction between preemptive requests and timeshare requests allows for more flexible management of requests executing in the database system 100. Not all requests require an SLG. Rather than using SLGs to manage execution of timeshare requests, the deterministic values mentioned above can be assigned to timeshare requests to control relative sharing of database system resources by the timeshare requests. The deterministic values indicate the relative importance of the corresponding timeshare requests, and thus the relative amounts of database system resources that can be used by the respective timeshare requests in a shared manner.
In some implementations, the delay queues 153 depicted in
In alternative implementations, the same delay queue can be used for both preemptive and timeshare requests. Although reference is made to preemptive requests and timeshare requests in the discussion herein, it is noted that techniques and mechanisms according to some embodiments can be applied to other classes of requests.
In some implementations, the database system 100 can include multiple computer nodes 105 (just one node depicted in
Each processing module 110 manages a portion of a database that is stored in a corresponding one of the data storage facilities 120. Each data storage facility 120 includes one or more disk drives or other types of storage devices. The nodes 105 of the database system are interconnected by the network 115.
As depicted in
The node 105 also includes the parsing engine 130, which has a parser 132 and a dispatcher 134. The parser 132 receives database requests (such as those submitted by client systems 140 over a network 142, or from another source), parses each received request, and generates executable steps for the parsed request. The parser 132 includes an optimizer 136 that generates query plans (also referred to as execution plans) in response to a request, selecting the most efficient from among the plural query plans. The optimizer 136 can also produce resource estimates (e.g., time estimates or estimates of usage of various database system resources) for the query plan.
The dispatcher 134 sends the executable steps of the query plan generated by the parser 132 to one or multiple processing modules 110 in the node 105. The processing modules 110 execute the steps. If the request specifies retrieval of data from the table 125, then the retrieved data is sent back by the database system 100 to the querying client system 140 for storage or display at the client system 140 (which can be a computer, personal digital assistant, etc.). Alternatively, the request can specify a modification of the table (adding data, changing data, and/or deleting data in the table).
The dispatcher 134 includes the workload management subsystem 138 according to some embodiments. Note that parts of the workload management subsystem 138 can also be in the processing modules 110 (not depicted), since the workload management subsystem 138 also monitors execution of requests, as discussed below.
In embodiments with multiple parsing engines 130, each parsing engine can have a corresponding parser and/or workload management subsystem.
Operation of the optimizer 136 and workload management subsystem 138 is illustrated in more detail in
As shown in
The estimate of usage of the processor resource can indicate the expected number of cycles of one or more CPUs that execution of a request is expected to consume. The estimate of usage of the I/O resource can indicate the expected number of I/O accesses (e.g., read or write accesses of disk storage) that execution of the request is expected to invoke. The estimate of usage of the network resource can indicate an amount of network traffic (such as traffic between different computer nodes) that is expected in the execution of the request. The estimate of usage of memory can indicate an amount of memory to be used for storing data.
The optimizer 136 can also provide cardinality estimates. A cardinality estimate refers to an estimate of a size (e.g., number of rows) of a base table or a result table (that contains results of a database operation). Another resource estimate that can be provided by the optimizer is a spool size estimate regarding an estimated size of a spool, which is an intermediate table to store intermediate results during database operations.
The optimizer 136 can produce the estimates of processor usage, I/O usage, network usage, and memory usage based on a cost model. For example, the optimizer 136 can retrieve information relating to the processor capacity, which can be expressed in terms of millions of instructions per second (MIPS). Also, the optimizer 136, as part of its normal optimization tasks, can estimate the cardinalities of tables and intermediate spool files that are involved in execution of the request. Based on the estimated cardinalities and the processor capacity, the optimizer 136 is able to estimate the processor usage that is expected for execution of the request. The processor usage estimate can be performed on a per-step basis for each step of the query plan. Note that different steps can access different tables or different parts of tables across different access modules in the system.
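The general shape of this cost-model arithmetic can be illustrated as follows. The function name, the per-row instruction cost, and the specific numbers are assumptions for the example, not the optimizer's actual cost model.

```python
# Illustrative cost-model arithmetic (not the actual optimizer cost model):
# estimated CPU seconds for a step, derived from an estimated cardinality,
# an assumed per-row instruction cost, and processor capacity in MIPS.
def estimate_cpu_seconds(cardinality_rows, instructions_per_row, capacity_mips):
    total_instructions = cardinality_rows * instructions_per_row
    return total_instructions / (capacity_mips * 1_000_000)
```

For example, a step scanning an estimated 1,000,000 rows at an assumed 500 instructions per row on a 1,000-MIPS processor would be estimated at 0.5 CPU seconds.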
Similarly, the optimizer 136 can also retrieve information regarding memory size (size of high-speed memory that can be used to temporarily store data). Based on the memory size and the expected accesses of data in base tables and intermediate tables that will be involved in each step of a query plan, the optimizer 136 is able to estimate the expected I/O usage for each step. The I/O usage includes I/O accesses of disk storage (e.g., the number of block I/Os to read from or write to a table or index).
Moreover, the optimizer 136 is able to determine which data-storage facilities 120 store data involved in the execution of the request. For each step of the query plan, the optimizer 136 is able to estimate how much inter-processor module or inter-node traffic is expected—this will allow the optimizer 136 to estimate the network usage (usage of the network 115 of
Based on the resource estimates (response time estimate and/or processor usage, I/O usage, network usage, memory usage, table cardinality and/or spool size estimates), and/or based on other classification criteria for a respective workload, the workload management subsystem 138 assigns (at 204) the request to one of the multiple workload groups that have been defined. The assignment is based on accessing workload group rules 205 (as defined by workload definitions) to match characteristics of the request as identified by the optimizer 136 with various workload definition rules. The workload group corresponding to the workload definition rules most closely matching the characteristics of the request is identified, where the incoming request is assigned to the identified workload group.
Next, the regulator 150 of the workload management subsystem 138 performs request scheduling (at 206), where the regulator 150 determines whether an incoming request is to be immediately scheduled for execution or whether the incoming request should be held for later execution. In some examples, as part of the request scheduling performed at 206, the regulator 150 can also consider concurrency limits (the maximum number of concurrent executing requests from each workload group). The regulator 150 monitors the number of concurrent executing requests of each workload group, and if executing the incoming request would cause the concurrency limit of the corresponding workload group (that the incoming request is assigned to) to be exceeded, which means that there are too many concurrent executing requests for this workload group, then the incoming request for that workload group waits in a delay queue 153 for later execution. In other example implementations, concurrency limits can be omitted.
The request scheduling 206 depicted in
A request that is scheduled for execution (either a request that can be scheduled for immediate execution or a request that has been retrieved from a delay queue 153) is placed (at 208) by the regulator 150 into one of multiple workload group buckets 210 (as defined by corresponding workload definitions). The “buckets” 210 can be execution queues that contain requests scheduled for execution.
Next, the regulator 150 performs SLG-responsive regulation (at 212) at the request level. The regulator 150 selects a request from one of the buckets 210, in an order determined by priorities associated with the workload groups, and executes the selected request.
In accordance with some implementations, the SLG-responsive regulation task 212 performed by the regulator 150 includes adjusting priority settings for an individual request to allow the request to meet its respective SLG. In other implementations, the SLG-responsive regulation task 212 is also able to recalibrate resource estimates. Initial estimates are provided by the optimizer 136 as part of its optimization tasks. During execution of a request, the regulator 150 can determine that the resource estimates from the optimizer 136 are no longer accurate, in which case the regulator 150 is able to adjust the resource estimates based on the monitored progress of the execution of the request.
The resource estimates can be adjusted (on a continual basis) during execution of various steps of an execution plan corresponding to the request.
As depicted in
The SLG-responsive resource monitor 216 is able to consider, at each step of the execution plan associated with the request, whether the progress information for execution of the request so far (received from the SLG-responsive regulation task 212) is consistent with the current resource estimates provided for the respective steps of the query plan. The progress information 215 can indicate whether the current resource estimates are inadequate (actual usage exceeds estimated usage) or excessive (actual usage is less than estimated usage). If comparing the progress information 215 to the current resource estimates indicates that recalibration is needed, recalibration of the resource estimates can be performed.
Based on the recalibrated resource estimates, the SLG-responsive resource monitor 216 can provide priority adjustments (218) to the SLG-responsive regulation task 212. In response, the SLG-responsive regulation task 212 adjusts the priority settings for the remaining steps of the execution plan.
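A minimal sketch of this feedback loop follows. The tolerance value, the scaling rule, and the function name are illustrative assumptions, not the subsystem's actual recalibration algorithm: when actual usage for a completed step diverges from its estimate beyond the tolerance, the remaining-step estimates are scaled by the observed ratio and a priority-adjustment direction is reported.

```python
# Hypothetical recalibration sketch; the tolerance and the proportional
# scaling rule are illustrative assumptions.
def recalibrate(estimated, actual, remaining_estimates, tolerance=0.25):
    ratio = actual / estimated
    if abs(ratio - 1.0) <= tolerance:
        return remaining_estimates, "no-change"
    # Actual usage diverges materially from the estimate: scale the
    # remaining-step estimates and adjust priority in the needed direction.
    adjusted = [e * ratio for e in remaining_estimates]
    direction = "raise-priority" if ratio > 1.0 else "lower-priority"
    return adjusted, direction
```

For instance, a step that consumed twice its estimated resources doubles the remaining estimates and signals a priority increase, while a small divergence leaves the estimates untouched.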
As noted above, requests in a delay queue 153 are sorted according to calculated priority indicators. The priority indicators can be calculated based on delay times and predefined priority levels for respective requests in the delay queue 153. The calculation of the priority indicators for requests in the delay queue 153 is explained in the context of
In the example of
In some embodiments, the calculation of priority indicators for preemptive requests is different from the calculation of priority indicators for timeshare requests.
In some implementations, the priority indicator (represented as “Query Priority” in the example equations below) calculated for each preemptive request (in the delay queue 153A, for example) can be as follows:
Urgency Factor=(Delay Time+Estimated Execution Time)/Service Level Goal,
Query Priority=Delay Time*Urgency Factor/Pecking Order,
where Delay Time is the amount of time the request has spent in the delay queue, Estimated Execution Time is the execution time of the request as estimated by the optimizer, Service Level Goal is the SLG target of the request, and Pecking Order is the user-assigned priority order (a lower value indicating higher priority).
A higher value of Query Priority indicates a higher priority. In other examples, if Pecking Order is provided as a dividend rather than a divisor, then a lower value of Query Priority would indicate a higher priority.
Thus, as depicted above, the calculation of a priority indicator for a preemptive request in the delay queue 153A is based on the delay time associated with the request in the delay queue 153A, as well as on SLG and pecking order information, which can individually or collectively be considered a predefined priority level. By calculating priority indicators that take into account a predefined priority level as well as the delay time of each request, the scheduling of requests in the delay queue 153A for execution is more likely to result in the requests being able to meet their respective SLGs.
The priority indicators (“Query Priority”) for timeshare requests (in the delay queue 153B, for example) can be calculated according to the following:
Query Priority=Delay Time/DetLev,
where DetLev represents the deterministic level (e.g., D1-D4 in
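The two formulas above can be transcribed directly as follows; all time quantities (delay time, estimated execution time, SLG) are assumed to be expressed in the same units.

```python
# Direct transcription of the two priority-indicator formulas above.
def preemptive_priority(delay_time, est_exec_time, slg, pecking_order):
    # Urgency Factor = (Delay Time + Estimated Execution Time) / Service Level Goal
    urgency_factor = (delay_time + est_exec_time) / slg
    # Query Priority = Delay Time * Urgency Factor / Pecking Order
    return delay_time * urgency_factor / pecking_order

def timeshare_priority(delay_time, det_level):
    # Query Priority = Delay Time / DetLev
    return delay_time / det_level
```

With these definitions, a longer wait or a tighter SLG raises a preemptive request's priority indicator, and for a given delay time a smaller deterministic level yields a larger timeshare priority indicator.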
The following provides more specific examples of calculating priority indicators for preemptive requests. Assuming there are five preemptive requests (Delayed Request #1, Delayed Request #2, Delayed Request #3, Delayed Request #4, Delayed Request #5) in the delay queue 153A, the following provides an example of calculation of priority indicators for these five requests:
Delayed Request #1:
Delay Time=50,000
Estimated Execution Time=20,000
SLG=100,000
Urgency Factor=(50,000+20,000)/100,000=0.70
Pecking Order=1
Query Priority=(0.70*50,000)/1=35,000.
Delayed Request #2:
Delay Time=80,000
Estimated Execution Time=40,000
SLG=100,000
Urgency Factor=(80,000+40,000)/100,000=1.2
Pecking Order=1
Query Priority=(1.2*80,000)/1=96,000.
Delayed Request #3:
Delay Time=5,000
Estimated Execution Time=5,000
SLG=100,000
Urgency Factor=(5,000+5,000)/100,000=0.1
Pecking Order=1
Query Priority=(0.1*5,000)/1=500.
Delayed Request #4:
Delay Time=5,000
Estimated Execution Time=10,000
SLG=40,000
Urgency Factor=(5,000+10,000)/40,000=0.375
Pecking Order=3
Query Priority=(0.375*5,000)/3=625.
Delayed Request #5:
Delay Time=90,000
Estimated Execution Time=40,000
SLG=100,000
Urgency Factor=(90,000+40,000)/100,000=1.3
Pecking Order=3
Query Priority=(1.3*90,000)/3=39,000.
Based on the foregoing priority indicators calculated for the five requests, the order of priority of the delayed requests is as follows (from highest priority to lowest priority): Delayed Request #2, Delayed Request #5, Delayed Request #1, Delayed Request #4, and Delayed Request #3. This is the order in which the five requests will be extracted from the delay queue 153A for execution.
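The worked example above can be verified with a short script that computes each Query Priority and sorts the five requests by it:

```python
# Reproducing the five worked examples above: each tuple is (delay time,
# estimated execution time, SLG, pecking order) for Delayed Requests #1-#5.
requests = {
    1: (50_000, 20_000, 100_000, 1),
    2: (80_000, 40_000, 100_000, 1),
    3: (5_000, 5_000, 100_000, 1),
    4: (5_000, 10_000, 40_000, 3),
    5: (90_000, 40_000, 100_000, 3),
}

def query_priority(delay, est_exec, slg, pecking):
    # Query Priority = Delay Time * ((Delay Time + Est. Exec. Time) / SLG) / Pecking Order
    return delay * ((delay + est_exec) / slg) / pecking

order = sorted(requests, key=lambda r: query_priority(*requests[r]), reverse=True)
# order == [2, 5, 1, 4, 3], matching the extraction order stated in the text.
```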
In addition to providing more intelligent prioritization of requests in one or more delay queues (e.g., 153A and 153B in
Note that the soft throttling mechanism does not change the priority or ordering of the requests within the delay queue. The soft throttling mechanism merely specifies that if the next request taken from the delay queue (the one with the highest priority) shares usage of resources with another lower priority request in the delay queue by greater than some predefined threshold, then the lower priority request in the delay queue can also be selected for execution from the delay queue.
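One way to sketch this soft-throttling selection is shown below, under the illustrative assumption that resource sharing between two requests is measured as the overlap between the sets of tables each request accesses; both the overlap measure and the threshold are hypothetical.

```python
# Hypothetical soft-throttling sketch: resource sharing is modeled as the
# Jaccard overlap between the table sets of two requests. The measure and
# the threshold are illustrative assumptions.
def shared_fraction(tables_a, tables_b):
    return len(tables_a & tables_b) / len(tables_a | tables_b)

def select_for_execution(queue, threshold=0.5):
    """Pick the highest-priority (head) request, plus any lower-priority
    request whose resource sharing with the head exceeds the threshold.
    The priority ordering of the queue itself is left unchanged."""
    head = queue[0]
    picked = [head]
    for req in queue[1:]:
        if shared_fraction(head["tables"], req["tables"]) > threshold:
            picked.append(req)
    return picked
```

For example, if the head request and a lower-priority request access mostly the same tables, both are selected, while a request touching unrelated tables remains queued.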
By using the workload management subsystem 138 according to some embodiments that provides for more intelligent prioritization and scheduling of requests in a delay queue, database system operations can be enhanced since it is more likely that requests in the delay queue will be able to meet their respective SLGs. Also, sharing of system resources can be enhanced.
Machine-readable instructions, such as instructions of the workload management subsystem 138 and other modules depicted in
Data and instructions are stored in respective storage devices, which are implemented as one or more computer-readable or machine-readable storage media. The storage media include different forms of memory including semiconductor memory devices such as dynamic or static random access memories (DRAMs or SRAMs), erasable and programmable read-only memories (EPROMs), electrically erasable and programmable read-only memories (EEPROMs) and flash memories; magnetic disks such as fixed, floppy and removable disks; other magnetic media including tape; optical media such as compact disks (CDs) or digital video disks (DVDs); or other types of storage devices. Note that the instructions discussed above can be provided on one computer-readable or machine-readable storage medium, or alternatively, can be provided on multiple computer-readable or machine-readable storage media distributed in a large system having possibly plural nodes. Such computer-readable or machine-readable storage medium or media is (are) considered to be part of an article (or article of manufacture). An article or article of manufacture can refer to any manufactured single component or multiple components. The storage medium or media can be located either in the machine running the machine-readable instructions, or located at a remote site from which machine-readable instructions can be downloaded over a network for execution.
In the foregoing description, numerous details are set forth to provide an understanding of the subject disclosed herein. However, implementations may be practiced without some or all of these details. Other implementations may include modifications and variations from the details discussed above. It is intended that the appended claims cover such modifications and variations.