Information
- Patent Grant
- Patent Number: 6,438,650
- Date Filed: Wednesday, December 16, 1998
- Date Issued: Tuesday, August 20, 2002
- Field of Search (US): 711/140, 141, 146, 144, 100, 118, 154
Abstract
A system for processing cache misses includes a request miss buffer, secondary miss logic, and a request identifier buffer. When a request misses in a cache, information characterizing the request is provided to the request miss buffer and the secondary miss logic. The secondary miss logic determines whether the request may be merged with a pending bus transaction, and provides the request identifier buffer with a pointer to the request information. The pointer is stored at an entry associated with the pending bus transaction. For a load request, data returned by the bus transaction is routed to a targeted register, using the request information in the request miss buffer.
Description
BACKGROUND OF THE INVENTION
1. Technical Field
The present invention relates to systems and methods for bus transactions, and in particular, to systems and methods for handling bus transactions to a memory system.
2. Background Art
Modern high-performance processors are designed to execute multiple instructions on each clock cycle. To this end, they typically include extensive execution resources to facilitate parallel processing of the instructions. To be effective, the execution resources must receive data and instructions at a rate comparable to the rate at which the processor executes instructions. It is the function of the memory system to keep the processor's execution resources supplied with data and instructions.
The memory system typically includes a hierarchy of caches, e.g. L0, L1, L2 . . . , and a main memory. The storage capacities of the caches generally increase from L0 to L2, et seq., as does the time required by succeeding caches in the hierarchy to return data to the processor. For example, an L0 cache may return data in 1 or 2 clock cycles, an L1 cache may return data in 4 to 8 clock cycles, and an L2 cache may return data in 10 or more clock cycles. A data request propagates through the cache hierarchy, beginning with the smallest, fastest structure, until the data is located or the caches are exhausted. In the latter case, the requested data is returned from main memory. The latency for a transaction to main memory can be on the order of 100-200 clock cycles.
Even with the best system designs, cache misses occur and data must be retrieved from main memory. The significant penalty for cache misses (100-200 clock cycles) places a premium on handling transactions to main memory efficiently. For example, when a request to load an operand misses in a cache, the operand is typically returned to the cache along with data from adjacent memory addresses. Enough data is returned to fill one or more “lines” of the cache, i.e. one or more cache lines. The spatially local nature of most programs means that the data from adjacent memory addresses is likely to be requested as well. If the data from an adjacent address is requested before the cache line is returned, multiple bus transactions may be generated to the same cache line in memory.
Where there are multiple misses to a single cache line, it is inefficient to generate a separate cache request for each miss. Separate requests consume bandwidth on the memory bus, and if they target the same location in memory, they cannot be pipelined without adding transaction management hardware. On the other hand, combining multiple cache misses that target data in the same cache line into a single bus transaction places demands on the bus controller. For example, the target bytes within the cache line need to be identified along with the register for which they are destined. The signal lines necessary to characterize the requests for the bus controller consume area on the silicon die, and provide additional constraints on signal routing.
The present invention addresses these and other problems associated with combining multiple data requests into a single bus transaction.
SUMMARY OF THE INVENTION
A system and method are provided for efficiently processing a bus transaction generated by a cache miss.
On a cache miss, information on the data request that missed in the cache is mapped to a bus transaction. The information is stored and a pointer to the stored information is forwarded to a bus controller for servicing by the mapped bus transaction.
For one embodiment of the invention, the data request information is provided to a load miss buffer for storage and to a secondary miss system for mapping. The secondary miss system provides the bus controller with a pointer to the request information. The pointer is stored in an external bus logic buffer, at an entry associated with the mapped bus transaction.
BRIEF DESCRIPTION OF THE DRAWINGS
The present invention may be understood with reference to the following drawings, in which like elements are indicated by like numbers. These drawings are provided to illustrate selected embodiments of the present invention and are not intended to limit the scope of the invention.
FIG. 1 is a block diagram of a computer system that is suitable for implementing the present invention.
FIG. 2 is a detailed block diagram of one embodiment of the memory system of FIG. 1.
FIG. 3 is a block diagram of one embodiment of the load miss buffer of FIG. 2.
FIGS. 4A and 4B are block diagrams of embodiments of the bus request queue and LMB_ID buffers of FIG. 2.
FIGS. 5A and 5B are schematic representations of pipelines associated with one embodiment of the secondary miss system of FIG. 2.
FIG. 6 is a flowchart of a method in accordance with the present invention for processing memory requests.
DETAILED DISCUSSION OF THE INVENTION
The following discussion sets forth numerous specific details to provide a thorough understanding of the invention. However, those of ordinary skill in the art, having the benefit of this disclosure, will appreciate that the invention may be practiced without these specific details. In addition, various well-known methods, procedures, components, and circuits have not been described in detail in order to focus attention on the features of the present invention.
The present invention provides a system and method for efficiently processing transaction requests (“requests”) to a system or main memory. On a cache miss, a cache controller stores information characterizing the request (request information) in a buffer. The request information is also provided to a secondary miss system, which identifies a bus transaction to service the request. The secondary miss system provides a bus controller with a pointer to the stored request information, and the pointer is associated with the identified bus request. Providing a pointer to the request information rather than the information itself reduces the number of signals that are provided to the bus controller. The identified bus transaction may be a pending transaction triggered by an earlier cache miss to the same cache line targeted by the current request. In this case, the current request is mapped to the pending bus transaction, which processes requests for multiple cache misses, reducing the amount of traffic on the system bus.
When the request is a load, load data is returned by the bus transaction. The returned load data is forwarded to the cache controller, along with pointers to request information for each request serviced by the bus transaction. The cache controller uses the stored request information to route the returned data to appropriate destination register(s). For one embodiment of the invention, a store request that misses in the cache(s) also triggers a bus transaction that returns data, e.g. one or more cache lines, from the memory. Data specified by the store request is merged into the returned cache line, and the cache(s) is updated with the merged data.
FIG. 1 is a block diagram of one embodiment of a system 100 that is suitable for implementing the present invention. System 100 includes a processor 102, main memory 170, one or more peripheral device(s) 106, graphics system 108, and system logic 104. Processor 102, main memory 170, graphics system 108, and peripheral device(s) 106 are coupled to system logic 104 through a system or front side bus (FSB) 103, a memory bus 105, a graphics port 107, and a peripheral bus 109, respectively. System logic 104 manages the transfer of data among the various components of system 100.
For one embodiment of system 100, graphics port 107 may be an advanced graphics port (AGP). Peripheral bus 109 may be, for example, one or more peripheral component interface (PCI), industry standard architecture (ISA), extended ISA (EISA), or comparable buses. While system 100 is shown with a single processor 102, the present invention may be implemented in multi-processor systems as well.
The disclosed embodiment of processor 102 includes execution resources 110, a first cache (L0) 120, a second cache (L1) 130, a cache controller 150, and a bus controller 160. The disclosed embodiment also includes an off-chip cache (L2) 140 that is coupled to processor 102 through a back side bus (BSB) 144. Bus controller 160 manages the flow of data between processor 102 and L2 cache 140 and between processor 102 and main memory 170. L0 cache 120, L1 cache 130, L2 cache 140, cache controller 150, bus controller 160, and main memory 170 form a memory system that provides data and instructions to processor 102. In the disclosed embodiment, L0 and L1 caches 120 and 130, respectively, are located “on-chip”, e.g., with execution resources 110, while L2 cache 140 is located “off-chip”. For other embodiments of system 100, L2 cache 140 may be located on-chip, and additional or fewer levels of cache may be included in the memory system. The present invention does not depend on these details of system 100.
During operation, execution resources 110 implement instructions that are provided by various components of the memory system. The instructions operate on data (operands) that are provided from a register file 114 or bypassed to execution resources 110 from various components of the memory system. Operand data is transferred to and from register file 114 through load and store instructions, respectively. A load instruction may be implemented in one or two clock cycles if the data is available in L0 cache 120. If the data request misses in L0 cache 120, it is forwarded to successive cache structures in the memory system until the data is located. For example, a cache miss in L0 cache 120 triggers a request to L1 cache 130. A miss there triggers a request to L2 cache 140, and a miss there triggers a request to main memory 170. In some instances, a request to a higher level structure in the memory hierarchy may be initiated before a miss in a lower level structure is detected. This compensates for the greater latency of the higher order memory structures.
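The cascade just described is, in effect, a sequential lookup. The C sketch below is illustrative only, with hypothetical function names and stubbed probes; a loop stands in for hardware that, as just noted, may overlap requests to higher levels of the hierarchy.

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Stand-ins for the L0 (120), L1 (130), and L2 (140) tag lookups; a real
 * implementation would probe the cache tag arrays. Stubbed to always miss. */
static bool probe_cache(int level, uint64_t addr)
{
    (void)level; (void)addr;
    return false;
}

static void read_main_memory(uint64_t addr)
{
    printf("request to main memory 170 for %#llx\n", (unsigned long long)addr);
}

/* A request propagates through successive cache structures until the data
 * is located; if all caches miss, it goes to main memory. */
static void service_request(uint64_t addr)
{
    for (int level = 0; level < 3; level++)   /* L0, L1, L2 */
        if (probe_cache(level, addr))
            return;                           /* hit: data returned to core */
    read_main_memory(addr);
}

int main(void)
{
    service_request(0x1000);
    return 0;
}
```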
For superscalar processors, execution resources 110 may include multiple execution units for each of the different types of operations. For example, execution resources 110 may include multiple integer, floating point, and multi-media execution units, which can operate in parallel. Operand data for these execution units may be coupled from/to the memory system by one or more load/store execution units. Consequently, even when a cache miss delays processing in one execution unit, independent instructions processed by different execution units may continue. At any given time, multiple cache misses may be outstanding.
FIG. 2 is a more detailed block level diagram of one embodiment of the memory system shown in FIG. 1. In the disclosed embodiment, L0 cache 120 includes separate data and instruction caches 210, 220, respectively, although this is not required for the present invention. A buffer 230 receives transaction requests that miss in L0 cache 120 and provides them to L1 cache 130 on a first-in, first-out (FIFO) basis. FIFO 230 provides request information to a load miss buffer (LMB) 240 and a store data buffer (SDB) 244 for load and store requests, respectively, that miss in L1 cache 130. In addition, request information is provided to secondary miss system 250, which determines whether the request can be processed by bus controller 160. For one embodiment of the memory system, an incoming request may be coalesced with a pending bus transaction, it may trigger a new bus transaction, or it may be returned to FIFO 230 for processing at a later time. For the disclosed embodiment, secondary miss system 250 determines which alternative is implemented and provides the request information to bus controller 160 accordingly.
For the disclosed embodiment of the memory system, bus controller 160 includes backside bus logic (BBL) 280 and external bus logic (EBL) 290 to implement bus transactions on BSB 144 and FSB 103, respectively. Bus controller 160 also includes a bus request queue (BRQ) 260 and an LMB_ID buffer (LIB) 270 to track these bus transactions. For one embodiment of bus controller 160, BRQ 260 includes an entry for each bus transaction, in which the memory address and status of the transaction are tracked, and LIB 270 includes an entry for each bus transaction that indicates the requests serviced by the bus transaction. The requests serviced by the transaction may be indicated by a pointer to a corresponding entry in LMB 240 or SDB 244.
FIFO 230 receives a transaction request when the request misses in a lower level cache, e.g. L0D 210 in the disclosed embodiment. The transaction request specifies sufficient information to complete the requested transaction. This information may include, for example, the type and target address of the transaction and the size and memory attribute(s) of the data being transferred. For a load transaction, the register to which the data will be returned is also specified. The address of the targeted data (target address) may be specified by the address of the cache line that contains the data and an offset to the specific byte(s) of the cache line that are sought.
An exemplary load may target eight bytes for a specified register in register file 114. If the data is unavailable in L0 cache 120, one embodiment of the memory system returns 32 bytes of data, e.g. an L0 cache line, to L0 cache 120. The data may be returned from L1 cache 130, L2 cache 140, or main memory 170, depending on where the targeted data is found. Data is ultimately provided from memory in block sizes of up to 64 bytes. An offset specified in the transaction request indicates which bytes are returned to the specified register. As discussed in greater detail below, the remaining bytes returned by the bus transaction may be earmarked for registers specified by different transaction requests.
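The address split implied by this scheme is simple masking arithmetic. Below is a minimal sketch, assuming the 64-byte memory block of the example above; the names echo the MEM_ADD and BYTE_EN fields described later in the text, but the helpers themselves are invented for illustration.

```c
#include <stdint.h>

#define MEM_BLOCK_BYTES 64u  /* maximum block returned from memory */

/* MEM_ADD-style line address: first byte of the block containing addr. */
static inline uint64_t line_address(uint64_t addr)
{
    return addr & ~(uint64_t)(MEM_BLOCK_BYTES - 1);
}

/* BYTE_EN-style offset: which bytes of the returned block are targeted. */
static inline unsigned byte_offset(uint64_t addr)
{
    return (unsigned)(addr & (MEM_BLOCK_BYTES - 1));
}

/* Two requests can share one bus transaction when their line addresses
 * match but their offsets differ. */
static inline int same_line(uint64_t a, uint64_t b)
{
    return line_address(a) == line_address(b);
}
```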
FIFO 230 provides request information to secondary miss system 250, which determines how the transaction request is handled. For example, secondary miss system 250 determines whether a pending bus transaction tracked by bus controller 160 can service the transaction request. One embodiment of secondary miss system 250 checks BRQ 260 to determine whether there is a pending bus transaction that has the same type and target address as the transaction request. Provided the bus transaction has not returned the first block of data targeted by the transaction, the request may be merged with the pending transaction.
Additional criteria may also be used to determine whether to coalesce the transaction request with the pending bus transaction. If the criteria are met, secondary miss system 250 provides bus controller 160 with a pointer to the request information. Bus controller 160 stores the pointer in an entry of LIB 270 that is associated with the pending bus transaction. Providing a pointer to the request information instead of the request information itself reduces the number of signals that are routed to bus controller 160. If there is no pending bus transaction of the correct type or memory address, a new bus transaction may be scheduled by bus controller 160. If the pending bus transaction is present, but the other criteria are not met, the transaction request may be returned to FIFO 230 for subsequent processing.
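Putting the three outcomes together, the secondary miss decision reduces to the skeleton below. This is a sketch under assumed names (the stubs stand in for the BRQ 260 lookup and the merge criteria); the actual accept/reject criteria are the subject of Tables 1 and 2 later in the text.

```c
#include <stdbool.h>
#include <stdint.h>

/* The three dispositions of a transaction request, as described above. */
enum miss_action {
    MERGE_WITH_PENDING,   /* pointer stored in the LIB 270 entry of the hit */
    NEW_BUS_TRANSACTION,  /* no pending transaction of matching type/address */
    RETRY_LATER           /* hit, but criteria unmet: back to FIFO 230 */
};

/* Stubs standing in for the BRQ 260 lookup and the additional merge
 * criteria (summarized later in Tables 1 and 2). */
static bool brq_has_match(uint64_t line_addr, int req_type)
{
    (void)line_addr; (void)req_type;
    return false;
}

static bool merge_criteria_met(uint64_t line_addr)
{
    (void)line_addr;
    return true;
}

static enum miss_action decide(uint64_t line_addr, int req_type)
{
    if (!brq_has_match(line_addr, req_type))
        return NEW_BUS_TRANSACTION;
    return merge_criteria_met(line_addr) ? MERGE_WITH_PENDING : RETRY_LATER;
}
```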
FIG. 3 shows one embodiment of LMB 240. The disclosed embodiment of LMB 240 comprises multiple entries 310. Each entry includes fields 314(1)-314(3) in which information that characterizes a corresponding request (request information or REQ_INFO) may be stored. A load identifier (LOAD_ID) is associated with each entry to simplify tracking. For one embodiment of LMB 240, each entry 310 includes REG_ID, DATA_SIZE, and ENDIAN fields 314. For example, REG_ID specifies a register in register file 114 for which the requested data is destined, DATA_SIZE specifies the size of the data block in the request, and ENDIAN indicates how the data is stored in a register or memory location, e.g. big endian or little endian.
Additional information for fully characterizing a transaction request includes REQ_TYP, MEM_ATT, MEM_ADD, and BYTE_EN. REQ_TYP indicates the type of request, e.g. load or store request, MEM_ATT indicates the type of data, MEM_ADD indicates the memory address of the data block, and BYTE_EN indicates which byte(s) of data in the block is targeted. For one embodiment, MEM_ADD points to the address in main memory 170 of the first byte in a (cache) line of data, and BYTE_EN provides an offset indicating which byte in the indicated cache line is targeted by the request. MEM_ATT indicates various attributes of the stored data, such as whether the data is uncacheable (UC), uncacheable coherent (UCC), write-back (WB), write-through (WT), write-protected (WP), or write-combining (WC) data.
LOAD_IDs may be specified explicitly or implicitly. In the first case, LMB 240 includes a separate field in which the LOAD_ID is stored. In the second case, the LOAD_ID is implicit in the order in which entries 310 appear in LMB 240. The latter approach has the advantage of requiring less die area for LMB 240 than the former approach. SDB 244 is similar to LMB 240, with an entry, indicated by a STORE_ID, for each outstanding store request.
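A software model of such an entry might look like the struct below: a sketch only, with field widths and the entry count invented for illustration, and the LOAD_ID carried implicitly as the array index per the second scheme.

```c
#include <stdint.h>

/* Sketch of one LMB entry 310. Field names follow the text; widths and
 * the entry count are assumptions. */
enum req_typ  { REQ_LOAD, REQ_STORE };                      /* REQ_TYP */
enum mem_att  { UC, UCC, WB, WT, WP, WC };                  /* MEM_ATT */
enum endian_t { END_LITTLE, END_BIG };                      /* ENDIAN  */

struct lmb_entry {
    uint8_t  reg_id;     /* REG_ID: destination register in register file 114 */
    uint8_t  data_size;  /* DATA_SIZE: size of the requested data block       */
    uint8_t  endian;     /* ENDIAN: big or little endian                      */
    uint8_t  req_typ;    /* REQ_TYP: load or store                            */
    uint8_t  mem_att;    /* MEM_ATT: UC, UCC, WB, WT, WP, or WC               */
    uint64_t mem_add;    /* MEM_ADD: address of the first byte of the line    */
    uint8_t  byte_en;    /* BYTE_EN: offset of the targeted byte(s) in line   */
};

/* With implicit LOAD_IDs, the ID is simply the index into this array;
 * no separate LOAD_ID field is stored, saving die area. */
struct lmb_entry lmb[16];   /* 16 outstanding loads is an assumption */
```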
FIG. 4A shows one embodiment of BRQ 260 for tracking the status of bus transactions on BSB 144 and FSB 103. BRQ 260 includes an entry 410 for each pending bus transaction. Each entry has associated fields 414 to indicate a target memory address (MEM_ADD) of the request, the targeted bytes (BYTE_EN), and the validity (VALID) and status (STATUS) of the request. VALID indicates whether the transaction corresponding to the entry is a valid bus request, and STATUS indicates whether the transaction is on FSB 103 or BSB 144. For example, STATUS may include a first bit that is set to indicate that the corresponding transaction is slated for FSB 103 and a second bit that is set to indicate that the transaction is waiting for access to FSB 103. For one embodiment, an entry 410 is initialized in BRQ 260 when a bus request is initiated on BSB 144, with MEM_ADD provided by secondary miss system 250.
FIG. 4B shows one embodiment of LIB 270, which represents the mapping of LOAD_IDs (STORE_IDs) to bus transactions. LIB 270 has an entry 450 for each bus request supported by BRQ 260. Each entry 450 includes a LOAD_ID for any transaction request that it services. For example, where a bus cycle can return two 32 byte blocks of data (64 bytes), up to four 8 byte transaction requests may be serviced by a single bus transaction. LIB 270 provides a mapping between the bus transaction and one or more requests. LMB 240 provides a mapping between the load request and a corresponding target register. Since bus transactions do not require register level information to be properly implemented, this information is not communicated to bus controller 160, simplifying the signal routing.
The disclosed embodiment of LIB 270 has entries 450 configured in the same order as entries 410 in BRQ 260, with each entry 450 identifying up to, e.g., 8 LOAD_IDs. The common ordering of entries in BRQ 260 and LIB 270 eliminates the need to index entries 450 to entries 410.
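The common ordering is the whole indexing scheme: position, not a stored key, ties a LIB entry to its BRQ entry. A sketch in C follows; queue depth and the unused-slot sentinel are assumptions.

```c
#include <stdbool.h>
#include <stdint.h>

#define NUM_BUS_TXNS     8   /* queue depth: an assumption                  */
#define MAX_IDS_PER_TXN  8   /* "up to, e.g., 8 LOAD_IDs" per LIB entry 450 */

/* BRQ entry 410: target address, targeted bytes, validity, and status. */
struct brq_entry {
    uint64_t mem_add;         /* MEM_ADD: target memory address            */
    uint8_t  byte_en;         /* BYTE_EN: targeted bytes                   */
    bool     valid;           /* VALID: entry is a valid bus request       */
    bool     slated_for_fsb;  /* STATUS bit: slated for FSB 103            */
    bool     waiting_for_fsb; /* STATUS bit: waiting for access to FSB 103 */
};

/* LIB entry 450: LOAD_IDs of the requests the transaction services. */
struct lib_entry {
    int load_id[MAX_IDS_PER_TXN];  /* -1 marks an unused slot (assumption) */
};

/* Keeping the two arrays in the same order means index i names the same
 * bus transaction in both, so no explicit cross-index is stored. */
struct brq_entry brq[NUM_BUS_TXNS];
struct lib_entry lib[NUM_BUS_TXNS];
```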
For cacheable data, store transactions behave similarly to load transactions on FSB 103. For example, on a store miss, the data to be stored may be held by the cache controller while the cache line that includes the address of the store data is returned from memory.
FIG. 5A shows one embodiment of a pipeline 510 through which the various components of secondary miss system 250 process transaction requests for a dual-ported L1 cache 130. The disclosed embodiment supports two ports for handling transaction requests, but this is not necessary for the present invention. Pipeline 510 is shown divided into pipe stages 512, 514, 516 to indicate when different operations associated with a request are processed. The disclosed embodiment of secondary miss system 250 includes a BRQ CAM 520 and comparators 530 that operate on a given request in pipe stage 512. The outputs of these devices are provided to secondary miss logic 540, which processes the request in pipe stage 514. Depending on the signals input to secondary miss logic 540, selected portions of the request information may be written to BRQ 260 and LIB 270 in pipe stage 516.
For one embodiment of secondary miss module 250, BRQ CAM 520 is a content addressable memory. The output of BRQ CAM 520 points to an entry in BRQ 260 and LIB 270 when a memory address applied to its input matches a target address for which a bus transaction has already been requested. BRQ CAM 520 thus determines whether a transaction request in pipe stage 512 hits a pending bus transaction. This information is provided to secondary miss logic 540 for further analysis, as described below.
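Functionally, the CAM is an associative match against every valid BRQ entry. The sketch below models it with the brq[] array from the earlier sketch; the loop is a software stand-in for a compare that hardware performs in parallel.

```c
#include <stdint.h>

/* Behavioral sketch of BRQ CAM 520, reusing the brq[] array and
 * NUM_BUS_TXNS from the sketch above. Returns the index of the matching
 * pending bus transaction, or -1 when there is no hit. */
int brq_cam_lookup(uint64_t line_addr)
{
    for (int i = 0; i < NUM_BUS_TXNS; i++)
        if (brq[i].valid && brq[i].mem_add == line_addr)
            return i;   /* hit: same entry index is used in BRQ and LIB */
    return -1;          /* miss: a new bus transaction may be scheduled */
}
```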
Comparators 530 are provided to determine whether the request in pipe stage 512 hits on any “in-flight” requests, e.g. requests in pipe stages 514 and 516. For example, comparators 530(a) and 530(d) compare the target address of the port 0 request in pipe stage 512 with those of any port 0 requests in pipe stages 514 and 516, respectively. Similarly, comparators 530(b) and 530(c) compare the target address of the port 0 request in pipe stage 512 with those of any port 1 requests in pipe stages 514 and 516. The results of comparators 530 and BRQ CAM 520 are used by secondary miss logic 540 to determine whether to issue the request in pipe stage 512 to bus controller 160.
An embodiment of secondary miss module 250 having two register file ports for load/store transactions allows up to two transaction requests per “chunk” of data. Other embodiments may include more or fewer register ports for memory transactions, with corresponding limits on the numbers of transactions per “chunk”. For a 64-bit processor, a data “chunk” of 16 bytes accommodates data returns for two different registers. In the disclosed embodiment, if two or more requests already target the same “chunk” as the request in pipe stage 512, secondary miss logic 540 rejects the request in pipe stage 512, which is then resubmitted for processing. For one embodiment, the rejected request is accepted only after the current outstanding transactions have completed.
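The chunk limit amounts to a small count compared against the number of register file ports. A minimal sketch, assuming the 16-byte chunk and two-port limit described here; the counter stub stands in for the “ID” counter referenced by Tables 1 and 2 below.

```c
#include <stdbool.h>
#include <stdint.h>

#define CHUNK_BYTES        16u  /* 16-byte chunk: two 64-bit register returns */
#define MAX_REQS_PER_CHUNK  2   /* two register file ports, per the text      */

/* Stub for the per-chunk counter, the "ID" referenced by Tables 1 and 2:
 * the number of accepted requests already targeting this chunk. */
static unsigned chunk_request_count(uint64_t chunk_addr)
{
    (void)chunk_addr;
    return 0;
}

/* The pipe stage 512 request is rejected (and later resubmitted) when its
 * chunk is already fully subscribed. */
static bool chunk_has_room(uint64_t addr)
{
    uint64_t chunk = addr & ~(uint64_t)(CHUNK_BYTES - 1);
    return chunk_request_count(chunk) < MAX_REQS_PER_CHUNK;
}
```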
Table 1 summarizes the output of secondary miss logic 540 for a port 0 request in pipe stage 512 and various combinations of “in-flight” requests and requests serviced by pending bus transactions. A logical 1 at the output of secondary miss logic 540 (column 6) indicates that the bus request is accepted for processing by bus controller 160, and a logical 0 at the output indicates that the bus request is rejected. In Table 1, ID refers to the value of a counter that tracks the number of requests targeted to a particular “chunk” of data, and X indicates “don't care”.
TABLE 1

PS 514  PS 514  PS 516  PS 516        Accept/Reject Request
 (P0)    (P1)    (P0)    (P1)   BRQ   for Merging

  0       0       0       0      0    Accept
  0       0       0       0      1    Accept
  0       0       0       1      0    Accept
  0       0       0       1      1    Accept if ID < 2
  0       0       1       0      0    Accept
  0       0       1       0      1    Accept if ID < 2
  0       0       1       1      0    Reject
  0       0       1       1      1    Reject
  0       1       0       0      0    Accept
  0       1       0       0      1    Accept if ID < 2
  0       1       0       1      0    Reject
  0       1       0       1      1    Reject
  0       1       1       0      0    Reject if ID = 0
  0       1       1       0      1    Reject
  0       1       1       1      0    Reject
  0       1       1       X      1    Reject
  1       0       0       0      0    Accept
  1       0       0       0      1    Accept if ID = 0
  1       0       0       1      X    Reject
  1       0       1       X      X    Reject
  1       1       X       X      X    Reject
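Read as a hardware truth table, Table 1 can be checked mechanically. The sketch below encodes it directly; the types and encodings are hypothetical, -1 encodes X, and the conditional rows apply the ID test exactly as printed above, including the “Reject if ID = 0” row.

```c
#include <stdbool.h>

/* Encoding of Table 1. Inputs, in order: hits of the pipe stage 512 port 0
 * request against PS 514 (P0), PS 514 (P1), PS 516 (P0), PS 516 (P1), and
 * the BRQ; -1 encodes a "don't care" (X). 'id' is the per-chunk counter. */
enum t1_action { T1_ACCEPT, T1_REJECT, T1_ACCEPT_IF_ID_LT_2,
                 T1_ACCEPT_IF_ID_EQ_0, T1_REJECT_IF_ID_EQ_0 };

struct t1_row { signed char in[5]; enum t1_action act; };

static const struct t1_row table1[] = {
    {{0,0,0,0,0}, T1_ACCEPT},            {{0,0,0,0,1}, T1_ACCEPT},
    {{0,0,0,1,0}, T1_ACCEPT},            {{0,0,0,1,1}, T1_ACCEPT_IF_ID_LT_2},
    {{0,0,1,0,0}, T1_ACCEPT},            {{0,0,1,0,1}, T1_ACCEPT_IF_ID_LT_2},
    {{0,0,1,1,0}, T1_REJECT},            {{0,0,1,1,1}, T1_REJECT},
    {{0,1,0,0,0}, T1_ACCEPT},            {{0,1,0,0,1}, T1_ACCEPT_IF_ID_LT_2},
    {{0,1,0,1,0}, T1_REJECT},            {{0,1,0,1,1}, T1_REJECT},
    {{0,1,1,0,0}, T1_REJECT_IF_ID_EQ_0}, {{0,1,1,0,1}, T1_REJECT},
    {{0,1,1,1,0}, T1_REJECT},            {{0,1,1,-1,1}, T1_REJECT},
    {{1,0,0,0,0}, T1_ACCEPT},            {{1,0,0,0,1}, T1_ACCEPT_IF_ID_EQ_0},
    {{1,0,0,1,-1}, T1_REJECT},           {{1,0,1,-1,-1}, T1_REJECT},
    {{1,1,-1,-1,-1}, T1_REJECT},
};

/* Return true if secondary miss logic 540 accepts the port 0 request. */
bool table1_accept(const signed char in[5], unsigned id)
{
    for (unsigned r = 0; r < sizeof table1 / sizeof table1[0]; r++) {
        bool match = true;
        for (int k = 0; k < 5; k++)
            if (table1[r].in[k] != -1 && table1[r].in[k] != in[k])
                match = false;
        if (!match)
            continue;
        switch (table1[r].act) {
        case T1_ACCEPT:            return true;
        case T1_REJECT:            return false;
        case T1_ACCEPT_IF_ID_LT_2: return id < 2;
        case T1_ACCEPT_IF_ID_EQ_0: return id == 0;
        case T1_REJECT_IF_ID_EQ_0: return id != 0;
        }
    }
    return false;   /* no row matched: conservatively reject */
}
```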
FIG. 5B shows a corresponding pipeline 510′ for a request input on port 1 of a dual ported L1 cache 130. The main difference between pipelines 510 and 510′ is that the port 1 request accesses BRQ CAM 520 at pipe stage 514. This eliminates the need to consider the port 1 request in pipe stage 514. The conditions for accepting a request for merging are summarized in Table 2.
TABLE 2

PS 514  PS 516  PS 516        Accept/Reject Request
 (P0)    (P0)    (P1)   BRQ   for Merging

  0       0       0      0    Accept
  0       0       0      1    Accept if ID = 1
  0       0       1      0    Accept
  0       0       1      1    Reject
  0       1       0      0    Accept
  0       1       0      1    Reject
  0       1       1      0    Reject
  0       1       1      1    Reject
  1       0       0      0    Accept
  1       0       0      1    Accept if ID = 0
  1       0       1      X    Reject
  1       1       X      X    Reject
FIG. 6 is a flowchart representing a method in accordance with the present invention for processing memory requests. When a cache miss is detected 610, request information (REQ_INFO) is stored in a buffer, e.g. LMB 240. REQ_INFO is also tested 630 to determine whether it hits 640 a pending bus transaction. For example, the target address and transaction type of the request may match those of a pending bus transaction. Here, a bus transaction is referred to as pending if it has yet to return the first chunk of data. If the request does not hit 640 a pending transaction, a new bus transaction is generated 650 and a pointer to the REQ_INFO is assigned 670 to the bus transaction.
For one embodiment of method 600, if the request hits 640 a pending transaction, the request may be serviced by, e.g. merged with, a pending bus transaction (dashed line). A pointer to the stored REQ_INFO is assigned 670 to the pending transaction. For example, the pointer may be transferred to the bus controller and stored in an entry associated with the bus transaction, e.g. in LIB 270.
For another embodiment of method 600, there may be additional constraints on merging the request with a hit transaction. For example, the target address may identify a chunk of data that is already targeted by a maximum number of transactions. As noted above, one embodiment of secondary miss logic 540 allows up to two requests to a 16 byte chunk of data in a pending transaction. For this embodiment, the request is tested 660 for any conflicts within the pending transaction. If a conflict is detected 660, the request is rejected 664. It may be resubmitted for processing later. If no conflict is detected 660, the REQ_INFO pointer is assigned 670 to the transaction, and method 600 waits for the transaction to complete 674.
Once the transaction completes 674, the REQ_INFO is accessed 678, using the pointer associated with the transaction. The REQ_INFO is used differently, depending on whether the requested transaction is a load or a store. If the transaction is determined 680 to be a load, data returned by the transaction is distributed 684 to the register identified by the accessed REQ_INFO. If the transaction is determined 680 to be a store transaction, the data returned from the bus transaction is merged 688 with the store data specified by the request, and the merged data is returned 690 to the cache(s).
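The completion path ties the pieces together: the pointer stored in the LIB selects the REQ_INFO in the LMB, which in turn names the destination register. Below is a sketch reusing the earlier struct sketches; the register file, the 64-byte line buffer, and the use of BYTE_EN as a plain byte offset are all assumptions made for illustration.

```c
#include <stdint.h>
#include <string.h>

/* Sketch of the completion path, reusing lmb[], lib[], MAX_IDS_PER_TXN,
 * and REQ_LOAD from the sketches above. Assumes data_size <= 8 and
 * reg_id < 128; register_file and line[] are hypothetical. */
uint64_t register_file[128];

void complete_transaction(int txn, uint8_t line[64])
{
    for (int slot = 0; slot < MAX_IDS_PER_TXN; slot++) {
        int id = lib[txn].load_id[slot];
        if (id < 0)
            continue;                              /* unused slot */
        struct lmb_entry *req = &lmb[id];          /* REQ_INFO via pointer */
        if (req->req_typ == REQ_LOAD) {
            /* 684: distribute returned data to the targeted register */
            memcpy(&register_file[req->reg_id],
                   line + (req->byte_en & 63), req->data_size);
        } else {
            /* 688/690: merge store data into the returned line, then
             * return the merged line to the cache(s) (not shown) */
        }
    }
}
```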
There has thus been provided a system and method for processing bus transactions efficiently. Information characterizing a transaction request is analyzed by the secondary miss system, and a pointer to the information is forwarded to the bus controller when the transaction is accepted. Use of a pointer significantly reduces the number of signals that must be provided to the bus controller to process a transaction. The transaction information itself is stored in a separate buffer, where it may be accessed through the corresponding pointer. The secondary miss logic uses the request information to determine whether the corresponding request (1) may be merged with a pending transaction, (2) generates a new transaction, or (3) is returned for later processing.
The invention has been illustrated with reference to specific embodiments. However, persons skilled in the art and having the benefit of this disclosure will recognize various modifications of these embodiments that fall within the spirit of the appended claims.
Claims
- 1. A method for processing a cache miss comprising: storing in a transaction miss buffer a register specifier associated with a data request that misses in a cache; comparing an address targeted by the data request with a pending bus transaction to a higher level memory structure; associating a pointer to the register specifier with the pending bus transaction if the targeted address hits the pending bus transaction; processing the bus transaction; reading the register specifiers indicated by the pointers associated with the bus transaction; and providing data returned by the bus transaction to registers indicated by the register specifiers.
- 2. The method of claim 1, further comprising: initiating a new bus transaction if the target address does not hit on a pending bus transaction; and associating a pointer to the register specifier with the new bus transaction.
- 3. The method of claim 1, further comprising repeating storing, comparing, and associating for any additional requests to a cache line targeted by the bus transaction until the bus transaction is launched.
- 4. The method of claim 1, further comprising: determining whether resources are available to process a merged transaction if a target address hits a pending bus transaction; and returning the data request for later processing if resources are not available to process the merged transaction.
- 5. The method of claim 4, wherein determining resource availability comprises determining whether a selected number of requests already specify a data chunk targeted by the request.
- 6. A system for processing bus transaction requests comprising: a transaction request buffer; a transaction miss buffer to store a register specifier for a transaction request from the transaction request buffer; a bus controller including a bus request queue (BRQ) to track a status associated with a pending bus transaction to one or more higher memory structures and a pointer table to store a pointer to the register specifier in an entry associated with the pending bus transaction if the bus controller can process the request; and a secondary miss system to detect an address targeted by the transaction request from the transaction request buffer, to determine whether the request may be processed by the bus controller, and to provide the bus controller with the pointer to the register specifier in the transaction miss buffer if it is determined that the request may be processed by the bus controller.
- 7. The system of claim 6, wherein the transaction request buffer forwards request information to the secondary miss system and the transaction miss buffer if the request misses in a cache.
- 8. The system of claim 6, wherein the secondary miss system includes a circuit to compare the target address specified by the request with a target address for which a bus transaction is in process.
- 9. The system of claim 8, wherein the circuit provides the bus controller with the pointer to the register specifier if the target address specified by the request matches a target address for the bus transaction.
- 10. A computer system comprising: execution resources; a cache to provide operands to the execution resources; a load miss buffer to store register specifiers for requests that miss in the cache; a bus controller including a bus request queue (BRQ) to monitor status for one or more pending bus transactions to higher memory structures and an ID buffer to store pointers to register specifiers for the pending bus transactions; and a secondary miss system to compare a target address for a request with addresses targeted by the one or more pending bus transactions and to provide a pointer to a register specifier for the request to an ID buffer entry associated with a pending bus transaction hit by the comparison.
- 11. The computer system of claim 10, wherein the secondary miss system compares the target address for a request with target addresses of pending bus transactions and in-flight requests to determine whether to forward the pointer to the bus controller.
- 12. A memory system comprising: a cache to service a request; a main memory; a load miss buffer to store a register specifier for a request; a bus controller to process a bus transaction to the main memory if the request misses in the cache; and means to compare an address targeted by the request with an address targeted by the pending transaction and to provide the pointer to the bus controller when the addresses match and the pending transaction has not returned a first data block.