The following description relates generally to memory modules, such as dual in-line memory modules (DIMMs), and more particularly to a memory module architecture that has multiple data channels (i.e., a multi-data channel memory module architecture). In certain embodiments, a memory module comprises a plurality of data channels that each enable a sub-cache-block of data to be accessed for independent operations. Further, in certain embodiments, multiple ones of the data channels may be employed to support a cache-block access of data.
The popularity of computing systems continues to grow, and the demand for improved processing architectures thus likewise continues to grow. The ever-increasing desire for improved computing performance and efficiency has led to various improved processor architectures. For example, multi-core processors are becoming more prevalent in the computing industry and are being used in various computing devices, such as servers, personal computers (PCs), laptop computers, personal digital assistants (PDAs), wireless telephones, and so on.
In the past, processors such as CPUs (central processing units) featured a single execution unit to process instructions of a program. More recently, computer systems are being developed with multiple processors in an attempt to improve the computing performance of the system. In some instances, multiple independent processors may be implemented in a system. In other instances, a multi-core architecture may be employed, in which multiple processor cores are amassed on a single integrated silicon die. Each of the multiple processors (e.g., processor cores) can simultaneously execute program instructions. This parallel operation of the multiple processors can improve performance of a variety of applications.
A multi-core CPU combines two or more independent cores into a single package, typically comprising a single silicon integrated circuit (IC), called a die. In some instances, a multi-core CPU may comprise two or more dies packaged together. A dual-core device contains two independent microprocessors, and a quad-core device contains four microprocessors. Cores in a multi-core device may share a single coherent cache at the highest on-device cache level (e.g., L2 for the Intel® Core 2) or may have separate caches (e.g., current AMD® dual-core processors). The processors also share the same interconnect to the rest of the system. Each “core” may independently implement optimizations such as superscalar execution, pipelining, and multithreading. A system with N cores is typically most effective when it is presented with N or more threads concurrently.
One processor architecture that has been developed utilizes multiple processors (e.g., multiple cores), which are homogeneous. As discussed hereafter, the processors are homogeneous in that they are all implemented with the same fixed instruction sets (e.g., Intel's x86 instruction set, AMD's Opteron instruction set, etc.). Further, the homogeneous processors access memory in a common way, such as all of the processors being cache-line oriented such that they access a cache block (or “cache line”) of memory at a time, as discussed further below.
In general, a processor's instruction set refers to a list of all instructions, and all their variations, that the processor can execute. Such instructions may include, as examples, arithmetic instructions, such as ADD and SUBTRACT; logic instructions, such as AND, OR, and NOT; data instructions, such as MOVE, INPUT, OUTPUT, LOAD, and STORE; and control flow instructions, such as GOTO, if X then GOTO, CALL, and RETURN. Examples of well-known instruction sets include x86 (also known as IA-32), x86-64 (also known as AMD64 and Intel® 64), AMD's Opteron, VAX (Digital Equipment Corporation), IA-64 (Itanium), and PA-RISC (HP Precision Architecture).
Generally, the instruction set architecture is distinguished from the microarchitecture, which is the set of processor design techniques used to implement the instruction set. Computers with different microarchitectures can share a common instruction set. For example, the Intel® Pentium and the AMD® Athlon implement nearly identical versions of the x86 instruction set, but have radically different internal microarchitecture designs. In all of these cases the instruction set (e.g., x86) is fixed by the manufacturer and implemented directly in hardware, in a given semiconductor technology, by the microarchitecture. Consequently, the instruction set is traditionally fixed for the lifetime of this implementation.
As shown further in FIG. 1, an exemplary prior-art system 100 includes processor cores 104A and 104B, which share access to a cache 103 and to a main memory 101.
In many system architectures, each core 104A and 104B will have its own cache also, commonly called the “L1” cache, and cache 103 is commonly referred to as the “L2” cache. Unless expressly stated herein, cache 103 generally refers to any level of cache that may be implemented, and thus may encompass L1, L2, etc. Accordingly, while shown for ease of illustration as a single block that is accessed by both of cores 104A and 104B, cache 103 may include L1 cache that is implemented for each core.
In many system architectures, virtual addresses are utilized. In general, a virtual address is an address identifying a virtual (non-physical) entity. As is well known in the art, virtual addresses may be utilized for accessing memory. Virtual memory is a mechanism that permits data located on a persistent storage medium (e.g., disk) to be referenced as if the data were located in physical memory. Translation tables, maintained by the operating system, are used to determine the location of the referenced data (e.g., disk or main memory). Program instructions being executed by a processor may refer to a virtual memory address, which is translated into a physical address. To minimize the performance penalty of address translation, most modern CPUs include an on-chip Memory Management Unit (MMU) and maintain a table of recently used virtual-to-physical translations, called a Translation Look-aside Buffer (TLB). Addresses with entries in the TLB require no additional memory references (and therefore time) to translate. However, the TLB can only maintain a fixed number of mappings between virtual and physical addresses; when the needed translation is not resident in the TLB, action must be taken to load it.
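To make the translation path concrete, a minimal C sketch follows; the page size, TLB organization, and page_table_walk stub are assumptions of the sketch, not details of any particular MMU:

```c
#include <stdint.h>

#define PAGE_SHIFT  12   /* assume 4 KiB pages */
#define TLB_ENTRIES 64   /* assume a small, direct-mapped TLB */

typedef struct {
    uint64_t vpn;    /* virtual page number   */
    uint64_t pfn;    /* physical frame number */
    int      valid;
} TlbEntry;

static TlbEntry tlb[TLB_ENTRIES];

/* Stub for the OS-maintained translation tables; walking them on a TLB
 * miss costs additional memory references (identity mapping here). */
static uint64_t page_table_walk(uint64_t vpn) { return vpn; }

/* Translate a virtual address, consulting the TLB first so that
 * recently used mappings require no additional memory references. */
uint64_t translate(uint64_t vaddr)
{
    uint64_t  vpn = vaddr >> PAGE_SHIFT;
    TlbEntry *e   = &tlb[vpn % TLB_ENTRIES];

    if (!e->valid || e->vpn != vpn) {       /* TLB miss */
        e->vpn   = vpn;
        e->pfn   = page_table_walk(vpn);    /* load the needed mapping */
        e->valid = 1;
    }
    return (e->pfn << PAGE_SHIFT) | (vaddr & ((1ULL << PAGE_SHIFT) - 1));
}
```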
As an example, suppose a program's instruction stream that is being executed by a processor, say processor core 104A of FIG. 1, references a virtual memory address; before memory can be accessed, that virtual address is translated into a physical address (e.g., via the TLB or, on a TLB miss, via the operating system's translation tables).
In operation, each of cores 104A and 104B reference main memory 101 by providing a physical memory address. The physical memory address (of data or “an operand” that is desired to be retrieved) is first presented to cache 103. If the addressed data is not encached (i.e., not present in cache 103), the same physical address is presented to main memory 101 to retrieve the desired data. Main memory 101 may be implemented in whole or in part via memory module(s), such as dual in-line memory modules (DIMMs), which may employ dynamic random access memory (DRAM) or other memory storage.
In contemporary architectures, the processor cores 104A and 104B are cache-line (or “cache-block”) oriented, wherein a “cache block” is fetched from main memory 101 and loaded into cache 103. The terms cache line and cache block are used interchangeably herein. Rather than retrieving only the addressed data from main memory 101 for storage to cache 103, such cache-block oriented processors may retrieve a larger block of data for storage to cache 103. A cache block typically comprises a fixed-size amount of data that is independent of the actual size of the requested data. For example, in most implementations a cache block comprises 64 bytes of data that is fetched from main memory 101 and loaded into cache 103 independent of the actual size of the operand referenced by the requesting micro-core 104A/104B. Furthermore, the physical address of the cache block referenced and loaded is a block address. This means that all the cache block data is in sequentially contiguous physical memory. Table 1 below shows an example of a cache block.
In the above example of Table 1, the “XXX” portion of the physical address is intended to refer generically to the corresponding identifier (e.g., numbers and/or letters) for identifying a cache line address. For instance, XXX(0) corresponds to the physical address for an Operand 0, while XXX(1) corresponds to the physical address for an Operand 1, and so on. In the example of Table 1, in response to a micro-core 104A/104B requesting Operand 0 via its corresponding physical address XXX(0), a 64-byte block of data may be fetched from main memory 101 and loaded into cache 103, wherein such cache block of data includes not only Operand 0 but also Operands 1-7. Thus, depending on the fixed size of the cache block employed on a given system, whenever a core 104A/104B references one operand (e.g., a simple load), the memory system will bring in 4, 8, 16, or more operands into cache 103.
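As a concrete sketch of the block-address arithmetic just described, the following C fragment computes the block address for an operand request and fetches the full 64-byte block; the dram_read helper is a hypothetical stand-in for the memory system:

```c
#include <stdint.h>
#include <string.h>

#define CACHE_BLOCK_SIZE 64   /* bytes per cache block, as in the example */

/* Stub standing in for the path from the memory controller to DRAM. */
static void dram_read(uint64_t block_addr, void *dst, size_t len)
{
    (void)block_addr;
    memset(dst, 0, len);   /* placeholder for the sketch */
}

/* Service a request for one operand by fetching the entire cache block
 * that contains it, regardless of the operand's actual size. */
void fill_cache_block(uint64_t operand_addr, uint8_t line[CACHE_BLOCK_SIZE])
{
    /* Block address: clear the low-order offset bits, so the fetched
     * block is sequentially contiguous in physical memory. */
    uint64_t block_addr = operand_addr & ~(uint64_t)(CACHE_BLOCK_SIZE - 1);

    /* A request for Operand 0 thereby also brings in Operands 1-7
     * (eight 8-byte operands per 64-byte block). */
    dram_read(block_addr, line, CACHE_BLOCK_SIZE);
}
```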
There are both advantages and disadvantages of this traditional cache-block oriented approach to memory access. One advantage is that if there are temporal (over time) and spatial (data locality) references to operands (e.g., operands 0-7 in the example of Table 1), then cache 103 reduces the memory access time. Typically, cache accesses are on the order of 50 times faster (in both latency and bandwidth) than similar accesses to main memory 101. For many applications, this is the prevailing memory access pattern.
However, if the memory access pattern of an application is not sequential and/or does not re-use data, inefficiencies arise which result in decreased performance. Consider the following FORTRAN loop that may be executed for a given application:
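A representative stride-4 loop of the kind described (the array names and bounds here are illustrative assumptions, as the loop itself does not survive in this text) is:

```fortran
      DO 10 I = 1, N, 4
         A(I) = B(I) + C(I)
   10 CONTINUE
```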
In this loop, every fourth element is used. If a cache block maintains 8 operands, then only 2 of the 8 operands are used. Thus, 6/8 of the data loaded into cache 103 and 6/8 of the memory bandwidth are “wasted” in this example.
In multi-processor systems, such as exemplary system 100 of FIG. 1, main memory 101 may be implemented by memory modules, such as DIMMs, that are accessed via a memory controller. FIG. 2 shows an exemplary prior-art arrangement in which a memory controller 201 accesses a memory module 202 (e.g., a DIMM) containing DRAM memory 203.
Traditional DIMMs provide one data channel 205 and one address/control channel 204 per DIMM. In general, the address/control channel 204 specifies an address and a desired type of access (e.g., read or write), and the data channel 205 carries the corresponding data to/from the specified address for performing the desired type of access. Typically, a memory access operation requires several clock cycles to perform. For instance, address and control information may be provided on the address/control channel 204 over one or more clock cycles, and then the data is provided on the data channel 205 over later clock cycles. In a typical DIMM access scenario, a row select command is sent from memory controller 201 on the address/control channel 204 to the memory module 202, which indicates that an associated address is a row address in the memory cell matrix of the DRAM memory 203. In general, a data bit in DRAM is stored in a memory cell located by the intersection of a column address and a row address. A column access command (e.g., a column read or column write command) is sent from the memory controller 201 over the address/control channel 204 to validate the column address and indicate a type of access desired (e.g., either a read or write operation).
The row select command may be sent in a first clock cycle, then the column access command may be sent in a second clock cycle, and then some clock cycles later a burst of data may be supplied via the data channel 205. The burst of data may be supplied over several clock cycles. The single DIMM data channel 205 is typically a 64-bit (8-byte) wide channel, wherein each access comprises a “burst” length of 8, thus resulting in the data channel carrying 64 bytes for each access. The length of the “burst” may refer to a number of clock cycles or phases of a clock cycle when dual-data rate (DDR) is employed. For instance, a burst length of 8 may refer to 8 clock cycles, wherein 8 bytes of data are communicated on the data channel in each of the 8 clock cycles (resulting in the data channel carrying 64 bytes of data for the access). As another example, a burst length of 8 may refer to 8 phases of a clock (e.g., when DDR is employed), wherein 8 bytes of data are communicated on the data channel in each of the 8 phases (over 4 clock cycles), thus likewise resulting in the data channel carrying 64 bytes of data for the access.
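The arithmetic reduces to a one-line calculation, shown here in C with the example's values:

```c
#include <stdio.h>

/* Bytes carried per access = channel width x burst length, whether the
 * burst is counted in clock cycles or in DDR clock phases. */
int main(void)
{
    unsigned width_bytes  = 8;   /* 64-bit (8-byte) wide data channel */
    unsigned burst_length = 8;   /* 8 transfers per access            */

    printf("bytes per access: %u\n", width_bytes * burst_length);  /* 64 */
    return 0;
}
```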
To improve data channel bandwidth, tiling is commonly employed in memory architectures. For instance, rather than waiting for completion of the data burst for one access operation before supplying address/control signals for a next access operation, commands for subsequent operations are supplied via the address/control channel 204 while earlier bursts complete, in an attempt to maintain full bandwidth utilization of the data channel 205.
The exemplary tiling technique of FIG. 3 illustrates this approach. A first memory access operation is requested, whereupon a row select command is communicated from memory controller 201 to memory module 202 over address/control channel 303 during clock cycle 1, followed by a column access command during clock cycle 2. After some delay, data channel 304 carries the data “burst” for the first memory access operation: data burst 308 has a length of 8 blocks (labeled 0/0/0-0/0/7) that are each an 8-byte block of data, resulting in burst 308 containing 64 bytes of data for the first memory access operation.
A second memory access operation is requested in this example, whereupon a row select command 309 is communicated from memory controller 201 to memory module 202 over address/control channel 303 during clock cycle 5. Then, during clock cycle 6, a column access command 310 for the second memory access operation is communicated from memory controller 201 to memory module 202 over address/control channel 303. After some delay, data channel 304 carries the data “burst” for the second memory access operation. For instance, beginning in the high phase of clock cycle 13 and ending in the low phase of clock cycle 17, data burst 311 carries the data for the second memory access operation. As with the data burst 308 discussed above for the first memory access operation, data burst 311 typically has a length of 8 blocks (labeled 0/1/0-0/1/7) that are each an 8-byte block of data, thus resulting in burst 311 containing 64 bytes of data for the second memory access operation (read or write to/from the specified address).
As the example of FIG. 3 illustrates, tiling the address/control commands for successive operations in this manner allows the data bursts for those operations (e.g., bursts 308 and 311) to follow one another on data channel 304 with little or no idle time, thereby maintaining high bandwidth utilization of the data channel. As also illustrated in FIG. 3, all of the data for each access operation is carried over the single data channel 304 of the DIMM; a traditional DIMM thus provides only one data channel, over which a full cache block is communicated for each access.
In certain implementations, a plurality of DIMMs may share an address/control channel, and each DIMM may provide a separate data channel, wherein tiling may be employed on the address/control channel to maintain high bandwidth utilization on both data channels of the DIMMs. However, in these implementations, each DIMM provides only a single data channel.
As is well known in the art, memory is often arranged into independently controllable arrays, often referred to as “memory banks.” Under the control of a memory controller, a bank can generally operate on one transaction at a time. As mentioned above, the memory may be implemented in dynamic storage technology (such as DRAMs) or in static RAM technology. In a typical DRAM chip, some number of banks of memory (e.g., 4, 8, or possibly 16) may be present. A memory interleaving scheme may be desired to prevent any one of the banks of memory from becoming a “hot spot” of the memory.
In most systems, memory 101 may hold both programs and data. Each has unique characteristics pertinent to memory performance. For example, when a program is being executed, memory traffic is typically characterized as a series of sequential reads. On the other hand, when a data structure is being accessed, memory traffic is usually characterized by a stride, i.e., the difference in address from a previous access. A stride may be random or fixed. For example, repeatedly accessing a data element in an array may result in a fixed stride of two. As is well known in the art, many algorithms have a power-of-2 stride. Such a power-of-2 stride gives rise to an increase in bank conflicts because the strided accesses repeatedly map to the same bank. Accordingly, without some memory interleave management scheme being employed, hot spots may be encountered within the memory, in which a common portion of memory (e.g., a given bank of memory) is accessed much more often than other portions of memory.
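The following C sketch illustrates the effect, assuming a simple low-order interleave across eight banks (the bank count and mapping are assumptions of the sketch): a power-of-2 stride equal to the bank count concentrates every access on one bank, while an odd stride spreads the same accesses across all banks.

```c
#include <stdio.h>

#define NUM_BANKS 8   /* assume 8 banks with simple low-order interleave */

static unsigned bank_of(unsigned long word_addr)
{
    return (unsigned)(word_addr % NUM_BANKS);
}

int main(void)
{
    /* A power-of-2 stride equal to the bank count hits one bank only. */
    printf("stride 8:");
    for (unsigned long a = 0; a < 8UL * 8; a += 8)
        printf(" %u", bank_of(a));            /* 0 0 0 0 0 0 0 0 */

    /* An odd stride spreads the same accesses across all banks. */
    printf("\nstride 3:");
    for (unsigned long a = 0; a < 8UL * 3; a += 3)
        printf(" %u", bank_of(a));            /* 0 3 6 1 4 7 2 5 */
    printf("\n");
    return 0;
}
```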
As discussed above, many compute devices, such as the Intel x86 or AMD x86 microprocessors, are cache-block oriented. Today, a cache block of 64 bytes in size is typical, but compute devices may be implemented with other cache block sizes. A cache block is typically contained all on a single hardware memory storage element, such as a single dual in-line memory module (DIMM). As discussed above, when the cache-block oriented compute device accesses that DIMM, it presents one address and is returned the entire cache block (e.g., 64 bytes), as in the exemplary data bursts 308 and 311 discussed above with FIG. 3.
Some compute devices, such as certain accelerator compute devices, may not be cache-block oriented. That is, those non-cache-block oriented compute devices may access portions of memory (e.g., words) on a much smaller, finer granularity than is accessed by the cache-block oriented compute devices. For instance, while a typical cache-block oriented compute device may access a cache block of 64 bytes for a single memory access request, a non-cache-block oriented compute device may desire to access a word that is 8 bytes in size in a single memory access request. That is, the non-cache-block oriented compute device in this example may desire to access a particular memory DIMM and only obtain 8 bytes from a particular address present in the DIMM.
As discussed above, traditional multi-processor systems have employed homogeneous compute devices (e.g., processor cores 104A and 104B of FIG. 1), which access memory in a common, cache-block oriented way.
U.S. Patent Application Publication No. 2007/0266206 to Kim et al. (hereinafter “Kim”) proposes a scatter-gather intelligent memory architecture. Kim mentions that to avoid wasting memory bandwidth, the scatter/gather engine supports both cache line size data accesses and smaller, sub-cache line accesses. However, Kim does not appear to describe its memory architecture in detail. One of ordinary skill in the art would thus suppose that Kim may be employing the above-mentioned traditional DIMMs, which enable either a full cache line (e.g., 64 bytes) or a sub-cache line (e.g., 32 bytes) access. However, as with the traditional DIMMs, only a single data channel per DIMM appears to be supported. Kim does not appear to provide any disclosure of a DIMM architecture that provides more than a single data channel per DIMM.
The present invention is directed generally to systems and methods which provide a memory module having multiple data channels that are independently accessible (i.e., a multi-data channel memory module). According to one embodiment, the multi-data channel memory module enables a plurality of independent sub-cache-block accesses to be serviced simultaneously. In addition, the memory architecture also supports cache-block accesses. For instance, multiple ones of the data channels may be employed for servicing a cache-block access. In certain embodiments, the memory module is a scatter/gather dual in-line memory module (DIMM).
Thus, in one embodiment a DIMM architecture that comprises multiple data channels is provided. Each data channel supports a sub-cache-block access, and multiple ones of the data channels may be used for supporting a cache-block access. The plurality of data channels to a given DIMM may be used simultaneously to support different, independent operations (or access requests).
According to one exemplary embodiment, a memory module (e.g., DIMM) comprises eight 8-byte data access channels. Thus, eight 8-byte accesses can be performed in parallel on the given memory module. As an example, a first of the access channels may be performing a read access of a sub-cache-block of data, while another of the access channels may be simultaneously performing a write access of a sub-cache-block of data.
Thus, instead of having a single 64-byte access bus (or data channel) for the memory module, as with traditional DIMMs, in certain embodiments the access bus (or data channel) is partitioned into 8 independent 8-byte sub-buses (which may also be referred to as channels, paths, or lanes). An address and a request type are supported independently for each of the 8-byte sub-buses. Accordingly, in certain embodiments, one may think of the traditional DIMM data channel as being divided into multiple sub-buses, which may be referred to as data paths or lanes. Of course, because each of these sub-buses is independently accessible (e.g., for supporting independent memory access operations), the sub-buses are akin to separate data channels, rather than being smaller portions (e.g., “lanes”) of a larger overall data channel. As such, the sub-buses may be referred to herein as separate data channels, data lanes, or data paths, and each of these terms is intended to have the same meaning, effectively providing multiple, independently accessible data channels (which may each support a sub-cache-block access of data) for a memory module.
As discussed further hereafter, the 8 independent sub-buses may be used to simultaneously support different sub-cache-block accesses. Additionally, multiple ones of the independent sub-buses may be employed to satisfy a cache-block access. For instance, the eight 8-byte sub-buses may be used to satisfy a full 64-byte cache-block access. As further discussed hereafter, in certain embodiments the cache-block and sub-cache-block accesses may be intermingled such that all eight of the 8-byte data channels need not be reserved for simultaneous use in satisfying a cache-block access. Rather, in certain embodiments, the cache-block access may be satisfied by the channels within a window of time, wherein logic (e.g., a memory controller) may receive the cache-block data within the window of time and bundle the received data into a cache-block of data for satisfying a cache-block access request.
According to one embodiment, the traditional 64-byte data channel of a DIMM (such as the exemplary data channel 205 discussed above with FIG. 2) is partitioned into eight independent 8-byte data channels, each of which may service an independent sub-cache-block access, in the manner described further herein.
In one embodiment, when a sub-cache-block access (e.g., of a single word) is requested, the address of the sub-cache-block to be accessed is supplied to one of the eight sub-buses (or data lanes) with a corresponding request type (e.g., read or write), and that sub-bus provides the sub-cache-block of data. The other seven 8-byte sub-buses can each independently support other operations. On the other hand, when a cache-block access (e.g., of 64 bytes) is requested, the same address and request type (e.g., either a read or a write) may be supplied to all eight sub-buses. The eight sub-buses each return their respective portions of the requested cache block, so that the entire cache block is returned in a single burst by the eight sub-buses.
In certain embodiments, upon receiving a cache-block access request, the eight sub-buses may be reserved (placing any sub-cache-block access requests received thereafter “on hold” until the eight sub-buses have been used to satisfy the cache-block access request), and the eight sub-buses may then be used simultaneously to satisfy the 64-byte cache-block access request fully, in one burst. As discussed further hereafter, in other embodiments no such reservation is employed; instead, the cache-block access request may be handled by the eight sub-buses along with an intermingling of any sub-cache-block access requests that might be present at that time, wherein the cache-block access is satisfied by the sub-buses within a window of time, and the 64 bytes returned by the sub-buses within that window are bundled by logic (e.g., a memory controller) into the requested 64-byte cache block of data. Thus, rather than supplying the same address and request type (e.g., either a read or a write) to all eight sub-buses simultaneously for satisfying a cache-block access request, in certain embodiments such address and request type may in a first instance be supplied to a portion of the eight sub-buses (each of which returns its respective portion of the requested cache block), and in a later instance a further portion of the eight sub-buses may be supplied the address and request type in order to return the remaining portion of the requested cache block. The two portions of the cache block may then be bundled together (e.g., by a memory controller) to form the requested cache block of data. In other words, rather than satisfying a cache-block access in a single burst of data, in certain embodiments portions of the cache block of data may be returned over a plurality of bursts (e.g., with sub-cache-block bursts of data intermingled therewith), and the appropriate portions may be bundled together to form a congruent burst of cache-block data.
Thus, in certain embodiments, cache-block (e.g., 64-byte) accesses may be intermixed with sub-cache-block (e.g., 8-byte) accesses, and each 8-byte sub-bus (or “lane”) of the memory module is scheduled independently to support the intermixing. A cache-block access therefore need not be performed using all eight sub-buses simultaneously (such that the entire cache block is returned in a single burst in the manner mentioned above); instead, at a given time, some of the eight 8-byte sub-buses may be used for performing a sub-cache-block access while others are used for the cache-block access. In this case, the cache-block access may be returned within a window of time by the sub-buses, wherein a controller bundles the returned data into the requested cache block.
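The controller-side bundling described above may be sketched in C as follows; the structure and function names are illustrative assumptions, not a definitive implementation:

```c
#include <stdint.h>
#include <string.h>

#define NUM_LANES  8   /* eight 8-byte data channels (lanes) */
#define LANE_BYTES 8

/* Controller-side bookkeeping for one outstanding cache-block read.
 * Each lane independently returns its 8-byte portion (possibly in
 * different bursts, intermixed with unrelated sub-cache-block traffic),
 * and the controller bundles the portions within a window of time. */
typedef struct {
    uint8_t block[NUM_LANES * LANE_BYTES];  /* the 64-byte cache block    */
    uint8_t lanes_pending;                  /* bitmask of lanes still owed */
} BlockGather;

void gather_init(BlockGather *g)
{
    g->lanes_pending = 0xFF;                /* all eight lanes outstanding */
}

/* Record one lane's returned portion; returns 1 once the block is whole. */
int gather_lane(BlockGather *g, unsigned lane, const uint8_t data[LANE_BYTES])
{
    memcpy(&g->block[lane * LANE_BYTES], data, LANE_BYTES);
    g->lanes_pending &= (uint8_t)~(1u << lane);
    return g->lanes_pending == 0;
}
```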
In one embodiment, the memory module comprises control logic, such as a Field-Programmable Gate Array (FPGA), that manages decoding and multiplexing of address and control information for the plurality of data channels of the module. For instance, in certain embodiments, address and control information for memory access operations is communicated from a memory controller to the memory module via an external address/control channel. In certain embodiments, the address and control information is encoded according to a time multiplexed encoding scheme to enable address and control information for a plurality of independent memory access operations to be received over a communication time period (e.g., over two time units) in which address and control information for a single memory access operation is traditionally communicated. For instance, during the communication time period that is traditionally performed on an address/control channel for specifying the address and control information for a 64-byte memory access operation (e.g., read or write), the encoded address/control channel of certain embodiments carries information specifying the address and control information for a plurality of independent sub-cache-block data access operations (e.g., eight 8-byte data access operations).
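A C sketch of such a time-multiplexed encoding follows; the field names, field widths, and the transmit_on_actl_pins stub are assumptions of the sketch rather than the patent's defined format:

```c
#include <stdint.h>
#include <stdio.h>

/* One encoded address/control command (illustrative field layout). */
typedef struct {
    uint32_t addr;   /* combined row/column address                */
    uint8_t  type;   /* e.g., read, write, refresh, precharge, MRS */
    uint8_t  lane;   /* which 8-byte data channel is targeted      */
} EncodedCmd;

/* Stub standing in for driving the module's address/control pins. */
static void transmit_on_actl_pins(const EncodedCmd *cmd)
{
    printf("lane %u: type %u addr 0x%07x\n",
           (unsigned)cmd->lane, (unsigned)cmd->type, (unsigned)cmd->addr);
}

/* With DDR signaling on the address/control pins, two encoded commands
 * (for two independent sub-cache-block operations) are carried in each
 * clock cycle, one per clock phase. */
void send_cycle(const EncodedCmd *low_phase, const EncodedCmd *high_phase)
{
    transmit_on_actl_pins(low_phase);   /* low phase of the clock  */
    transmit_on_actl_pins(high_phase);  /* high phase of the clock */
}
```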
The control logic receives the encoded address and control information and decodes that information to control the plurality of data channels for servicing the plurality of memory access operations specified in the received encoded address and control information. In certain embodiments, a plurality of internal address/control channels are employed within the memory module and are used for controlling the plurality of data channels for servicing a plurality of independent memory access operations, as discussed further herein.
According to certain embodiments of the present invention, rather than servicing a single memory access operation over a traditional single memory access time period (e.g., an 8 time unit burst), multiple data channels are employed in a memory module (e.g., DIMM) to service a plurality of independent memory access operations over the same access time period. For instance, rather than carrying 64-bytes of data for a single memory access operation over an 8 time unit burst (e.g., 8 clock units or 8 clock phases), an embodiment of the multi-data channel memory module disclosed herein carries 8-bytes of data for each of a plurality of independent memory access operations over such an 8 time unit burst. Thus, according to one embodiment, over an access time period for carrying a cache-block of data (e.g., an 8 time unit burst of 64-bytes of data), the multi-data channel memory module carries a sub-cache-block of data for each of a plurality of independent memory access operations (e.g., carries 8-bytes of data for each of eight independent memory access operations).
Some computing systems are being developed that include heterogeneous compute elements that share a common physical and/or virtual address space of memory. As an example, a system may comprise one or more compute elements that are cache-block oriented, and the system may further comprise one or more compute elements that are non-cache-block oriented. For instance, the cache-block oriented compute element(s) may access main memory in cache blocks of, say, 64 bytes per request, whereas the non-cache-block oriented compute element(s) may access main memory via smaller-sized requests (which may be referred to as “sub-cache-block” requests), such as 8 bytes per request.
One exemplary heterogeneous computing system that may include one or more cache-block oriented compute elements and one or more non-cache-block oriented compute elements is that disclosed in co-pending U.S. patent application Ser. No. 11/841,406 (Attorney Docket No. 73225/P001US/10709871) filed Aug. 20, 2007 titled “MULTI-PROCESSOR SYSTEM HAVING AT LEAST ONE PROCESSOR THAT COMPRISES A DYNAMICALLY RECONFIGURABLE INSTRUCTION SET”, the disclosure of which is incorporated herein by reference. For instance, in such a heterogeneous computing system, one or more processors may be cache-block oriented, while one or more other processors (e.g., the processor described as comprising a dynamically reconfigurable instruction set) may be non-cache-block oriented, and the heterogeneous processors share access to the common main memory (and share a common physical and virtual address space of the memory).
Accordingly, a desire has arisen for an efficient memory architecture for supporting differently sized memory access requests, such as the above-mentioned cache-block accesses and sub-cache-block accesses. Such an improved memory architecture is desired, for example, for use in computing systems that may include one or more cache-block oriented compute elements and one or more non-cache-block oriented compute elements. While the exemplary heterogeneous computing system disclosed in U.S. patent application Ser. No. 11/841,406 (Attorney Docket No. 73225/P001US/10709871) filed Aug. 20, 2007 titled “MULTI-PROCESSOR SYSTEM HAVING AT LEAST ONE PROCESSOR THAT COMPRISES A DYNAMICALLY RECONFIGURABLE INSTRUCTION SET” is one example of a system for which an improved memory architecture may be desired, embodiments of the improved multi-data channel memory module architecture described herein are not limited for use with that heterogeneous computing system, but may likewise be applied to various other types of heterogeneous computing systems in which cache-block oriented and non-cache-block oriented compute elements (e.g., processors) share access to a common memory. In addition, embodiments may likewise be used within homogeneous computing systems.
The foregoing has outlined rather broadly the features and technical advantages of the present invention in order that the detailed description of the invention that follows may be better understood. Additional features and advantages of the invention will be described hereinafter which form the subject of the claims of the invention. It should be appreciated by those skilled in the art that the conception and specific embodiment disclosed may be readily utilized as a basis for modifying or designing other structures for carrying out the same purposes of the present invention. It should also be realized by those skilled in the art that such equivalent constructions do not depart from the spirit and scope of the invention as set forth in the appended claims. The novel features which are believed to be characteristic of the invention, both as to its organization and method of operation, together with further objects and advantages will be better understood from the following description when considered in connection with the accompanying figures. It is to be expressly understood, however, that each of the figures is provided for the purpose of illustration and description only and is not intended as a definition of the limits of the present invention.
For a more complete understanding of the present invention, reference is now made to the following descriptions taken in conjunction with the accompanying drawings.
Turning to FIG. 4, an exemplary system 40 according to an embodiment of the present invention is shown. System 40 comprises one or more compute elements 41, a memory controller 42, and a multi-data channel memory module 43, which are communicatively interconnected (e.g., via bus 44).
The combination of elements 41-43 permits programs to be executed, i.e., instructions are executed in compute element(s) 41 to process data stored in memory 402 of memory module 43. Compute element(s) 41 may be processors (e.g., processor cores) or other functional units. Compute element(s) 41 may comprise a plurality of compute elements, such as processor cores 104A and 104B of FIG. 1.
Compute element(s) 41 request access to memory module 43 via bus 44. Memory controller 42 may receive such a request and control assignment of the request to an appropriate portion of memory, such as to one of a plurality of memory modules 43 that may be implemented (only one memory module is illustrated for ease of discussion in FIG. 4, but a plurality of such memory modules may be implemented in a given system).
In the exemplary embodiment of FIG. 4, memory module 43 provides a plurality of independent data channels (shown as data channels 0-N) over which data is communicated, and memory controller 42 supplies address and control information to the memory module via an address/control channel, as discussed further below.
In this exemplary embodiment, memory module 43 comprises control logic (e.g., an FPGA, ASIC, etc.) 401, as well as memory (data storage) 402. The memory 402 may be implemented by one or more memories (shown as Memory 0-Memory N), such as DRAMs (Dynamic Random Access Memory), for example, as is commonly employed in DIMMs. In one embodiment, memory module (e.g., DIMM) 43 comprises eight independent data channels, wherein each of the eight independent data channels supports a sub-cache-block data access. For instance, in one embodiment, each of the eight independent data channels supports an 8-byte burst of data for a corresponding memory access operation. As an example, each of the data channels may be implemented as 1 byte in width and employed for each memory access for supplying a data burst of length 8 (8 time units, such as 8 clock cycles or 8 phases of a clock), thus resulting in an 8-byte burst of data.
For instance, in one embodiment, the 64-bit wide data path of a traditional DIMM is partitioned into eight 8-bit wide paths (i.e., data channels 0-N of FIG. 4).
Further, independent memory access operations may be supported in parallel on the different data channels 0-N of FIG. 4.
Thus, in the exemplary implementation of FIG. 4, a plurality of independent sub-cache-block memory access operations (e.g., eight 8-byte accesses) may be serviced simultaneously by the independent data channels 0-N of memory module 43.
However, in the exemplary embodiment of FIG. 4, memory module 43 is not limited to servicing sub-cache-block accesses; as discussed further herein, multiple ones of data channels 0-N may be employed together for servicing a cache-block access.
In the example of FIG. 5, one exemplary implementation of such a multi-data channel memory module is shown in further detail. In this example, a single external address/control channel 500 carries encoded address and control information to the module's control logic, which decodes that information onto internal address/control channels 501-504 for controlling the module's independent data channels.
In general, DRAM accesses include a sequence of operations presented to the DRAM via the collection of signals/commands on the address/control channel. These signals typically include Address/AP, Bank Address, CMD (RAS, CAS and WE), Adr/Cmd Parity, ODT and CS. A typical access sequence includes a bank activate (row select) command followed by a column read or column write command. Successive commands to the same row typically only require a column read or column write command. Before accessing another row on the same bank (or within a defined time limit), the row typically must be closed and precharged using the precharge command. If a single access to a row is anticipated, the precharge may be combined with the column access command by issuing a read or write with the auto-precharge bit set. Several of the signals are redundant or partially used in one DRAM command or the other. For example, the Bank Address bits are the same in both row and column operations and the column address does not use all of the address bits.
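This access sequence may be sketched as C pseudocode; the command primitives below are hypothetical stand-ins for the corresponding signal combinations on the address/control channel:

```c
#include <stdio.h>

/* Stubs standing in for DRAM command signal combinations (sketch only). */
static void cmd_bank_activate(unsigned bank, unsigned row)
{ printf("ACTIVATE bank %u row %u\n", bank, row); }

static void cmd_col_read(unsigned bank, unsigned col, int auto_precharge)
{ printf("READ bank %u col %u%s\n", bank, col, auto_precharge ? " +AP" : ""); }

static void cmd_precharge(unsigned bank)
{ printf("PRECHARGE bank %u\n", bank); }

/* Single anticipated access to a row: combine the precharge with the
 * column access by issuing the read with the auto-precharge bit set. */
void read_once(unsigned bank, unsigned row, unsigned col)
{
    cmd_bank_activate(bank, row);       /* bank activate (row select) */
    cmd_col_read(bank, col, 1);
}

/* Successive accesses to the same row need only column commands; the
 * row must then be closed (precharged) before another row in the same
 * bank can be activated. */
void read_same_row(unsigned bank, unsigned row, const unsigned *cols, unsigned n)
{
    cmd_bank_activate(bank, row);
    for (unsigned i = 0; i < n; i++)
        cmd_col_read(bank, cols[i], 0);
    cmd_precharge(bank);
}
```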
Standard DIMMs export the above-mentioned DRAM signals at the DIMM interface to the memory controller. The memory controller is responsible for issuing the row select (or bank activate) and column access commands with the correct sequence and timing, along with the necessary precharge operations.
According to one embodiment, the typical row select and column access commands sent to the DRAM are combined into a single command sent from the memory controller to the DIMM. Further, according to one embodiment, this is achieved using the same total number of address and control pins as on the standard DIMM, but the address and control pins are redefined to carry the encoded address/control information. The resulting address sent to the DIMM includes both the row and column addresses in a single 27-bit field.
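Packing and unpacking such a combined field may be sketched in C; the particular row/column bit split shown is an assumption of the sketch, not taken from the text:

```c
#include <stdint.h>

/* Illustrative split of the single 27-bit combined address field. */
#define COL_BITS 13
#define ROW_BITS 14                    /* ROW_BITS + COL_BITS = 27 */
#define COL_MASK ((1u << COL_BITS) - 1)

/* Combine the row and column addresses into one 27-bit field. */
static inline uint32_t pack_addr(uint32_t row, uint32_t col)
{
    return (row << COL_BITS) | (col & COL_MASK);
}

/* DIMM control logic recovers each half when sequencing the DRAM. */
static inline uint32_t row_of(uint32_t addr) { return addr >> COL_BITS; }
static inline uint32_t col_of(uint32_t addr) { return addr & COL_MASK; }
```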
In one embodiment, some simplifications are enforced on the memory controller's use of commands to allow the DIMM control logic to infer the correct sequencing of DRAM operations from the encoded DIMM commands, using fewer total command bits. For example, in one embodiment, a row is never left open, which implies that the DIMM control logic drives the auto-precharge bit on every column access command. While this precludes accessing a second column address on an open row, the type of non-sequential access patterns for which one embodiment of the DIMM is optimized makes it unlikely that a subsequent access to a DRAM bank will be to the same row. An advantage gained from doing this is that no more than one DIMM command cycle is ever needed to tell the DIMM control logic what sequence of operations to perform. Also, the precharge bit is not required to be sent from the memory controller to the DIMM. The commands sent to the DIMM in one embodiment indicate Read, Write, Refresh, Precharge, and Mode Register Select. Row activation is inferred from a read or write command.
In one embodiment, the time between row select and column access commands is controlled by the DIMM control logic, rather than the memory controller. This allows control of the ODT signals to be moved from the memory controller into the DIMM control logic, saving these 2 signals on the DIMM interface. In addition, multiple ranks can be supported using fewer control bits by encoding the chip select and clock enable signals as well, using 3 bits to carry the information normally carried by 4 chip select and 2 CKE signals.
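The signal-count saving may be illustrated with a C sketch that expands 3 encoded bits into the 4 chip select and 2 clock enable signals; the particular code assignment below is an illustrative assumption only:

```c
/* 4 chip selects + 2 clock enables (6 discrete signals) conveyed in
 * 3 encoded bits, given that at most one rank is selected at a time. */
typedef struct {
    unsigned cs[4];    /* chip select, one per rank (active = 1) */
    unsigned cke[2];   /* clock enable                           */
} RankSignals;

RankSignals decode_rank_bits(unsigned code /* 3 bits */)
{
    RankSignals s = { {0, 0, 0, 0}, {1, 1} };

    if (code & 0x4)                 /* codes 4-7: select one of 4 ranks */
        s.cs[code & 0x3] = 1;
    else if (code & 0x2)            /* codes 2-3: deassert one CKE      */
        s.cke[code & 0x1] = 0;
    /* codes 0-1: no rank selected, clocks enabled (idle)               */
    return s;
}
```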
Examples of column write and read operations for both a standard DIMM and one exemplary implementation of the multi-data-channel DIMM are shown below for a 256 Mb×8 DDR2 DRAM.
In one exemplary implementation of the multi-data-channel DIMM, additional DIMM ACTL signals are obtained from a combination of unused strobe and DM signals, and reserved and NC pins, on the JEDEC DIMM definition. The unused strobe and DM signals are a result of the way the data and check (ECC) bits are allocated into 8 groups of 9 bits (8 data bits plus 1 check bit per group) instead of 9 groups of 8 bits, each group having strobe and DM bits assigned to it. There are multiple ways the standard DIMM pins could be partitioned to accomplish the same results.
Additionally, dual data rate (DDR) signaling is employed, in this example, to provide another factor-of-two bandwidth increase. Thus, this results in four times the address/control bandwidth on channel 500 as compared to a standard DIMM address/control channel 204 (according to the JEDEC standard). Tiling provides an additional factor of two, allowing the single address/control channel 500 to keep up with eight data channels. An exemplary tiling scheme that may be employed is discussed further hereafter with FIG. 7.
The single address/control channel 500, in the example of FIG. 5, thus has sufficient bandwidth, via the encoding, DDR signaling, and tiling discussed above, to supply address and control information for all eight of the module's independent data channels.
A typical DIMM has a single data channel that is 8 bytes wide of data and 1 byte wide of error correction code (ECC), and each memory access reads out a burst of 8 words, resulting in the data channel carrying 64 bytes of data plus 8 bytes of ECC for a given memory access operation. The exemplary implementation of FIG. 5 instead partitions this into eight independent data channels 505₀-505₇, each 1 byte wide of data plus a check bit of ECC, wherein each data channel carries 8 bytes of data (plus its ECC) per burst for a corresponding independent memory access operation.
Thus, rather than servicing a single memory access operation over a traditional single memory access time period (e.g., an 8 time unit burst), multiple data channels are employed in embodiments of the present invention to service a plurality of independent memory access operations over the same access time period. For instance, rather than carrying 64-bytes of data for a single memory access operation over an 8 time unit burst (e.g., 8 clock units or 8 clock phases), an embodiment of the multi-data channel memory module disclosed herein carries 8-bytes of data for each of a plurality of independent memory access operations over such an 8 time unit burst. Thus, according to one embodiment, over an access time period for carrying a cache-block of data (e.g., an 8 time unit burst of 64-bytes of data), the multi-data channel memory module carries a sub-cache-block of data for each of a plurality of independent memory access operations (e.g., carries 8-bytes of data for each of eight independent memory access operations).
Turning to FIG. 6, an exemplary implementation of a multi-data channel DIMM 600 according to one embodiment of the present invention is shown.
In this implementation, data channels 505₀-505₇ are each implemented with one DRAM for providing a bit of ECC and one DRAM for providing 8 bits of data. For instance, data channel 505₀ is formed by a first DRAM 601A that provides a bit of ECC and a second DRAM 601B that provides 8 bits of data (I/O 7-4 and I/O 3-0). Data channels 505₁-505₇ are similarly formed by first DRAMs 602A-608A that each provide a bit of ECC and second DRAMs 602B-608B that each provide 8 bits of data, as shown. The DRAMs thus provide eight independent data channels 610₀-610₇, which correspond to data channels 0-N in the example of FIG. 4.
In the example of FIG. 6, DIMM 600 further comprises control logic 401A (e.g., an FPGA), which receives the encoded address and control information from the memory controller and decodes that information to control the DRAMs of the independent data channels, in the manner discussed above for control logic 401 of FIG. 4.
The exemplary embodiment of DIMM 600 in FIG. 6 thus provides eight independent data channels while using the same general form factor and pin count as a standard JEDEC DIMM, with certain pins redefined as discussed above.
As discussed above, to improve data channel bandwidth, tiling may be employed. FIG. 7 shows an exemplary tiling scheme that may be employed for one embodiment of the multi-data channel memory module.
Also, in this example, four internal DRAM address/control channels are shown as channels 704, 707, 710, and 713, which correspond to the internal address/control channels 501-504 of FIG. 5.
As discussed in the examples of FIGS. 5 and 6 above, the external address/control channel carries encoded address and control information to the module's control logic, which decodes that information onto the internal address/control channels for servicing independent memory access operations on the module's data channels.
Also, in FIG. 7, two of the module's eight independent data channels are illustrated as data channels 705 and 711, which in this example carry data bursts for first and second independent memory access operations, respectively, as discussed below.
In the illustrated example of FIG. 7, an encoded address/control command for a first memory access operation is received by control logic 401 during the low phase of clock cycle 1, and control logic 401 decodes that command and supplies the corresponding row select and column access commands on an internal address/control channel (e.g., channel 704).
After a predefined delay (the DRAM's data access delay), data channel 705 carries the data “burst” for the first memory access operation. For instance, beginning in the high phase of clock cycle 9 and ending in the low phase of clock cycle 13, data burst 722 carries the data for the first memory access operation. In this exemplary implementation, data burst 722 carries 8-bytes of data for the first memory access operation. For instance, data channel 705 is implemented as an 8-bit (1-byte) wide channel, wherein each memory access comprises a “burst” length of 8 time units (e.g., clock phases), thus resulting in the data channel carrying 8 bytes of data for each access. For instance, each of the 8 blocks of burst 722 (labeled 0/0/0-0/0/7) may be a 1-byte block of data, thus resulting in burst 722 containing 8 bytes of data for the first memory access operation (read or write to/from the specified address).
Continuing with the illustrated example of FIG. 7, an encoded address/control command for a second, independent memory access operation is received by control logic 401 during the high phase of clock cycle 1 and is likewise decoded onto a corresponding internal address/control channel (e.g., channel 710).
After a predefined delay (the DRAM's data access delay), data channel 711 carries the data “burst” for the second memory access operation. For instance, beginning in the high phase of clock cycle 10 and ending in the low phase of clock cycle 14, data burst 733 carries the data for the second memory access operation. In this exemplary implementation, data burst 733 carries 8-bytes of data for the second memory access operation. For instance, data channel 711 is implemented as an 8-bit (1-byte) wide channel, wherein each memory access comprises a “burst” length of 8 time units (e.g., clock phases), thus resulting in the data channel carrying 8 bytes of data for each access.
Continuing further with the illustrated example of FIG. 7, encoded address/control commands for further independent memory access operations are received by control logic 401 in successive phases of the clock, as follows.
In the high phase of clock cycle 1, encoded address/control command 719 is received by control logic 401 (of FIG. 4). In the low phase of clock cycle 2, encoded address/control command 750 is received by control logic 401, followed by encoded address/control command 751 in the high phase of clock cycle 2, encoded address/control command 752 in the low phase of clock cycle 3, and encoded address/control command 753 in the high phase of clock cycle 3. Control logic 401 decodes each received command and supplies the corresponding row select and column access commands on one of the internal address/control channels, such that each encoded command services a further independent memory access operation on an associated one of the module's data channels.
Operation may continue in a similar manner, as illustrated in FIG. 7, with a further encoded address/control command received in each phase of the clock and the corresponding data bursts carried on the module's independent data channels.
Thus, in the above example of FIG. 7, the tiled, encoded address/control commands enable the single external address/control channel to keep the module's eight independent data channels busy, whereby eight independent 8-byte memory access operations may be serviced over a time period in which a traditional DIMM would service a single 64-byte access.
It should be recognized that embodiments of the multi-data channel memory module may, in some implementations, be employed across multiple DRAM ranks. For instance, as is well known in the art, a single address/control channel, such as address/control channel 500 of FIG. 5, may be shared by a plurality of ranks of DRAMs; in certain embodiments, the encoded chip select and clock enable information discussed above determines the rank to which each command is directed.
Turning to FIG. 8, an exemplary system 80 in which one or more multi-data channel memory modules may be employed is shown. In exemplary system 80, a processing subsystem 81 and a memory subsystem 83 are provided. In this exemplary embodiment, processing subsystem 81 comprises compute elements 21A and 21B. Compute element 21A is cache-block oriented and issues to a memory interleave system a physical address for a cache-block memory access request, while compute element 21B is sub-cache-block oriented and issues to the memory interleave system a virtual address for a sub-cache-block access request. As discussed hereafter, in this example, the memory interleave system comprises a host interface 802 that receives requests issued by compute element 21A, and the memory interleave system comprises a memory interface 803 that receives requests issued by heterogeneous compute element 21B.
In this exemplary implementation, the storage elements associated with each memory controller 22₀-22N comprise a pair of DIMMs. For instance, a first pair of DIMMs 805₀-805₁ is associated with memory controller 22₀, a second pair of DIMMs 806₀-806₁ is associated with memory controller 22₁, and a third pair of DIMMs 807₀-807₁ is associated with memory controller 22N. In one embodiment, 8 memory controllers are implemented, but a different number may be implemented in other embodiments. The DIMMs may each comprise a multi-data channel memory module, such as the exemplary embodiments described above with FIGS. 4-7.
Further details regarding exemplary system 80, including a memory interleaving scheme that may be employed therein, are described in concurrently-filed U.S. patent application Ser. No. 12/186,344 entitled “MEMORY INTERLEAVE FOR HETEROGENEOUS COMPUTING,” the disclosure of which is incorporated herein by reference. While system 80 provides one example of a system in which multi-data channel memory modules may be implemented, embodiments of the multi-data channel memory modules disclosed herein are not limited in application to this exemplary system 80, but may likewise be employed in any other system in which such multi-data channel memory modules may be desired.
In certain embodiments, the multi-data channel memory module may be utilized for supporting cache-block memory accesses, as well as supporting sub-cache-block data accesses. In certain embodiments, upon receiving a cache-block access request, the eight data channels 505₀-505₇ (of FIG. 5) may be reserved (placing any sub-cache-block access requests received thereafter on hold until the cache-block access request has been satisfied), and the eight data channels may then be used simultaneously to satisfy the 64-byte cache-block access request fully, in one burst.
In other embodiments, no such reservation is employed for cache-block access requests; instead, the cache-block access request may be handled by the eight data channels 505₀-505₇ (of FIG. 5) along with an intermingling of any sub-cache-block access requests that may be present at that time, wherein the cache-block access is satisfied by the data channels within a window of time, and the 64 bytes returned within that window are bundled by logic (e.g., a memory controller) into the requested 64-byte cache block of data.
Thus, in certain embodiments, cache-block (e.g., 64-byte) accesses may be intermixed with sub-cache-block (e.g., 8-byte) accesses, and each 8-byte data channel 505₀-505₇ (of FIG. 5) is scheduled independently to support the intermixing; a cache-block access thus need not be performed using all eight data channels simultaneously, but may instead be satisfied by the data channels within a window of time, wherein a controller bundles the returned data into the requested cache block.
In certain embodiments, the multi-data channel memory module may be configurable into either of at least two modes of operation. For instance, in one embodiment, the multi-data channel memory module may be statically or dynamically configurable (e.g., through programming of FPGA 401A of FIG. 6) into a first mode in which its data channels are operated independently for servicing a plurality of sub-cache-block accesses simultaneously, or a second mode in which its data channels are operated together, in the manner of a traditional single-data-channel DIMM, for servicing cache-block accesses.
Although the present invention and its advantages have been described in detail, it should be understood that various changes, substitutions and alterations can be made herein without departing from the spirit and scope of the invention as defined by the appended claims. Moreover, the scope of the present application is not intended to be limited to the particular embodiments of the process, machine, manufacture, composition of matter, means, methods and steps described in the specification. As one of ordinary skill in the art will readily appreciate from the disclosure of the present invention, processes, machines, manufacture, compositions of matter, means, methods, or steps, presently existing or later to be developed that perform substantially the same function or achieve substantially the same result as the corresponding embodiments described herein may be utilized according to the present invention. Accordingly, the appended claims are intended to include within their scope such processes, machines, manufacture, compositions of matter, means, methods, or steps.
This application is a divisional of U.S. patent application No. 16/038,571 filed Jul. 18, 2018, issued as U.S. Pat. No. 10,949,347 on Mar. 16, 2021, which application is a continuation of U.S. patent application No. 15/806,217 filed Nov. 7, 2017, issued as U.S. Pat. No. 10,061,699 on Aug. 28, 2018, which is a continuation of U.S. patent application No. 15/254,975 filed Sep. 1, 2016, issued as U.S. Pat. No. 9,824,010 on Nov. 21, 2017, which is a continuation of U.S. patent application No. 14/673,732, filed Mar. 30, 2015, issued as U.S. Pat. No. 9,449,659 on Sep. 20, 2016, which is a continuation of U.S. patent application No. 12/186,372, filed Aug. 5, 2008, issued as U.S. Pat. No. 9,015,399 on Apr. 21, 2015. The present application also relates to the following commonly-assigned U.S. patent applications: 1) U.S. patent application Ser. No. 11/841,406, filed Aug. 20, 2007, titled “MULTI-PROCESSOR SYSTEM HAVING AT LEAST ONE PROCESSOR THAT COMPRISES A DYNAMICALLY RECONFIGURABLE INSTRUCTION SET”, issued as U.S. Pat. No. 8,156,307 on Apr. 10, 2012, 2) U.S. patent application Ser. No. 11/854,432, filed Sep. 12, 2007, titled “DISPATCH MECHANISM FOR DISPATCHING INSTRUCTIONS FROM A HOST PROCESSOR TO A CO-PROCESSOR”, issued as U.S. Pat. No. 8,122,229 on Feb. 21, 2012, 3) U.S. patent application Ser. No. 11/847,169, filed Aug. 29, 2007, titled “COMPILER FOR GENERATING AN EXECUTABLE COMPRISING INSTRUCTIONS FOR A PLURALITY OF DIFFERENT INSTRUCTION SETS”, issued as U.S. Pat. No. 8,561,037 on Oct. 15, 2013, 4) U.S. patent application Ser. No. 11/969,792, filed Jan. 4, 2008, titled “MICROPROCESSOR ARCHITECTURE HAVING ALTERNATIVE MEMORY ACCESS PATHS”, issued as U.S. Pat. No. 9,710,384 on Jul. 18, 2017, and 5) U.S. patent application Ser. No. 12/186,344, filed Aug. 5, 2008, titled “MEMORY INTERLEAVE FOR HETEROGENEOUS COMPUTING”, issued as U.S. Pat. No. 8,095,735 on Jan. 10, 2012.
Number | Name | Date | Kind |
---|---|---|---|
3434114 | Arulpragasam et al. | Mar 1969 | A |
4128880 | Cray, Jr. | Dec 1978 | A |
4386399 | Rasala et al. | May 1983 | A |
4685076 | Yoshida | Aug 1987 | A |
4817140 | Chandra et al. | Mar 1989 | A |
4897783 | Nay | Jan 1990 | A |
5027272 | Samuels | Jun 1991 | A |
5109499 | Inagami et al. | Apr 1992 | A |
5117487 | Nagata | May 1992 | A |
5202969 | Sato et al. | Apr 1993 | A |
5222224 | Flynn et al. | Jun 1993 | A |
5283886 | Nishii et al. | Feb 1994 | A |
5513366 | Agarwal et al. | Apr 1996 | A |
5598546 | Blomgren | Jan 1997 | A |
5664136 | Witt et al. | Sep 1997 | A |
5752035 | Trimberger | May 1998 | A |
5838984 | Nguyen et al. | Nov 1998 | A |
5887182 | Kinoshita | Mar 1999 | A |
5887183 | Agarwal et al. | Mar 1999 | A |
5920721 | Hunter et al. | Jul 1999 | A |
5935204 | Shimizu et al. | Aug 1999 | A |
5937192 | Martin | Aug 1999 | A |
5941938 | Thayer | Aug 1999 | A |
5999734 | Willis et al. | Dec 1999 | A |
6006319 | Takahashi et al. | Dec 1999 | A |
6023755 | Casselman | Feb 2000 | A |
6075546 | Hussain et al. | Jun 2000 | A |
6076139 | Welker et al. | Jun 2000 | A |
6076152 | Huppenthal et al. | Jun 2000 | A |
6097402 | Case et al. | Aug 2000 | A |
6125421 | Roy | Sep 2000 | A |
6154419 | Shakkarwar | Nov 2000 | A |
6170001 | Hinds et al. | Jan 2001 | B1 |
6175915 | Cashman et al. | Jan 2001 | B1 |
6195676 | Spix et al. | Feb 2001 | B1 |
6202133 | Jeddeloh | Mar 2001 | B1 |
6209067 | Collins et al. | Mar 2001 | B1 |
6240508 | Brown, III et al. | May 2001 | B1 |
6308255 | Gorishek, IV et al. | Oct 2001 | B1 |
6339813 | Smith et al. | Jan 2002 | B1 |
6342892 | Van Hook et al. | Jan 2002 | B1 |
6434687 | Huppenthal | Aug 2002 | B1 |
6473831 | Schade | Oct 2002 | B1 |
6480952 | Gorishek, IV et al. | Nov 2002 | B2 |
6567900 | Kessler | May 2003 | B1 |
6611908 | Lentz et al. | Aug 2003 | B2 |
6665790 | Glossner, III et al. | Dec 2003 | B1 |
6684305 | Deneau | Jan 2004 | B1 |
6701424 | Liao et al. | Mar 2004 | B1 |
6738967 | Radigan | May 2004 | B1 |
6789167 | Naffziger | Sep 2004 | B2 |
6831979 | Callum | Dec 2004 | B2 |
6839828 | Gschwind et al. | Jan 2005 | B2 |
6868472 | Miyake et al. | Mar 2005 | B1 |
6891543 | Wyatt | May 2005 | B2 |
6954845 | Arnold et al. | Oct 2005 | B2 |
6983456 | Poznanovic et al. | Jan 2006 | B2 |
7000211 | Arnold | Feb 2006 | B2 |
7065631 | Weaver | Jun 2006 | B2 |
7120755 | Jamil et al. | Oct 2006 | B2 |
7149867 | Poznanovic et al. | Dec 2006 | B2 |
7167971 | Asaad et al. | Jan 2007 | B2 |
7225324 | Huppenthal et al. | May 2007 | B2 |
7257757 | Chun et al. | Aug 2007 | B2 |
7278122 | Willis | Oct 2007 | B2 |
7328195 | Willis | Feb 2008 | B2 |
7367021 | Ansari et al. | Apr 2008 | B2 |
7376812 | Sanghavi et al. | May 2008 | B1 |
7418571 | Wolrich et al. | Aug 2008 | B2 |
7421565 | Kohn | Sep 2008 | B1 |
7546441 | Ansari et al. | Jun 2009 | B1 |
7577822 | Vorbach | Aug 2009 | B2 |
7643353 | Srinivasan et al. | Jan 2010 | B1 |
8095735 | Brewer et al. | Jan 2012 | B2 |
8122229 | Wallach et al. | Feb 2012 | B2 |
8156307 | Wallach et al. | Apr 2012 | B2 |
8561037 | Wallach et al. | Oct 2013 | B2 |
9015399 | Brewer et al. | Apr 2015 | B2 |
9449659 | Brewer et al. | Sep 2016 | B2 |
9710384 | Wallach et al. | Jul 2017 | B2 |
9824010 | Brewer et al. | Nov 2017 | B2 |
10061699 | Brewer et al. | Aug 2018 | B2 |
10949347 | Brewer et al. | Mar 2021 | B2 |
20010011342 | Pechanek et al. | Aug 2001 | A1 |
20010049816 | Rupp | Dec 2001 | A1 |
20020046324 | Barroso et al. | Apr 2002 | A1 |
20030005424 | Ansari et al. | Jan 2003 | A1 |
20030140222 | Ohmi et al. | Jul 2003 | A1 |
20030226018 | Tardo et al. | Dec 2003 | A1 |
20040003170 | Gibson et al. | Jan 2004 | A1 |
20040107331 | Baxter | Jun 2004 | A1 |
20040117599 | Mittal et al. | Jun 2004 | A1 |
20040193837 | Devaney et al. | Sep 2004 | A1 |
20040193852 | Johnson | Sep 2004 | A1 |
20040194048 | Arnold | Sep 2004 | A1 |
20040215898 | Arimilli et al. | Oct 2004 | A1 |
20040221127 | Ang | Nov 2004 | A1 |
20040236920 | Sheaffer | Nov 2004 | A1 |
20040243984 | Vorbach et al. | Dec 2004 | A1 |
20040250046 | Gonzalez et al. | Dec 2004 | A1 |
20050027970 | Arnold et al. | Feb 2005 | A1 |
20050044539 | Liebenow | Feb 2005 | A1 |
20050108503 | Sandon et al. | May 2005 | A1 |
20050172099 | Lowe | Aug 2005 | A1 |
20050188368 | Kinney | Aug 2005 | A1 |
20050223369 | Chun et al. | Oct 2005 | A1 |
20050262278 | Schmidt | Nov 2005 | A1 |
20060075060 | Clark | Apr 2006 | A1 |
20060149941 | Colavin et al. | Jul 2006 | A1 |
20060259737 | Sachs et al. | Nov 2006 | A1 |
20060288191 | Asaad et al. | Dec 2006 | A1 |
20070005881 | Garney | Jan 2007 | A1 |
20070005932 | Covelli et al. | Jan 2007 | A1 |
20070038843 | Trivedi et al. | Feb 2007 | A1 |
20070106833 | Rankin et al. | May 2007 | A1 |
20070130445 | Lau et al. | Jun 2007 | A1 |
20070153907 | Mehta et al. | Jul 2007 | A1 |
20070157166 | Stevens | Jul 2007 | A1 |
20070186210 | Hussain et al. | Aug 2007 | A1 |
20070226424 | Clark et al. | Sep 2007 | A1 |
20070245097 | Gschwind et al. | Oct 2007 | A1 |
20070283336 | Gschwind et al. | Dec 2007 | A1 |
20070288701 | Hofstee et al. | Dec 2007 | A1 |
20070294666 | Papakipos et al. | Dec 2007 | A1 |
20080059758 | Sachs | Mar 2008 | A1 |
20080059759 | Sachs | Mar 2008 | A1 |
20080059760 | Sachs | Mar 2008 | A1 |
20080104365 | Kohno et al. | May 2008 | A1 |
20080209127 | Brokenshire et al. | Aug 2008 | A1 |
20080215854 | Asaad et al. | Sep 2008 | A1 |
20090172364 | Sprangle et al. | Jul 2009 | A1 |
20090177843 | Wallach et al. | Jul 2009 | A1 |
20090219779 | Mao et al. | Sep 2009 | A1 |
20100002572 | Garrett | Jan 2010 | A1 |
20100138587 | Hutson | Jun 2010 | A1 |
20110055516 | Willis | Mar 2011 | A1 |
20150206561 | Brewer et al. | Jul 2015 | A1 |
20160371185 | Brewer et al. | Dec 2016 | A1 |
20180060234 | Brewer et al. | Mar 2018 | A1 |
20180322054 | Brewer et al. | Nov 2018 | A1 |
Number | Date | Country |
---|---|---|
2008014494 | Jan 2008 | WO |
Entry |
---|
International Search Report & Written Opinion dated Feb. 5, 2009 issued for PCT/US08/87233, 11 pgs. |
International Search Report & Written Opinion dated Nov. 12, 2008 issued for PCT/US08/73423, 12 pgs. |
International Search Report & Written Opinion dated Nov. 18, 2008 issued for PCT/US08/75828, 12 pgs. |
International Search Report & Written Opinion dated Dec. 1, 2009 issued for PCT/US09/60811, 7 pgs. |
International Search Report & Written Opinion dated Dec. 9, 2009 issued for PCT/US09/60820, 8 pgs. |
International Search Report & Written Opinion dated Oct. 26, 2009 issued for PCT/US2009/051096, 9 pgs. |
International Search Report & Written Opinion dated Nov. 14, 2008 issued for PCT/US08/74566, 9 pgs. |
U.S. Appl. No. 16/038,571 titled “Multiple Data Channel Memory Module Architecture” filed Jul. 18, 2018, pp. all. |
U.S. Appl. No. 15/806,217, entitled “Multiple Data Channel Memory Module Architecture” filed Nov. 7, 2017, pp. all. |
“Cray XD1 FPGA Development”, Release 1.2, S-6400-12, issued Apr. 18, 2005. Available at www.eng.uah.edu/˜jacksoa/CrayXD1FPGADevelopment.pdf, 2005, pp. all. |
“Poster entitled “GigaScale Mixed-Signal System Verification””, FTL Systems, Inc., presented at the DARPA/MTO Team/NeoCAD 2003 Fall Review, Sep. 15-17, 2003, Monterey, CA, a public unclassified meeting, 2003, pp. all. |
“Poster entitled StarStream™ GigaScale Mixed-Signal System Verification”, FTL Systems, Inc., presented at the DARPA/MTO Team/NeoCAD Program Review, Feb. 23, 2004, Scottsdale, AZ, a public unclassified meeting, 2004, pp. all. |
“StarStream Design Summary”, FTL Systems, Inc., available at Design Automation Conference (DAC), Jun. 2005, Anaheim, CA, 2005, pp. all. |
“The PC's x86 Instruction Set”, The PC Guide, www.pcguide.com/ref/cpu/arch/int/instX86-c.html, Apr. 2001, 3 pgs. |
“XSA Board V1.1, V1.2 User Manual”, XESS Corporation (Release Date: Jun. 23, 2005), pp. all. |
“XSA-50 Spartan-2 Prototyping Board with 2.5V, 50,000-gate FPGA”, XESS Corporation (Copyright 1998-2008), pp. all. |
Arnold, Jeffrey “The Splash 2 Processor and Applications”, IEEE, Nov. 1993, pp. 482-485. |
Belgard, Rich, “Reconfigurable Illogic”, Microprocessor, The Insiders Guide to Microprocessor Hardware, May 10, 2004, 4 pgs. |
Bhuyan, “Lecture 15: Symmetric Multiprocessor: Cache Protocols”, Feb. 28, 2001, 16 pgs. |
Callahan, Timothy et al., “The Garp Architecture and C Compiler”, IEEE Computer, vol. 33, No. 4, Apr. 2000, 62-69. |
Estrin, Gerald, “Organization of Computer Systems—The Fixed Plus Variable Structure Computer”, May 1960, pp. all. |
Gokhale, Maya, “Heterogeneous Processing”, Los Alamos Computer Science Institute LACSI 2006, Oct. 17-19, 2006, Santa Fe, NM. Available at www.cct.lsu.edu/˜estrabd/LACSI2006/workshops/workshop5/gokhale mccormick.pdf, 2006, pp. all. |
Gokhale, Maya, “Reconfigurable Computing”, Accelerating Computation with Field-Programmable Gate Arrays, © Springer, Dec. 2005, pp. 60-64. |
Hauck, “The Roles of FPGAs in Reprogrammable Systems”, Proceedings for the IEEE, vol. 86, No. 4, Apr. 1998, pp. 615-638. |
Koch, Andreas et al. “A Universal Co-Processor for Workstations”, Selected paper from the Oxford International Workshop on Field Programmable Logic and Applications, Sep. 1993, 14 pgs. |
Levine, et al. “Efficient Application Representation for HASTE: Hybrid Architectures with a Single, Transformable Executable”, Apr. 2003, 10 pgs. |
Page, “Reconfigurable Processor Architectures”, Microprocessors and Microsystems, vol. 20, issue 3, May 1996, pp. 185-196. |
Shirazi, et al. “Run-Time Management of Dynamically Reconfigurable Designs”, Field-Programmable Logic and Applications from FPGAs to Computing Paradigm, Aug.-Sep. 1998, pp. all. |
Siewiorek, Daniel et al. “Computer Structures: Principles and Examples”, McGraw-Hill, Figure 1(a), 1982, p. 334. |
Tredennick, Nick et al. “Microprocessor Sunset”, Microprocessor, The Insiders Guide to Microprocessor Hardware, May 3, 2004, 4 pgs. |
Vassiliadis, et al. “The ARISE Reconfigurable Instruction Set Extension Framework”, Jul. 16, 2007. |
Number | Date | Country | |
---|---|---|---|
20210182195 A1 | Jun 2021 | US |
Number | Date | Country | |
---|---|---|---|
Parent | 16038571 | Jul 2018 | US |
Child | 17191542 | US |
Number | Date | Country | |
---|---|---|---|
Parent | 15806217 | Nov 2017 | US |
Child | 16038571 | US | |
Parent | 15254975 | Sep 2016 | US |
Child | 15806217 | US | |
Parent | 14673732 | Mar 2015 | US |
Child | 15254975 | US | |
Parent | 12186372 | Aug 2008 | US |
Child | 14673732 | US |