Methods and apparatus of stacking DRAMs

Information

  • Patent Grant
  • Patent Number
    8,619,452
  • Date Filed
    Friday, September 1, 2006
  • Date Issued
    Tuesday, December 31, 2013
Abstract
Large capacity memory systems are constructed using stacked memory integrated circuits or chips. The stacked memory chips are constructed in a way that avoids problems such as signal integrity degradation while still meeting current and future memory standards.
Description
BACKGROUND OF THE INVENTION

1. Field of the Invention


The present invention is directed toward the field of building custom memory systems cost-effectively for a wide range of markets.


2. Art Background


Dynamic Random Access Memory (DRAM) is the most popular type of volatile memory and is widely used in a number of different markets. The popularity of DRAMs is mostly due to their cost-effectiveness (Mb/$). The PC main memory market has traditionally been the largest consumer of DRAMs.


The DRAM interface speed in several important markets is increasing rapidly. For example, the PC market today uses 667 MHz DDR2 SDRAMs. The industry is on track to use 800 MHz DDR2 SDRAMs in 2006. Effort is also underway to develop DDR3 SDRAMs, which are expected to have interface speeds ranging from 800 MHz to 1600 MHz.


Signal integrity becomes increasingly challenging as the interface speed increases. At higher speeds, the number of loads on a memory channel must be decreased in order to ensure clean signals. For example, when the PC desktop segment used 133 MHz SDRAMs, three DIMM slots per memory channel (or bus or interface) were the norm when using unbuffered modules. When this market segment adopted DDR SDRAMs and now DDR2 SDRAMs, the number of DIMM slots per memory channel dropped to two. At DDR3 speeds, it is predicted that only one DIMM slot will be possible per memory channel. This obviously places an upper limit on the maximum memory capacity of the system.


Clearly there is a need for an invention that increases the memory capacity of a system in a manner that is both cost-effective and compatible with existing and future standards while solving various technical problems like signal integrity.


SUMMARY OF THE INVENTION

In one embodiment, large capacity memory systems are constructed using stacked memory integrated circuits or chips. The stacked memory chips are constructed in a way that avoids problems such as signal integrity degradation while still meeting current and future memory standards.





BRIEF DESCRIPTION OF THE DRAWINGS


FIG. 1 illustrates one embodiment for a FB-DIMM.



FIG. 2A includes the FB-DIMMs of FIG. 1 with annotations to illustrate latencies between a memory controller and two FB-DIMMs.



FIG. 2B illustrates latency in accessing an FB-DIMM with DRAM stacks, where each stack contains two DRAMs.



FIG. 3 is a block diagram illustrating one embodiment of a memory device that includes multiple memory core chips.



FIG. 4 is a block diagram illustrating one embodiment for partitioning a high speed DRAM device into an asynchronous memory core chip and an interface chip.



FIG. 5 is a block diagram illustrating one embodiment for partitioning a memory device into a synchronous memory chip and a data interface chip.



FIG. 6 illustrates one embodiment for stacked memory chips.



FIG. 7 is a block diagram illustrating one embodiment for interfacing a memory device to a DDR2 memory bus.



FIG. 8a is a block diagram illustrating one embodiment for stacking memory chips on a DIMM module.



FIG. 8b is a block diagram illustrating one embodiment for stacking memory chips with memory sparing.



FIG. 8c is a block diagram illustrating operation of a working pool of stacked memory.



FIG. 8d is a block diagram illustrating one embodiment for implementing memory sparing for stacked memory chips.



FIG. 8e is a block diagram illustrating one embodiment for implementing memory sparing on a per stack basis.



FIG. 9a is a block diagram illustrating memory mirroring in accordance with one embodiment.



FIG. 9b is a block diagram illustrating one embodiment for a memory device that enables memory mirroring.



FIG. 9c is a block diagram illustrating one embodiment for a mirrored memory system with stacks of memory.



FIG. 9d is a block diagram illustrating one embodiment for enabling memory mirroring simultaneously across all stacks of a DIMM.



FIG. 9e is a block diagram illustrating one embodiment for enabling memory mirroring on a per stack basis.



FIG. 10a is a block diagram illustrating a stack of memory chips with memory RAID capability during execution of a write operation.



FIG. 10b is a block diagram illustrating a stack of memory chips with memory RAID capability during a read operation.



FIG. 11 illustrates conventional impedance loading as a result of adding DRAMs to a high-speed memory bus.



FIG. 12 illustrates impedance loading as a result of adding DRAMs to a high-speed memory bus in accordance with one embodiment.



FIG. 13 is a block diagram illustrating one embodiment for adding low-speed memory chips using a socket.



FIG. 14 illustrates a PCB with a socket located on top of a stack.



FIG. 15 illustrates a PCB with a socket located on the opposite side from the stack.



FIG. 16 illustrates an upgrade PCB that contains one or more memory chips.



FIG. 17 is a block diagram illustrating one embodiment for stacking memory chips.



FIG. 18 is a timing diagram for implementing memory RAID using a data mask (“DM”) signal in a three-chip stack composed of 8-bit wide DDR2 SDRAMs.





DETAILED DESCRIPTION

The disclosure of U.S. Provisional Patent Application Ser. No. 60/713,815, entitled “Methods and Apparatus of Stacking DRAMs”, filed on Sep. 2, 2005, is hereby expressly incorporated herein by reference.


There are market segments such as servers and workstations that require very large memory capacities. One way to provide large memory capacity is to use Fully Buffered DIMMs (FB-DIMMs), wherein the DRAMs are electrically isolated from the memory channel by an Advanced Memory Buffer (AMB). The FB-DIMM solution is expected to be used in the server and workstation market segments. An AMB acts as a bridge between the memory channel and the DRAMs, and also acts as a repeater. This ensures that the memory channel is always a point-to-point connection. FIG. 1 illustrates one embodiment of a memory channel with FB-DIMMs. FB-DIMMs 100 and 150 include DRAM chips (110 and 160) and AMBs 120 and 170. A high-speed bi-directional link 135 couples a memory controller 130 to FB-DIMM 100. Similarly, FB-DIMM 100 is coupled to FB-DIMM 150 via high-speed bi-directional link 140. Additional FB-DIMMs may be added in a similar manner.


The FB-DIMM solution has some drawbacks, the two main ones being higher cost and higher latency (i.e. lower performance). Each AMB is expected to cost $10-$15 in volume, a substantial additional fraction of the memory module cost. In addition, each AMB introduces a substantial amount of latency (˜5 ns). Therefore, as the memory capacity of the system increases by adding more FB-DIMMs, the performance of the system degrades due to the latencies of successive AMBs.


An alternate method of increasing memory capacity is to stack DRAMs on top of each other. This increases the total memory capacity of the system without adding additional distributed loads (instead, the electrical load is added at almost a single point). In addition, stacking DRAMs on top of each other reduces the performance impact of AMBs since multiple FB-DIMMs may be replaced by a single FB-DIMM that contains stacked DRAMs. FIG. 2A includes the FB-DIMMs of FIG. 1 with annotations to illustrate latencies between a memory controller and two FB-DIMMs. The latency between memory controller 130 and FB-DIMM 100 is the sum of t1 and tc1, wherein t1 is the delay between memory channel interface of the AMB 120 and the DRAM interface of AMB 120 (i.e., the delay through AMB 120 when acting as a bridge), and tc1 is the signal propagation delay between memory controller 130 and FB-DIMM 100. Note that t1 includes the delay of the address/control signals through AMB 120 and optionally that of the data signals through AMB 120. Also, tc1 includes the propagation delay of signals from the memory controller 130 to FB-DIMM 100 and optionally, that of the signals from FB-DIMM 100 to the memory controller 130. As shown in FIG. 2A, the latency between memory controller 130 and FB-DIMM 150 is the sum of t2+t1+tc1+tc2, wherein t2 is the delay between input and output memory channel interfaces of AMB 120 (i.e. when AMB 120 is operating as a repeater) and tc2 is a signal propagation delay between FB-DIMM 100 and FB-DIMM 150. t2 includes the delay of the signals from the memory controller 130 to FB-DIMM 150 through AMB 120, and optionally that of the signals from FB-DIMM 150 to memory controller 130 through AMB 120. Similarly, tc2 represents the propagation delay of signals from FB-DIMM 100 to FB-DIMM 150 and optionally that of signals from FB-DIMM 150 and FB-DIMM 100. t1 represents the delay of the signals through an AMB chip that is operating as a bridge, which in this instance, is AMB 170.



FIG. 2B illustrates latency in accessing an FB-DIMM with DRAM stacks, where each stack contains two DRAMs. In some embodiments, a “stack” comprises at least one DRAM chip. In other embodiments, a “stack” comprises an interface or buffer chip with at least one DRAM chip. FB-DIMM 210 includes three stacks of DRAMs (220, 230 and 240) and AMB 250 accessed by memory controller 200. As shown in FIG. 2B, the latency for accessing the stacks of DRAMs is the sum of t1 and tc1. It can be seen from FIG. 2A and 2B that the latency is less in a memory channel with an FB-DIMM that contains 2-DRAM stacks than in a memory channel with two standard FB-DIMMs (i.e. FB-DIMMs with individual DRAMs). Note that FIG. 2B shows the case of 2 standard FB-DIMMs vs. an FB-DIMM that uses 2-DRAM stacks as an example. However, this may be extended to n standard FB-DIMMs vs. an FB-DIMM that uses n-DRAM stacks.
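As a rough illustration of this latency comparison, the following sketch is a simplified model only; the bridge, repeater, and propagation delay values are assumptions chosen for the example (not figures taken from the specification), and the function names are hypothetical. It computes the worst-case access latency for a chain of n standard FB-DIMMs versus a single FB-DIMM built with n-DRAM stacks.

```python
# Simplified latency model corresponding to FIG. 2A vs. FIG. 2B.
# t_bridge : delay through an AMB acting as a bridge (t1)
# t_repeat : delay through an AMB acting as a repeater (t2)
# t_prop   : signal propagation delay per channel segment (tc1, tc2, ...)
# All numeric values are illustrative assumptions.

def latency_standard_fbdimms(n, t_bridge=5.0, t_repeat=3.0, t_prop=1.0):
    """Latency (ns) to reach the n-th standard FB-DIMM: the signal crosses
    n channel segments, is repeated by the first n-1 AMBs, and is bridged
    by the AMB on the target FB-DIMM."""
    return n * t_prop + (n - 1) * t_repeat + t_bridge

def latency_stacked_fbdimm(t_bridge=5.0, t_prop=1.0):
    """Latency (ns) to reach any DRAM stack on a single FB-DIMM built with
    n-DRAM stacks: one channel segment plus one bridge delay."""
    return t_prop + t_bridge

if __name__ == "__main__":
    for n in (1, 2, 4):
        print(n, latency_standard_fbdimms(n), latency_stacked_fbdimm())
```

For n = 2 this reproduces the comparison in the text: t2 + t1 + tc1 + tc2 for two standard FB-DIMMs versus t1 + tc1 for an FB-DIMM with 2-DRAM stacks.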


Stacking high speed DRAMs on top of each other has its own challenges. As high speed DRAMs are stacked, their respective electrical loads or input parasitics (input capacitance, input inductance, etc.) add up, causing signal integrity and electrical loading problems and thus limiting the maximum interface speed at which a stack may operate. In addition, the use of source synchronous strobe signals introduces an added level of complexity when stacking high speed DRAMs.


Stacking low speed DRAMs on top of each other is easier than stacking high speed DRAMs on top of each other. Careful study of a high speed DRAM will show that it consists of a low speed memory core and a high speed interface. So, if we separate a high speed DRAM into two chips, namely a low speed memory chip and a high speed interface chip, we may stack multiple low speed memory chips behind a single high speed interface chip. FIG. 3 is a block diagram illustrating one embodiment of a memory device that includes multiple memory core chips. Memory device 320 includes a high speed interface chip 300 and a plurality of low speed memory chips 310 stacked behind high speed interface chip 300. One way of partitioning is to separate a high speed DRAM into a low speed, wide, asynchronous memory core and a high speed interface chip. FIG. 4 is a block diagram illustrating one embodiment for partitioning a high speed DRAM device into an asynchronous memory core and an interface chip. Memory device 400 includes asynchronous memory core chip 420 interfaced to a memory channel via interface chip 410. As shown in FIG. 4, interface chip 410 receives address (430), command (440) and data (460) from an external data bus, and uses address (435), command & control (445 and 450) and data (465) over an internal data bus to communicate with asynchronous memory core chip 420.


However, it must be noted that several other partitions are also possible. For example, the address bus of a high speed DRAM typically runs at a lower speed than the data bus. For a DDR400 DDR SDRAM, the address bus runs at 200 MHz while the data bus runs at 400 MHz, whereas for a DDR2-800 DDR2 SDRAM, the address bus runs at 400 MHz while the data bus runs at 800 MHz. High-speed DRAMs use pre-fetching in order to support high data rates. So, a DDR2-800 device runs internally at a rate equivalent to 200 MHz, except that 4n data bits are accessed from the memory core for each read or write operation, where n is the width of the external data bus. The 4n internal data bits are multiplexed/de-multiplexed onto the n external data pins, which enables the external data pins to run at 4 times the internal data rate of 200 MHz.


Thus another way to partition, for example, a high speed n-bit wide DDR2 SDRAM could be to split it into a slower, 4n-bit wide, synchronous DRAM chip and a high speed data interface chip that does the 4n-to-n data multiplexing/de-multiplexing. FIG. 5 is a block diagram illustrating one embodiment for partitioning a memory device into a synchronous memory chip and a data interface chip. For this embodiment, memory device 500 includes synchronous memory chip 510 and a data interface chip 520. Synchronous memory chip 510 receives address (530) and command & clock (540) from a memory channel. It is also connected to data interface chip 520 through command & control (550) and data (570) over a 4n-bit wide internal data bus. Data interface chip 520 connects to an n-bit wide external data bus 545 and a 4n-bit wide internal data bus 570. In one embodiment, an n-bit wide high speed DRAM may be partitioned into an m*n-bit wide synchronous DRAM chip and a high-speed data interface chip that does the m*n-to-n data multiplexing/de-multiplexing, where m is the amount of pre-fetching, m>1, and m is typically an even number.
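The m*n-to-n multiplexing performed by the data interface chip can be sketched behaviorally as follows. This is only an illustration; the choice of m = 4 and n = 8 and the function names are assumptions for the example, not part of the specification.

```python
# Behavioral sketch of the 4n-to-n multiplexing done by the data interface
# chip: one wide internal access <-> m narrow external beats.
# m (prefetch depth) and n (external data width) are example values.

M = 4   # prefetch depth (e.g. DDR2 uses a 4n prefetch)
N = 8   # external data bus width in bits

def core_to_pins(wide_word):
    """Split one m*n-bit internal word into m consecutive n-bit beats."""
    assert len(wide_word) == M * N
    return [wide_word[i * N:(i + 1) * N] for i in range(M)]

def pins_to_core(beats):
    """Reassemble m consecutive n-bit beats into one m*n-bit internal word."""
    assert len(beats) == M and all(len(b) == N for b in beats)
    return [bit for beat in beats for bit in beat]

word = list(range(M * N))      # stand-in for 4n data bits from the slow core
beats = core_to_pins(word)     # 4 external transfers of n bits each
assert pins_to_core(beats) == word
```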


As explained above, while several different partitions are possible, in some embodiments the partitioning should be done in such a way that:


the host system sees only a single load (per DIMM, in the embodiments where the memory devices are on a DIMM) on the high speed signals or pins of the memory channel or bus; and

the memory chips that are to be stacked on top of each other operate at a speed lower than the data rate of the memory channel or bus (i.e. the rate of the external data bus), such that stacking these chips does not affect the signal integrity.


Based on this, multiple memory chips may be stacked behind a single interface chip that interfaces to some or all of the signals of the memory channel. Note that this means that some or all of the I/O signals of a memory chip connect to the interface chip rather than directly to the memory channel or bus of the host system. The I/O signals from the multiple memory chips may be bussed together to the interface chip or may be connected as individual signals to the interface chip. Similarly, the I/O signals from the multiple memory chips that are to be connected directly to the memory channel or bus of the host system may be bussed together or may be connected as individual signals to the external memory bus. One or more buses may be used when the I/O signals are to be bussed to either the interface chip or the memory channel or bus. Similarly, the power for the memory chips may be supplied by the interface chip or may come directly from the host system.



FIG. 6 illustrates one embodiment for stacked memory chips. Memory chips (620, 630 and 640) include inputs and/or outputs for s1, s2, s3, s4 as well as v1 and v2. The s1 and s2 inputs and/or outputs are coupled to external memory bus 650, and the s3 and s4 inputs and/or outputs are coupled to interface chip 610. Memory signals s1 and s4 are examples of signals that are not bussed. Memory signals s2 and s3 are examples of bussed memory signals. Memory power rail v1 is an example of memory power connected directly to external bus 650, whereas v2 is an example of a memory power rail connected to interface chip 610. The memory chips that are to be stacked on top of each other may be stacked as dies or as individually packaged parts. One method is to stack individually packaged parts since these parts may be tested and burnt-in before stacking. In addition, since packaged parts may be stacked on top of each other and soldered together, it is quite easy to repair a stack. To illustrate, if a part in the stack were to fail, the stack may be de-soldered and separated into individual packages, the failed chip may be replaced by a new and functional chip, and the stack may be re-assembled. However, it should be clear that repairing a stack as described above is time consuming and labor intensive.
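The connectivity options of FIG. 6 can be summarized in a small routing table. The sketch below is purely descriptive: the signal names s1-s4 and power rails v1-v2 come from FIG. 6, while the table structure and function name are assumptions made for illustration.

```python
# Descriptive model of the FIG. 6 signal routing options.
# "destination" is either the interface chip or the external memory bus;
# "bussed" indicates whether the signal is shared by all chips in the stack.

SIGNAL_ROUTING = {
    # name : (destination,     bussed)
    "s1":   ("external_bus",   False),  # per-chip signal tied to the memory bus
    "s2":   ("external_bus",   True),   # bussed signal tied to the memory bus
    "s3":   ("interface_chip", True),   # bussed signal handled by the interface chip
    "s4":   ("interface_chip", False),  # per-chip signal handled by the interface chip
    "v1":   ("external_bus",   True),   # power rail supplied directly by the host
    "v2":   ("interface_chip", True),   # power rail supplied through the interface chip
}

def signals_seen_by_host():
    """Signals that load the external memory bus directly."""
    return [s for s, (dest, _) in SIGNAL_ROUTING.items() if dest == "external_bus"]

print(signals_seen_by_host())   # ['s1', 's2', 'v1']
```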


One way to build an effective p-chip memory stack is to use p+q memory chips and an interface chip, where the q extra memory chips (1≦q≦p, typically) are spare chips, wherein p and q comprise integer values. If one or more of the p memory chips becomes damaged during assembly of the stack, they may be replaced with the spare chips. The post-assembly detection of a failed chip may either be done using a tester or using built-in self test (BIST) logic in the interface chip. The interface chip may also be designed to have the ability to replace a failed chip with a spare chip such that the replacement is transparent to the host system.


This idea may be extended further to run-time (i.e. under normal operating conditions) replacement of memory chips in a stack. Electronic memory chips such as DRAMs are prone to hard and soft memory errors. A hard error is typically caused by broken or defective hardware such that the memory chip consistently returns incorrect results. For example, a cell in the memory array might be stuck low so that it always returns a value of “0” even when a “1” is stored in that cell. Hard errors are caused by silicon defects, bad solder joints, broken connector pins, etc. Hard errors may typically be screened by rigorous testing and burn-in of DRAM chips and memory modules. Soft errors are random, temporary errors that are caused when a disturbance near a memory cell alters the content of the cell. The disturbance is usually caused by cosmic particles impinging on the memory chips. Soft errors may be corrected by overwriting the bad content of the memory cell with the correct data. For DRAMs, soft errors are more prevalent than hard errors.


Computer manufacturers use many techniques to deal with soft errors. The simplest way is to use an error correcting code (ECC), where typically 72 bits are used to store 64 bits of data. This type of code allows the detection and correction of a single-bit error, and the detection of two-bit errors. ECC does not protect against a hard failure of a DRAM chip. Computer manufacturers use a technique called Chipkill or Advanced ECC to protect against this type of chip failure. Disk manufacturers use a technique called Redundant Array of Inexpensive Disks (RAID) to deal with similar disk errors.


More advanced techniques such as memory sparing, memory mirroring, and memory RAID are also available to protect against memory errors and provide higher levels of memory availability. These features are typically found on higher-end servers and require special logic in the memory controller. Memory sparing involves the use of a spare or redundant memory bank that replaces a memory bank that exhibits an unacceptable level of soft errors. A memory bank may be composed of a single DIMM or multiple DIMMs. Note that the memory bank in this discussion about advanced memory protection techniques should not be confused with the internal banks of DRAMs.


In memory mirroring, every block of data is written to system or working memory as well as to the same location in mirrored memory but data is read back only from working memory. If a bank in the working memory exhibits an unacceptable level of errors during read back, the working memory will be replaced by the mirrored memory.


RAID is a well-known set of techniques used by the disk industry to protect against disk errors. Similar RAID techniques may be applied to memory technology to protect against memory errors. Memory RAID is similar in concept to RAID 3 or RAID 4 used in disk technology. In memory RAID a block of data (typically some integer number of cachelines) is written to two or more memory banks while the parity for that block is stored in a dedicated parity bank. If any of the banks were to fail, the block of data may be re-created with the data from the remaining banks and the parity data.


These advanced techniques (memory sparing, memory mirroring, and memory RAID) have up to now been implemented using individual DIMMs or groups of DIMMs. This obviously requires dedicated logic in the memory controller. However, in this disclosure, such features may mostly be implemented within a memory stack, requiring only minimal or no additional support from the memory controller.


A DIMM or FB-DIMM may be built using memory stacks instead of individual DRAMs. For example, a standard FB-DIMM might contain nine, 18, or more DDR2 SDRAM chips. An FB-DIMM may instead contain nine, 18, or more DDR2 stacks, wherein each stack contains a DDR2 SDRAM interface chip and one or more low speed memory chips stacked on top of it (i.e. electrically behind the interface chip—the interface chip is electrically between the memory chips and the external memory bus). Similarly, a standard DDR2 DIMM may contain nine, 18, or more DDR2 SDRAM chips. A DDR2 DIMM may instead contain nine, 18, or more DDR2 stacks, wherein each stack contains a DDR2 SDRAM interface chip and one or more low speed memory chips stacked on top of it. An example of a DDR2 stack built according to one embodiment is shown in FIG. 7.



FIG. 7 is a block diagram illustrating one embodiment for interfacing a memory device to a DDR2 memory bus. As shown in FIG. 7, memory device 700 comprises memory chips 720 coupled to DDR2 SDRAM interface chip 710. In turn, DDR2 SDRAM interface chip 710 interfaces memory chips 720 to external DDR2 memory bus 730. As described previously, in one embodiment, an effective p-chip memory stack may be built with p+q memory chips and an interface chip, where the q chips may be used as spares, and p and q are integer values. In order to implement memory sparing within the stack, the p+q chips may be separated into two pools of chips: a working pool of p chips and a spare pool of q chips. So, if a chip in the working pool were to fail, it may be replaced by a chip from the spare pool. The replacement of a failed working chip by a spare chip may be triggered, for example, by the detection of a multi-bit failure in a working chip, or when the number of errors in the data read back from a working chip crosses a pre-defined or programmable error threshold.


Since ECC is typically implemented across the entire 64 data bits in the memory channel and optionally, across a plurality of memory channels, the detection of single-bit or multi-bit errors in the data read back is only done by the memory controller (or the AMB in the case of an FB-DIMM). The memory controller (or AMB) may be designed to keep a running count of errors in the data read back from each DIMM. If this running count of errors were to exceed a certain pre-defined or programmed threshold, then the memory controller may communicate to the interface chip to replace the chip in the working pool that is generating the errors with a chip from the spare pool.


For example, consider the case of a DDR2 DIMM. Let us assume that the DIMM contains nine DDR2 stacks (stack 0 through 8, where stack 0 corresponds to the least significant eight data bits of the 72-bit wide memory channel, and stack 8 corresponds to the most significant 8 data bits), and that each DDR2 stack consists of five chips, four of which are assigned to the working pool and the fifth chip is assigned to the spare pool. Let us also assume that the first chip in the working pool corresponds to address range [N−1:0], the second chip in the working pool corresponds to address range [2N−1:N], the third chip in the working pool corresponds to address range [3N−1:2N], and the fourth chip in the working pool corresponds to address range [4N−1:3N], where “N” is an integer value.


Under normal operating conditions, the memory controller may be designed to keep track of the errors in the data from the address ranges [4N−1:3N], [3N−1:2N], [2N−1:N], and [N−1:0]. If, say, the errors in the data in the address range [3N−1:2N] exceeded the pre-defined threshold, then the memory controller may instruct the interface chip in the stack to replace the third chip in the working pool with the spare chip in the stack. This replacement may either be done simultaneously in all the nine stacks in the DIMM or may be done on a per-stack basis. Assume that the errors in the data from the address range [3N−1:2N] are confined to data bits [7:0] from the DIMM. In the former case, the third chip in all the stacks will be replaced by the spare chip in the respective stacks. In the latter case, only the third chip in stack 0 (the LSB stack) will be replaced by the spare chip in that stack. The latter case is more flexible since it compensates for or tolerates one failing chip in each stack (which need not be the same chip in all the stacks), whereas the former case compensates for or tolerates one failing chip over all the stacks in the DIMM. So, in the latter case, for an effective p-chip stack built with p+q memory chips, up to q chips may fail per stack and be replaced with spare chips. The memory controller (or AMB) may trigger the memory sparing operation (i.e. replacing a failing working chip with a spare chip) by communicating with the interface chips either through in-band signaling or through sideband signaling. A System Management Bus (SMBus) is an example of sideband signaling.
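A minimal sketch of the per-stack sparing policy described above is given below. It assumes four working chips covering the address ranges [N−1:0] through [4N−1:3N], one spare chip, and a programmable error threshold; the class and method names, the range size, and the threshold value are all assumptions used only to illustrate the bookkeeping, not the interface chip's actual logic.

```python
# Sketch of per-stack memory sparing: four working chips cover contiguous
# address ranges, and one spare chip replaces whichever working chip
# exceeds the error threshold. Names and values are illustrative.

N = 1 << 20                 # size of each working chip's address range
ERROR_THRESHOLD = 16        # programmable threshold of correctable errors

class Stack:
    def __init__(self):
        self.error_count = [0, 0, 0, 0]            # per working chip
        self.remap = {0: 0, 1: 1, 2: 2, 3: 3}      # logical chip -> physical chip
        self.spare_used = False

    def chip_for_address(self, addr):
        """Logical working chip (0..3) covering this address."""
        return addr // N

    def report_error(self, addr):
        chip = self.chip_for_address(addr)
        self.error_count[chip] += 1
        if self.error_count[chip] > ERROR_THRESHOLD and not self.spare_used:
            # Replace the failing working chip with the spare (physical chip 4).
            self.remap[chip] = 4
            self.spare_used = True

    def physical_target(self, addr):
        """Physical chip actually accessed for this address."""
        return self.remap[self.chip_for_address(addr)]

stack = Stack()
for _ in range(ERROR_THRESHOLD + 1):
    stack.report_error(2 * N + 123)               # errors in range [3N-1:2N]
assert stack.physical_target(2 * N + 123) == 4    # now served by the spare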


Embodiments for memory sparing within a memory stack configured in accordance with some embodiments are shown in FIGS. 8a-8e.



FIG. 8a is a block diagram illustrating one embodiment for stacking memory chips on a DIMM module. For this example, memory module 800 includes nine stacks (810, 820, 830, 840, 850, 860, 870, 880 and 890). Each stack comprises at least two memory chips. In one embodiment, memory module 800 is configured to work in accordance with DDR2 specifications.



FIG. 8b is a block diagram illustrating one embodiment for stacking memory chips with memory sparing. For the example memory stack shown in FIG. 8b, memory device 875 includes memory chips (885, 886, 888 and 892) stacked to form the working memory pool. For this embodiment, to access the working memory pool, the memory chips are each assigned a range of addresses as shown in FIG. 8b. Memory device 875 also includes spare memory chip 895 that forms the spare memory pool. However, the spare memory pool may comprise any number of memory chips.



FIG. 8c is a block diagram illustrating operation of a working memory pool. For this embodiment, memory module 812 includes a plurality of integrated circuit memory stacks (814, 815, 816, 817, 818, 819, 821, 822 and 823). For this example, each stack contains a working memory pool 825 and a spare memory chip 855.



FIG. 8d is a block diagram illustrating one embodiment for implementing memory sparing for stacked memory chips. For this example, memory module 824 also includes a plurality of integrated circuit memory stacks (826, 827, 828, 829, 831, 832, 833, 834 and 835). For this embodiment, memory sparing may be enabled if data errors occur in one or more memory chips (i.e., occur in an address range). For the example illustrated in FIG. 8d, data errors exceeding a predetermined threshold have occurred in DQ[7:0] in the address range [3N−1:2N]. To implement memory sparing, the failing chip is replaced simultaneously in all of the stacks of the DIMM. Specifically, for this example, failing chip 857 is replaced by spare chip 855 in all memory stacks of the DIMM.



FIG. 8e is a block diagram illustrating one embodiment for implementing memory sparing on a per stack basis. For this embodiment, memory module 836 also includes a plurality of integrated circuit memory stacks (837, 838, 839, 841, 842, 843, 844, 846 and 847). Each stack is apportioned into the working memory pool and a spare memory pool (e.g., spare chip 861). For this example, memory chip 863 failed in stack 847. To enable memory sparing, only the spare chip in stack 847 replaces the failing chip, and all other stacks continue to operate using the working pool.


Memory mirroring can be implemented by dividing the p+q chips in each stack into two equally sized sections—the working section and the mirrored section. All data written to memory by the memory controller is stored in the same location in the working section and in the mirrored section. When data is read from the memory by the memory controller, the interface chip reads only the appropriate location in the working section and returns the data to the memory controller. If the memory controller detects that the data returned had a multi-bit error, for example, or if the cumulative errors in the read data exceeded a pre-defined or programmed threshold, the memory controller can be designed to tell the interface chip (by means of in-band or sideband signaling) to stop using the working section and instead treat the mirrored section as the working section. As discussed for the case of memory sparing, this replacement can either be done across all the stacks in the DIMM or can be done on a per-stack basis. The latter case is more flexible since it can compensate for or tolerate one failing chip in each stack whereas the former case can compensate for or tolerate one failing chip over all the stacks in the DIMM.
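The read/write policy of the working and mirrored sections can be sketched as follows. This is a simplified model of the behavior described above; the class name, method names, and threshold value are assumptions made for illustration.

```python
# Sketch of memory mirroring inside a stack: every write goes to both the
# working and the mirrored section; reads come from the working section
# until its error count crosses a threshold, after which the mirrored
# section takes over as the working section. Names and values are examples.

ERROR_THRESHOLD = 8

class MirroredStack:
    def __init__(self, size):
        self.working = [0] * size
        self.mirrored = [0] * size
        self.read_errors = 0
        self.use_mirror = False

    def write(self, addr, data):
        # Data is always stored in the same location in both sections.
        self.working[addr] = data
        self.mirrored[addr] = data

    def read(self, addr):
        return self.mirrored[addr] if self.use_mirror else self.working[addr]

    def report_read_error(self):
        # Called when the memory controller signals a failed read-back.
        self.read_errors += 1
        if self.read_errors > ERROR_THRESHOLD:
            self.use_mirror = True   # working section is now unusable
```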


Embodiments for memory mirroring within a memory stack are shown in FIGS. 9a-9e.



FIG. 9a is a block diagram illustrating memory mirroring in accordance with one embodiment. As shown in FIG. 9a, a memory device 900 includes interface chip 910 that interfaces memory to an external memory bus. The memory is apportioned into a working memory section 920 and a mirrored memory section 930. During normal operation, write operations occur in both the working memory section 920 and the mirrored memory section 930. However, read operations are only conducted from the working memory section 920.



FIG. 9b is a block diagram illustrating one embodiment for a memory device that enables memory mirroring. For this example, memory device 900 uses mirrored memory section 930 as working memory due to a threshold of errors that occurred in the working memory 920. As such, working memory section 920 is labeled as the unusable working memory section. In operation, interface chip 910 executes write operations to mirrored memory section 930 and optionally to the unusable working memory section 920. However, with memory mirroring enabled, reads occur from mirrored memory section 930.



FIG. 9c is a block diagram illustrating one embodiment for a mirrored memory system with integrated circuit memory stacks. For this embodiment, memory module 915 includes a plurality of integrated circuit memory stacks (902, 903, 904, 905, 906, 907, 908, 909 and 912). As shown in FIG. 9c, each stack is apportioned into a working memory section 953, labeled “W” in FIG. 9c, as well as a mirrored memory section 951, labeled “M” in FIG. 9c. For this example, the working memory section is accessed (i.e., mirrored memory is not enabled).



FIG. 9d is a block diagram illustrating one embodiment for enabling memory mirroring simultaneously across all stacks of a DIMM. For this embodiment, memory module 925 also includes a plurality of integrated circuit memory stacks (921, 922, 923, 924, 926, 927, 928, 929 and 931) apportioned into a mirrored memory section 956 and a working memory section 958. For this embodiment, when memory mirroring is enabled, all chips in the mirrored memory section for each stack in the DIMM are used as the working memory.



FIG. 9e is a block diagram illustrating one embodiment for enabling memory mirroring on a per stack basis. For this embodiment, memory module 935 includes a plurality of integrated circuit memory stacks (941, 942, 943, 944, 945, 946, 947, 948 and 949) apportioned into a mirrored section 961 (labeled “M”) and a working memory section 963 (labeled “W”). For this embodiment, when a predetermined threshold of errors occurs from a portion of the working memory, the mirrored memory in the corresponding stack replaces that working memory. For example, if data errors occurred in DQ[7:0] and exceed a threshold, then mirrored memory section 961 (labeled “Mu”) replaces working memory section 963 (labeled “uW”) for stack 949 only.


In one embodiment, memory RAID within a (p+1)-chip stack may be implemented by storing data across p chips and storing the parity (i.e. the error correction code or information) in a separate chip (i.e. the parity chip). So, when a block of data is written to the stack, the block is broken up into p equal sized portions and each portion of data is written to a separate chip in the stack. That is, the data is “striped” across p chips in the stack.


To illustrate, say that the memory controller writes data block A to the memory stack. The interface chip splits this data block into p equal sized portions (A1, A2, A3, . . . , Ap) and writes A1 to the first chip in the stack, A2 to the second chip, A3 to the third chip, and so on, till Ap is written to the pth chip in the stack. In addition, the parity information for the entire data block A is computed by the interface chip and stored in the parity chip. When the memory controller sends a read request for data block A, the interface chip reads A1, A2, A3, . . . Ap from the first, second, third, . . . , pth chip respectively to form data block A. In addition, it reads the stored parity information for data block A. If the memory controller detects an error in the data read back from any of the chips in the stack, the memory controller may instruct the interface chip to re-create the correct data using the parity information and the correct portions of the data block A.
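A minimal sketch of the striping and parity scheme described above, assuming p data chips plus one parity chip and byte-granularity portions (the function names and the block-splitting convention are assumptions made for illustration):

```python
# Sketch of memory RAID within a (p+1)-chip stack: a data block is split
# into p equal portions, one per data chip, and the bitwise XOR of the
# portions is stored in the parity chip. Any single lost portion can be
# rebuilt from the remaining portions and the parity.

from functools import reduce

def stripe(block: bytes, p: int):
    """Split a block into p equal portions plus a parity portion."""
    assert len(block) % p == 0
    size = len(block) // p
    portions = [block[i * size:(i + 1) * size] for i in range(p)]
    parity = bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*portions))
    return portions, parity

def rebuild(portions, parity, missing_index):
    """Re-create the portion lost from one chip using parity and the rest."""
    survivors = [p for i, p in enumerate(portions) if i != missing_index]
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(parity, *survivors))

portions, parity = stripe(b"ABCDEFGHIJKL", p=3)           # A1, A2, A3
assert rebuild(portions, parity, missing_index=1) == portions[1]
```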


Embodiments for memory RAID within a memory stack are shown in FIGS. 10a and 10b.



FIG. 10a is a block diagram illustrating a stack of memory chips with memory RAID capability during execution of a write operation. Memory device 1000 includes an interface chip 1010 to interface “p+1” memory chips (1015, 1020, 1025, and 1030) to an external memory bus. FIG. 10a shows a write operation of a data block “A”, wherein data for data block “A” is written into memory chips as follows.


A = Ap, . . . , A2, A1;

Parity[A] = Ap ⊕ . . . ⊕ A2 ⊕ A1,

wherein “⊕” is the bitwise exclusive-OR operator.



FIG. 10b is a block diagram illustrating a stack of memory chips with memory RAID capability during a read operation. Memory device 1040 includes interface chip 1050, “p” memory chips (1060, 1070 and 1080) and a parity memory chip 1090. For a read operation, data block “A” consists of A1, A2, . . . Ap and Parity[A], and is read from the respective memory chips as shown in FIG. 10b.


Note that this technique ensures that the data stored in each stack can recover from some types of errors. The memory controller may implement error correction across the data from all the memory stacks on a DIMM, and optionally, across multiple DIMMs.


In other embodiments the bits stored in the extra chip may have alternative functions than parity. As an example, the extra storage or hidden bit field may be used to tag a cacheline with the address of associated cachelines. Thus suppose the last time the memory controller fetched cacheline A, it also then fetched cacheline B (where B is a random address). The memory controller can then write back cacheline A with the address of cacheline B in the hidden bit field. Then the next time the memory controller reads cacheline A, it will also read the data in the hidden bit field and pre-fetch cacheline B. In yet other embodiments, metadata or cache tags or prefetch information may be stored in the hidden bit field.
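One way to picture the hidden-bit-field prefetch idea is the following sketch. It is purely illustrative: the controller structure and method names are assumptions, and for simplicity the tagging is done inside the read path rather than on an explicit write-back, which is a simplification of the description above.

```python
# Sketch of using the extra ("hidden") bit field of each cacheline to store
# the address of the cacheline that was fetched next, and prefetching that
# line the next time the tagged line is read. All names are illustrative.

class HiddenFieldController:
    def __init__(self):
        self.memory = {}          # addr -> (data, hidden_field)
        self.last_fetched = None  # address of the previously read cacheline

    def write(self, addr, data, hint=None):
        _, old_hint = self.memory.get(addr, (None, None))
        self.memory[addr] = (data, hint if hint is not None else old_hint)

    def read(self, addr):
        data, hint = self.memory.get(addr, (None, None))
        # Tag the previously fetched cacheline with this address so that a
        # future read of that line can prefetch this one.
        if self.last_fetched is not None and self.last_fetched in self.memory:
            prev_data, _ = self.memory[self.last_fetched]
            self.memory[self.last_fetched] = (prev_data, addr)
        self.last_fetched = addr
        if hint is not None:
            self.prefetch(hint)
        return data

    def prefetch(self, addr):
        pass   # placeholder: issue a speculative read for the hinted cacheline

ctrl = HiddenFieldController()
ctrl.write(0xA, "cacheline A"); ctrl.write(0xB, "cacheline B")
ctrl.read(0xA); ctrl.read(0xB)   # tags line A's hidden field with address B
ctrl.read(0xA)                   # now also prefetches line B via the hint
```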


With conventional high speed DRAMs, addition of extra memory involves adding extra electrical loads on the high speed memory bus that connects the memory chips to the memory controller, as shown in FIG. 11.



FIG. 11 illustrates conventional impedance loading as a result of adding DRAMs to a high-speed memory bus. For this embodiment, memory controller 1110 accesses memory on high-speed bus 1115. The load of a conventional DRAM on high-speed memory bus 1115 is illustrated in FIG. 11 (1120). To add additional memory capacity in a conventional manner, memory chips are added to the high-speed bus 1115, and consequently additional loads (1125 and 1130) are also added to the high-speed memory bus 1115.


As the memory bus speed increases, the number of chips that can be connected in parallel to the memory bus decreases. This places a limit on the maximum memory capacity. Alternately stated, as the number of parallel chips on the memory bus increases, the speed of the memory bus must decrease. So, we have to accept lower speed (and lower memory performance) in order to achieve high memory capacity.


Separating a high speed DRAM into a high speed interface chip and a low speed memory chip facilitates easy addition of extra memory capacity without negatively impacting the memory bus speed and memory system performance. A single high speed interface chip can be connected to some or all of the lines of a memory bus, thus providing a known and fixed load on the memory bus. Since the other side of the interface chip runs at a lower speed, multiple low speed memory chips can be connected to (the low speed side of) the interface chip without sacrificing performance, thus providing the ability to upgrade memory. In effect, the electrical loading of additional memory chips has been shifted from a high speed bus (which is the case today with conventional high speed DRAMs) to a low speed bus. Adding additional electrical loads on a low speed bus is a much easier problem to solve than adding them on a high speed bus.



FIG. 12 illustrates impedance loading as a result of adding DRAMs to a high-speed memory bus in accordance with one embodiment. For this embodiment, memory controller 1210 accesses a high-speed interface chip 1200 on high-speed memory bus 1215. The load 1220 from the high-speed interface chip is shown in FIG. 12. A low speed bus 1240 couples to high-speed interface chip 1200. The loads of the memory chips (1230 and 1225) are applied to low speed bus 1240. As a result, additional loads are not added to high-speed memory bus 1215.


The number of low speed memory chips that are connected to the interface chip may either be fixed at the time of the manufacture of the memory stack or may be changed after the manufacture. The ability to upgrade and add extra memory capacity after the manufacture of the memory stack is particularly useful in markets such as desktop PCs where the user may not have a clear understanding of the total system memory capacity that is needed by the intended applications. This ability to add additional memory capacity will become very critical when the PC industry adopts DDR3 memories in several major market segments such as desktops and mobile. The reason is that at DDR3 speeds, it is expected that only one DIMM can be supported per memory channel. This means that there is no easy way for the end user to add additional memory to the system after the system has been built and shipped.


In order to provide the ability to increase the memory capacity of a memory stack, a socket may be used to add at least one low speed memory chip. In one aspect, the socket can be on the same side of the printed circuit board (PCB) as the memory stack but be adjacent to the memory stack, wherein a memory stack may consist of at least one high speed interface chip or at least one high speed interface chip and at least one low speed memory chip. FIG. 13 is a block diagram illustrating one embodiment for adding low speed memory chips using a socket. For this embodiment, a printed circuit board (PCB) 1300, such as a DIMM, includes one or more stacks of high speed interface chips. In other embodiments, the stacks also include low-speed memory chips. As shown in FIG. 13, one or more sockets (1310) are mounted on the PCB 1300 adjacent to the stacks 1320. Low-speed memory chips may be added to the sockets to increase the memory capacity of the PCB 1300. Also, for this embodiment, the sockets 1310 are located on the same side of the PCB 1300 as stacks 1320.


In situations where the PCB space is limited or the PCB dimensions must meet some industry standard or customer requirements, the socket for additional low speed memory chips can be designed to be on the same side of the PCB as the memory stack and sit on top of the memory stack, as shown in FIG. 14.



FIG. 14 illustrates a PCB with a socket located on top of a stack. PCB 1400 includes a plurality of stacks (1420). A stack contains a high speed interface chip and optionally, one or more low speed memory chips. For this embodiment, a socket (1410) sits on top of one or more stacks. Memory chips are placed in the socket(s) (1410) to add memory capacity to the PCB (e.g., DIMM). Alternately, the socket for the additional low speed memory chips can be designed to be on the opposite side of the PCB from the memory stack, as shown in FIG. 15.



FIG. 15 illustrates a PCB with a socket located on the opposite side from the stack. For this embodiment, PCB 1500, such as a DIMM, comprises one or more stacks (1520) containing high speed interface chips, and optionally, one or more low speed memory chips. For this embodiment, one or more sockets (1510) are mounted on the opposite side of the PCB from the stack as shown in FIG. 15. The low speed memory chips may be added to the memory stacks one at a time. That is, each stack may have an associated socket. In this case, adding additional capacity to the memory system would involve adding one or more low speed memory chips to each stack in a memory rank (a rank denotes all the memory chips or stacks that respond to a memory access; i.e. all the memory chips or stacks that are enabled by a common Chip Select signal). Note that the same number and density of memory chips must be added to each stack in a rank. An alternative method might be to use a common socket for all the stacks in a rank. In this case, adding additional memory capacity might involve inserting a PCB into the socket, wherein the PCB contains multiple memory chips, and there is at least one memory chip for each stack in the rank. As mentioned above, the same number and density of memory chips must be added to each stack in the rank.


Many different types of sockets can be used. For example, the socket may be a female type and the PCB with the upgrade memory chips may have associated male pins.



FIG. 16 illustrates an upgrade PCB that contains one or more memory chips. For this embodiment, an upgrade PCB 1610 includes one or more memory chips (1620). As shown in FIG. 16, PCB 1610 includes male socket pins 1630. A female receptacle socket 1650 on a DIMM PCB mates with the male socket pins 1630 to upgrade the memory capacity to include additional memory chips (1620). Another approach would be to use a male type socket and an upgrade PCB with associated female receptacles.


Separating a high speed DRAM into a low speed memory chip and a high speed interface chip and stacking multiple memory chips behind an interface chip ensures that the performance penalty associated with stacking multiple chips is minimized. However, this approach requires changes to the architecture of current DRAMs, which in turn increases the time and cost associated with bringing this technology to the marketplace. A cheaper and quicker approach is to stack multiple off-the-shelf high speed DRAM chips behind a buffer chip but at the cost of higher latency.


Current off-the-shelf high speed DRAMs (such as DDR2 SDRAMs) use source synchronous strobe signals as the timing reference for bi-directional transfer of data. In the case of a 4-bit wide DDR or DDR2 SDRAM, a dedicated strobe signal is associated with the four data signals of the DRAM. In the case of an 8-bit wide chip, a dedicated strobe signal is associated with the eight data signals. For 16-bit and 32-bit chips, a dedicated strobe signal is associated with each set of eight data signals. Most memory controllers are designed to accommodate a dedicated strobe signal for every four or eight data lines in the memory channel or bus. Consequently, due to signal integrity and electrical loading considerations, most memory controllers are capable of connecting to only nine or 18 memory chips (in the case of a 72-bit wide memory channel) per rank. This limitation on connectivity means that two 4-bit wide high speed memory chips may be stacked on top of each other on an industry standard DIMM today, but that stacking greater than two chips is difficult. It should be noted that stacking two 4-bit wide chips on top of each other doubles the density of a DIMM. The signal integrity problems associated with more than two DRAMs in a stack make it difficult to increase the density of a DIMM by more than a factor of two today by using stacking techniques.


Using the stacking technique described below, it is possible to increase the density of a DIMM by four, six or eight times by correspondingly stacking four, six or eight DRAMs on top of each other. In order to do this, a buffer chip is located between the external memory channel and the DRAM chips and buffers at least one of the address, control, and data signals to and from the DRAM chips. In one implementation, one buffer chip may be used per stack. In other implementations, more than one buffer chip may be used per stack. In yet other implementations, one buffer chip may be used for a plurality of stacks.



FIG. 17 is a block diagram illustrating one embodiment for stacking memory chips. For this embodiment, buffer chip 1810 is coupled to a host system, typically to the memory controller of the system. Memory device 1800 contains at least two high-speed memory chips 1820 (e.g., DRAMs such as DDR2 SDRAMs) stacked behind the buffer chip 1810 (e.g., the high-speed memory chips 1820 are accessed by buffer chip 1810).


It is clear that the embodiment shown in FIG. 17 is similar to that described previously and illustrated in FIG. 3. The main difference is that in the scheme illustrated in FIG. 3, multiple low speed memory chips were stacked on top of a high speed interface chip. The high speed interface chip presented an industry-standard interface (such as DDR SDRAM or DDR2 SDRAM) to the host system while the interface between the high speed interface chip and the low speed memory chips may be non-standard (i.e. proprietary) or may conform to an industry standard. The scheme illustrated in FIG. 17, on the other hand, stacks multiple high speed, off-the-shelf DRAMs on top of a high speed buffer chip. The buffer chip may or may not perform protocol translation (i.e. the buffer chip may present an industry-standard interface such as DDR2 to both the external memory channel and to the high speed DRAM chips) and may simply isolate the electrical loads represented by the memory chips (i.e. the input parasitics of the memory chips) from the memory channel.


In other implementations the buffer chip may perform protocol translations. For example, the buffer chip may provide translation from DDR3 to DDR2. In this fashion, multiple DDR2 SDRAM chips might appear to the host system as one or more DDR3 SDRAM chips. The buffer chip may also translate from one version of a protocol to another version of the same protocol. As an example of this type of translation, the buffer chip may translate from one set of DDR2 parameters to a different set of DDR2 parameters. In this way the buffer chip might, for example, make one or more DDR2 chips of one type (e.g. 4-4-4 DDR2 SDRAM) appear to the host system as one or more DDR2 chips of a different type (e.g. 6-6-6 DDR2 SDRAM). Note that in other implementations, a buffer chip may be shared by more than one stack. Also, the buffer chip may be external to the stack rather than being part of the stack. More than one buffer chip may also be associated with a stack.
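The parameter-translation idea (e.g. presenting 4-4-4 DDR2 parts as a 6-6-6 DDR2 device) can be pictured as a simple table of physical versus presented timings. The structure below is an assumption used only to illustrate the concept, not an actual buffer-chip implementation.

```python
# Sketch of a buffer chip presenting stacked DRAMs to the host with a
# different (slower) timing grade than the physical parts, e.g. making
# 4-4-4 DDR2 chips appear as a 6-6-6 DDR2 device. Values are examples.

from dataclasses import dataclass

@dataclass(frozen=True)
class Ddr2Timings:
    cl: int     # CAS latency (clocks)
    trcd: int   # RAS-to-CAS delay (clocks)
    trp: int    # row precharge (clocks)

PHYSICAL = Ddr2Timings(cl=4, trcd=4, trp=4)    # the actual stacked parts
PRESENTED = Ddr2Timings(cl=6, trcd=6, trp=6)   # what the host system sees

def added_read_delay_clocks():
    """Extra clocks the buffer chip inserts on the read data path so that
    data appears at the host exactly at the presented CAS latency."""
    return PRESENTED.cl - PHYSICAL.cl

print(added_read_delay_clocks())   # 2
```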


Using a buffer chip to isolate the electrical loads of the high speed DRAMs from the memory channel allows us to stack multiple (typically between two and eight) memory chips on top of a buffer chip. In one embodiment, all the memory chips in a stack may connect to the same address bus. In another embodiment, a plurality of address buses may connect to the memory chips in a stack, wherein each address bus connects to at least one memory chip in the stack. Similarly, the data and strobe signals of all the memory chips in a stack may connect to the same data bus in one embodiment, while in another embodiment, multiple data buses may connect to the data and strobe signals of the memory chips in a stack, wherein each memory chip connects to only one data bus and each data bus connects to at least one memory chip in the stack.


Using a buffer chip in this manner allows a first number of DRAMS to simulate at least one DRAM of a second number. In the context of the present description, the simulation may refer to any simulating, emulating, disguising, and/or the like that results in at least one aspect (e.g. a number in this embodiment, etc.) of the DRAMs appearing different to the system. In different embodiments, the simulation may be electrical in nature, logical in nature, and/or performed in any other desired manner. For instance, in the context of electrical simulation, a number of pins, wires, signals, etc. may be simulated, while, in the context of logical simulation, a particular function may be simulated.


In still additional aspects of the present embodiment, the second number may be more or less than the first number. Still yet, in the latter case, the second number may be one, such that a single DRAM is simulated. Different optional embodiments which may employ various aspects of the present embodiment will be set forth hereinafter.


In still yet other embodiments, the buffer chip may be operable to interface the DRAMs and the system for simulating at least one DRAM with at least one aspect that is different from at least one aspect of at least one of the plurality of the DRAMs. In accordance with various aspects of such embodiment, such aspect may include a signal, a capacity, a timing, a logical interface, etc. Of course, such examples of aspects are set forth for illustrative purposes only and thus should not be construed as limiting, since any aspect associated with one or more of the DRAMs may be simulated differently in the foregoing manner.


In the case of the signal, such signal may include an address signal, control signal, data signal, and/or any other signal, for that matter. For instance, a number of the aforementioned signals may be simulated to appear as fewer or more signals, or even simulated to correspond to a different type. In still other embodiments, multiple signals may be combined to simulate another signal. Even still, a length of time in which a signal is asserted may be simulated to be different.


In the case of capacity, such may refer to a memory capacity (which may or may not be a function of a number of the DRAMs). For example, the buffer chip may be operable for simulating at least one DRAM with a first memory capacity that is greater than (or less than) a second memory capacity of at least one of the DRAMs.
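One concrete way to picture capacity simulation is address decoding inside the buffer chip. The sketch below maps an address in the larger simulated DRAM onto one of several smaller physical DRAMs; the names and the power-of-two sizes are assumptions for illustration.

```python
# Sketch of capacity simulation: the buffer chip presents the host with a
# single DRAM whose capacity is the sum of the stacked chips, and decodes
# each host address into (physical chip, local address). Sizes are examples.

CHIP_CAPACITY = 1 << 26      # capacity of each physical DRAM (example)
NUM_CHIPS = 4                # chips in the stack (example)

def decode(host_addr):
    """Map an address in the simulated (larger) DRAM to a physical chip."""
    assert 0 <= host_addr < CHIP_CAPACITY * NUM_CHIPS
    chip = host_addr // CHIP_CAPACITY
    local_addr = host_addr % CHIP_CAPACITY
    return chip, local_addr

assert decode(0) == (0, 0)
assert decode(3 * CHIP_CAPACITY + 7) == (3, 7)
```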


In the case where the aspect is timing-related, the timing may possibly relate to a latency (e.g. time delay, etc.). In one aspect of the present embodiment, such latency may include a column address strobe (CAS) latency (tCAS), which refers to a latency associated with accessing a column of data. Still yet, the latency may include a row address strobe (RAS) to CAS latency (tRCD), which refers to a latency required between RAS and CAS. Even still, the latency may include a row precharge latency (tRP), which refers to a latency required to terminate access to an open row. Further, the latency may include an active to precharge latency (tRAS), which refers to a latency required to access a certain row of data between a data request and a precharge command. In any case, the buffer chip may be operable for simulating at least one DRAM with a first latency that is longer (or shorter) than a second latency of at least one of the DRAMs. Different optional embodiments which employ various features of the present embodiment will be set forth hereinafter.


In still another embodiment, a buffer chip may be operable to receive a signal from the system and communicate the signal to at least one of the DRAMs after a delay. Again, the signal may refer to an address signal, a command signal (e.g. activate command signal, precharge command signal, a write signal, etc.), a data signal, or any other signal for that matter. In various embodiments, such delay may be fixed or variable.


As an option, the delay may include a cumulative delay associated with any one or more of the aforementioned signals. Even still, the delay may time shift the signal forward and/or back in time (with respect to other signals). Of course, such forward and backward time shift may or may not be equal in magnitude. In one embodiment, this time shifting may be accomplished by utilizing a plurality of delay functions which each apply a different delay to a different signal.


Further, it should be noted that the aforementioned buffer chip may include a register, an advanced memory buffer (AMB), a component positioned on at least one DIMM, a memory controller, etc. Such register may, in various embodiments, include a Joint Electron Device Engineering Council (JEDEC) register, a JEDEC register including one or more functions set forth herein, a register with forwarding, storing, and/or buffering capabilities, etc. Different optional embodiments, which employ various features, will be set forth hereinafter.


In various embodiments, it may be desirable to determine whether the simulated DRAM circuit behaves according to a desired DRAM standard or other design specification. A behavior of many DRAM circuits is specified by the JEDEC standards and it may be desirable, in some embodiments, to exactly simulate a particular JEDEC standard DRAM. The JEDEC standard defines commands that a DRAM circuit must accept and the behavior of the DRAM circuit as a result of such commands. For example, the JEDEC specification for a DDR2 DRAM is known as JESD79-2B.


If it is desired, for example, to determine whether a JEDEC standard is met, the following algorithm may be used. Such algorithm checks, using a set of software verification tools for formal verification of logic, that protocol behavior of the simulated DRAM circuit is the same as a desired standard or other design specification. This formal verification is quite feasible because the DRAM protocol described in a DRAM standard is typically limited to a few protocol commands (e.g. approximately 15 protocol commands in the case of the JEDEC DDR2 specification, for example).


Examples of the aforementioned software verification tools include MAGELLAN supplied by SYNOPSYS, or other software verification tools, such as INCISIVE supplied by CADENCE, verification tools supplied by JASPER, VERIX supplied by REAL INTENT, 0-IN supplied by MENTOR CORPORATION, and others. These software verification tools use written assertions that correspond to the rules established by the DRAM protocol and specification. These written assertions are further included in the code that forms the logic description for the buffer chip. By writing assertions that correspond to the desired behavior of the simulated DRAM circuit, a proof may be constructed that determines whether the desired design requirements are met. In this way, one may test various embodiments for compliance with a standard, multiple standards, or other design specification.


For instance, an assertion may be written that no two DRAM control signals are allowed to be issued to an address, control and clock bus at the same time. Although one may know which of the various buffer chip and DRAM stack configurations and address mappings that have been described herein are suitable, the aforementioned algorithm may allow a designer to prove that the simulated DRAM circuit exactly meets the required standard or other design specification. If, for example, an address mapping that uses a common bus for data and a common bus for address results in a control and clock bus that does not meet a required specification, alternative designs for the buffer chip with other bus arrangements or alternative designs for the interconnect between the buffer chip and other components may be used and tested for compliance with the desired standard or other design specification.
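As a toy illustration of the kind of rule mentioned above (no two DRAM control commands issued on the address, control and clock bus at the same time), the following sketch checks a recorded command trace in software. A real flow would express this as a formal assertion checked by the verification tools named above; the names and trace format here are assumptions, so this is only a conceptual stand-in.

```python
# Toy software check corresponding to the example assertion that no two
# DRAM control commands may be issued on the address/control/clock bus in
# the same clock cycle. Names and the trace format are illustrative.

from collections import Counter

def assert_one_command_per_cycle(trace):
    """trace: iterable of (cycle, command) tuples recorded from simulation."""
    per_cycle = Counter(cycle for cycle, _ in trace)
    offenders = [cycle for cycle, count in per_cycle.items() if count > 1]
    assert not offenders, f"multiple commands issued in cycles {offenders}"

assert_one_command_per_cycle([(0, "ACTIVATE"), (2, "READ"), (5, "PRECHARGE")])
```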


The buffer chip may be designed to have the same footprint (or pin out) as an industry-standard DRAM (e.g. a DDR2 SDRAM footprint). The high speed DRAM chips that are stacked on top of the buffer chip may have either an industry-standard pin out or a non-standard pin out. This allows us to use a standard DIMM PCB since each stack has the same footprint as a single industry-standard DRAM chip. Several companies have developed proprietary ways to stack multiple DRAMs on top of each other (e.g. μZ Ball Stack from Tessera, Inc., High Performance Stakpak from Staktek Holdings, Inc.). The disclosed techniques of stacking multiple memory chips behind either a buffer chip (FIG. 18) or a high speed interface chip (FIG. 3) are compatible with all of these different ways of stacking memory chips and do not require any particular stacking technique.


A double sided DIMM (i.e. a DIMM that has memory chips on both sides of the PCB) is electrically worse than a single sided DIMM, especially if the high speed data and strobe signals have to be routed to two DRAMs, one on each side of the board. This implies that the data signal might have to split into two branches (i.e. a T topology) on the DIMM, each branch terminating at a DRAM on either side of the board. A T topology is typically worse from a signal integrity perspective than a point-to-point topology. Rambus used mirror packages on double sided Rambus In-line Memory Modules (RIMMs) so that the high speed signals had a point-to-point topology rather than a T topology. This has not been widely adopted by the DRAM makers mainly because of inventory concerns. In this disclosure, the buffer chip may be designed with an industry-standard DRAM pin out and a mirrored pin out. The DRAM chips that are stacked behind the buffer chip may have a common industry-standard pin out, irrespective of whether the buffer chip has an industry-standard pin out or a mirrored pin out. This allows us to build double sided DIMMs that are both high speed and high capacity by using mirrored packages and stacking respectively, while still using off-the-shelf DRAM chips. Of course, this requires the use of a non-standard DIMM PCB since the standard DIMM PCBs are all designed to accommodate standard (i.e. non-mirrored) DRAM packages on both sides of the PCB.


In another aspect, the buffer chip may be designed not only to isolate the electrical loads of the stacked memory chips from the memory channel but also have the ability to provide redundancy features such as memory sparing, memory mirroring, and memory RAID. This allows us to build high density DIMMs that not only have the same footprint (i.e. pin compatible) as industry-standard memory modules but also provide a full suite of redundancy features. This capability is important for key segments of the server market such as the blade server segment and the 1U rack server segment, where the number of DIMM slots (or connectors) is constrained by the small form factor of the server motherboard. Many analysts have predicted that these will be the fastest growing segments in the server market.


Memory sparing may be implemented with one or more stacks of p+q high speed memory chips and a buffer chip. The p memory chips of each stack are assigned to the working pool and are available to system resources such as the operating system (OS) and application software. When the memory controller (or optionally the AMB) detects that one of the memory chips in the stack's working pool has, for example, generated an uncorrectable multi-bit error or has generated correctable errors that exceeded a pre-defined threshold, it may choose to replace the faulty chip with one of the q chips that have been placed in the spare pool. As discussed previously, the memory controller may choose to do the sparing across all the stacks in a rank even though only one working chip in one specific stack triggered the error condition, or may choose to confine the sparing operation to only the specific stack that triggered the error condition. The former method is simpler to implement from the memory controller's perspective while the latter method is more fault-tolerant. Memory sparing was illustrated in FIG. 8 for stacks built with a high speed interface chip and multiple low speed DRAMs. The same method is applicable to stacks built with high speed, off-the-shelf DRAMs and a buffer chip. In other implementations, the buffer chip may not be part of the stack. In yet other implementations, a buffer chip may be used with a plurality of stacks of memory chips or a plurality of buffer chips may be used by a single stack of memory chips.
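The sparing policy described above can be summarized with a small Python sketch. It assumes, purely for illustration, a per-chip correctable-error counter and a simple threshold; the class and method names are hypothetical, and the data-copy step that would accompany a real replacement is omitted.

```python
# Minimal sketch of the sparing decision for one stack of p + q chips: p chips in
# the working pool, q chips in the spare pool. Chip identifiers and the threshold
# value are illustrative placeholders.

class StackSparing:
    def __init__(self, working, spare, correctable_threshold=16):
        self.working = list(working)          # p chips visible to system resources
        self.spare = list(spare)              # q chips held in reserve
        self.correctable_threshold = correctable_threshold
        self.correctable_counts = {chip: 0 for chip in self.working}

    def report_error(self, chip, uncorrectable=False):
        """Called when a read from `chip` returns an error; returns the spare used."""
        if uncorrectable:
            return self._replace(chip)
        self.correctable_counts[chip] += 1
        if self.correctable_counts[chip] > self.correctable_threshold:
            return self._replace(chip)
        return None

    def _replace(self, faulty_chip):
        if not self.spare:
            raise RuntimeError("no spare chips left in this stack")
        replacement = self.spare.pop(0)
        self.working[self.working.index(faulty_chip)] = replacement
        self.correctable_counts[replacement] = 0   # data copy/rebuild not shown
        return replacement

stack = StackSparing(working=["A", "B", "C"], spare=["S0"])
assert stack.report_error("B", uncorrectable=True) == "S0"
```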


Memory mirroring can be implemented by dividing the high speed memory chips in a stack into two equal sets: a working set and a mirrored set. When the memory controller writes data to the memory, the buffer chip writes the data to the same location in both the working set and the mirrored set. During reads, the buffer chip returns the data from the working set. If the returned data had an uncorrectable error condition or if the cumulative correctable errors in the returned data exceeded a pre-defined threshold, the memory controller may instruct the buffer chip to henceforth return data (on memory reads) from the mirrored set until the error condition in the working set has been rectified. The buffer chip may continue to send writes to both the working set and the mirrored set or may confine the writes to just the mirrored set. As discussed before, the memory mirroring operation may be triggered simultaneously on all the memory stacks in a rank or may be done on a per-stack basis as and when necessary. The former method is easier to implement while the latter method provides more fault tolerance. Memory mirroring was illustrated in FIG. 9 for stacks built with a high speed interface chip and multiple low speed memory chips. The same method is applicable to stacks built with high speed, off-the-shelf DRAMs and a buffer chip. In other implementations, the buffer chip may not be part of the stack. In yet other implementations, a buffer chip may be used with a plurality of stacks of memory chips or a plurality of buffer chips may be used by a single stack of memory chips.
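A minimal Python sketch of this mirroring behavior follows, assuming the working and mirrored sets can be modeled as simple address-to-data maps; the class name and the switch-over hook are illustrative only.

```python
# Minimal sketch of the mirroring behavior described above: writes go to both the
# working and mirrored sets, reads are served from whichever set is currently
# active. The dict-backed "chips" are an illustration, not a hardware model.

class MirroredStack:
    def __init__(self):
        self.working = {}     # address -> data, working set
        self.mirrored = {}    # address -> data, mirrored set
        self.use_mirrored = False

    def write(self, address, data):
        # The buffer chip writes the same location in both sets.
        self.working[address] = data
        self.mirrored[address] = data

    def read(self, address):
        source = self.mirrored if self.use_mirrored else self.working
        return source[address]

    def switch_to_mirrored(self):
        # Triggered by the memory controller (in-band or sideband) or by the
        # buffer chip's own error tracking, as discussed in the text.
        self.use_mirrored = True

stack = MirroredStack()
stack.write(0x100, b"\x5a")
stack.switch_to_mirrored()
assert stack.read(0x100) == b"\x5a"
```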


Implementing memory mirroring within a stack has one drawback, namely that it does not protect against the failure of the buffer chip associated with a stack. In this case, the data in the memory is mirrored in two different memory chips in a stack but both these chips have to communicate to the host system through the common associated buffer chip. So, if the buffer chip in a stack were to fail, the mirrored memory capability is of no use. One solution to this problem is to group all the chips in the working set into one stack and group all the chips in the mirrored set into another stack. The working stack may now be on one side of the DIMM PCB while the mirrored stack may be on the other side of the DIMM PCB. So, if the buffer chip in the working stack were to fail now, the memory controller may switch to the mirrored stack on the other side of the PCB.


The switch from the working set to the mirrored set may be triggered by the memory controller (or AMB) sending an in-band or sideband signal to the buffers in the respective stacks. Alternately, logic may be added to the buffers so that the buffers themselves have the ability to switch from the working set to the mirrored set. For example, some of the server memory controller hubs (MCH) from Intel will read a memory location for a second time if the MCH detects an uncorrectable error on the first read of that memory location. The buffer chip may be designed to keep track of the addresses of the last m reads and to compare the address of the current read with the stored m addresses. If it detects a match, the most likely scenario is that the MCH detected an uncorrectable error in the data read back and is attempting a second read to the memory location in question. The buffer chip may now read the contents of the memory location from the mirrored set since it knows that the contents in the corresponding location in the working set had an error. The buffer chip may also be designed to keep track of the number of such events (i.e. a second read to a location due to an uncorrectable error) over some period of time. If the number of these events exceeded a certain threshold within a sliding time window, then the buffer chip may permanently switch to the mirrored set and notify an external device that the working set was being disabled.
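The address-tracking heuristic described above can be sketched as follows. The values of m, the event threshold, and the sliding window length are arbitrary placeholders, and the notification to an external device is reduced to a flag.

```python
# Minimal sketch of the re-read detection heuristic: the buffer chip remembers the
# last m read addresses, treats a repeated address as a likely second read issued
# after an uncorrectable error, and permanently switches to the mirrored set once
# such events exceed a threshold within a sliding time window.

from collections import deque

class RereadDetector:
    def __init__(self, m=8, threshold=3, window=1_000_000):
        self.recent_reads = deque(maxlen=m)   # addresses of the last m reads
        self.event_times = deque()            # times of detected re-read events
        self.threshold = threshold
        self.window = window                  # e.g. in clock cycles
        self.use_mirrored = False

    def on_read(self, address, now):
        """Returns True if this read should be served from the mirrored set."""
        if address in self.recent_reads and not self.use_mirrored:
            # Likely a second read to the same location after an uncorrectable error.
            self.event_times.append(now)
            while self.event_times and now - self.event_times[0] > self.window:
                self.event_times.popleft()
            if len(self.event_times) >= self.threshold:
                self.use_mirrored = True      # and notify an external device
            self.recent_reads.append(address)
            return True
        self.recent_reads.append(address)
        return self.use_mirrored
```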


Implementing memory RAID within a stack that consists of high speed, off-the-shelf DRAMs is more difficult than implementing it within a stack that consists of non-standard DRAMs. The reason is that current high speed DRAMs have a minimum burst length that requires a certain amount of information to be read from or written to the DRAM for each read or write access respectively. For example, an n-bit wide DDR2 SDRAM has a minimum burst length of 4, which means that for every read or write operation, 4n bits must be read from or written to the DRAM. For the purpose of illustration, the following discussion will assume that all the DRAMs that are used to build stacks are 8-bit wide DDR2 SDRAMs, and that each stack has a dedicated buffer chip.


Given that 8-bit wide DDR2 SDRAMs are used to build the stacks, eight stacks will be needed per memory rank (ignoring the ninth stack needed for ECC). Since DDR2 SDRAMs have a minimum burst length of four, a single read or write operation involves transferring four bytes of data between the memory controller and a stack. This means that the memory controller must transfer a minimum of 32 bytes of data to a memory rank (four bytes per stack × eight stacks) for each read or write operation. Modern CPUs typically use a 64-byte cacheline as the basic unit of data transfer to and from the system memory. This implies that eight bytes of data may be transferred between the memory controller and each stack for a read or write operation.
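The arithmetic above can be double-checked with a few lines of Python; the quantities are those stated in the text (8-bit wide chips, minimum burst length of four, eight data stacks per rank, 64-byte cacheline).

```python
# Quick arithmetic check of the figures stated above.
chip_width_bytes = 1          # 8-bit wide DDR2 SDRAM
min_burst_length = 4
stacks_per_rank = 8           # ignoring the ninth ECC stack

min_bytes_per_rank_access = chip_width_bytes * min_burst_length * stacks_per_rank
assert min_bytes_per_rank_access == 32     # minimum transfer per rank access

cacheline_bytes = 64
bytes_per_stack_per_cacheline = cacheline_bytes // stacks_per_rank
assert bytes_per_stack_per_cacheline == 8  # eight bytes per stack per cacheline
```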


In order to implement memory RAID within a stack, we may build a stack that contains three 8-bit wide DDR2 SDRAMs and a buffer chip. Let us designate the three DRAMs in a stack as chips A, B, and C. Consider the case of a memory write operation where the memory controller performs a burst write of eight bytes to each stack in the rank (i.e. the memory controller sends 64 bytes of data, one cacheline, to the entire rank). The buffer chip may be designed such that it writes the first four bytes (say, bytes Z0, Z1, Z2, and Z3) to the specified memory locations (say, addresses x1, x2, x3, and x4) in chip A and writes the second four bytes (say, bytes Z4, Z5, Z6, and Z7) to the same locations (i.e. addresses x1, x2, x3, and x4) in chip B. The buffer chip may also be designed to store the parity information corresponding to these eight bytes in the same locations in chip C. That is, the buffer chip will store P[0,4]=Z0 ^ Z4 in address x1 in chip C, P[1,5]=Z1 ^ Z5 in address x2 in chip C, P[2,6]=Z2 ^ Z6 in address x3 in chip C, and P[3,7]=Z3 ^ Z7 in address x4 in chip C, where ^ is the bitwise exclusive-OR operator. So, for example, the least significant bit (bit 0) of P[0,4] is the exclusive-OR of the least significant bits of Z0 and Z4, bit 1 of P[0,4] is the exclusive-OR of bit 1 of Z0 and bit 1 of Z4, and so on. Note that other striping methods may also be used. For example, the buffer chip may store bytes Z0, Z2, Z4, and Z6 in chip A and bytes Z1, Z3, Z5, and Z7 in chip B.
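The write-path striping and parity computation just described can be sketched in Python as follows. The chips are modeled as address-to-byte maps and the byte values are arbitrary; this is an illustration of the P[i,i+4]=Zi ^ Z(i+4) relationship, not an implementation of the buffer chip.

```python
# Minimal sketch of the write-path striping for a stack of chips A, B, and C
# behind a buffer chip: Z0..Z3 go to chip A, Z4..Z7 to chip B, parity to chip C.

def raid_write(chip_a, chip_b, chip_c, addresses, data):
    """data: the 8 bytes (Z0..Z7) from the controller; addresses: x1..x4."""
    assert len(data) == 8 and len(addresses) == 4
    first, second = data[:4], data[4:]         # Z0..Z3 and Z4..Z7
    for addr, za, zb in zip(addresses, first, second):
        chip_a[addr] = za                      # e.g. Z0 at x1 in chip A
        chip_b[addr] = zb                      # e.g. Z4 at x1 in chip B
        chip_c[addr] = za ^ zb                 # parity, e.g. P[0,4] = Z0 ^ Z4

chip_a, chip_b, chip_c = {}, {}, {}
raid_write(chip_a, chip_b, chip_c,
           addresses=[0x1, 0x2, 0x3, 0x4],
           data=[0x10, 0x11, 0x12, 0x13, 0x20, 0x21, 0x22, 0x23])
assert chip_c[0x1] == 0x10 ^ 0x20              # P[0,4] stored at x1 in chip C
```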


Now, when the memory controller reads the same cacheline back, the buffer chip will read locations x1, x2, x3, and x4 in both chips A and B and will return bytes Z0, Z1, Z2, and Z3 from chip A and then bytes Z4, Z5, Z6, and Z7 from chip B. Now let us assume that the memory controller detected a multi-bit error in byte Z1. As mentioned previously, some of the Intel server MCHs will re-read the address location when they detect an uncorrectable error in the data that was returned in response to the initial read command. So, when the memory controller re-reads the address location corresponding to byte Z1, the buffer chip may be designed to detect the second read and return P[1,5] ^ Z5 rather than Z1 since it knows that the memory controller detected an uncorrectable error in Z1.


Note that the behavior of the memory controller after the detection of an uncorrectable error will influence the error recovery behavior of the buffer chip. For example, if the memory controller reads the entire cacheline back in the event of an uncorrectable error but requests the burst to start with the bad byte, then the buffer chip may be designed to look at the appropriate column addresses to determine which byte corresponds to the uncorrectable error. For example, say that byte Z1 corresponds to the uncorrectable error and that the memory controller requests that the stack send the eight bytes (Z0 through Z7) back to the controller starting with byte Z1. In other words, the memory controller asks the stack to send the eight bytes back in the following order: Z1, Z2, Z3, Z0, Z5, Z6, Z7, and Z4 (i.e. burst length=8, burst type=sequential, and starting column address A[2:0]=001b). The buffer chip may be designed to recognize that this indicates that byte Z1 corresponds to the uncorrectable error and return P[1,5] ^ Z5, Z2, Z3, Z0, Z5, Z6, Z7, and Z4. Alternately, the buffer chip may be designed to return P[1,5] ^ Z5, P[2,6] ^ Z6, P[3,7] ^ Z7, P[0,4] ^ Z4, Z5, Z6, Z7, and Z4 if it is desired to correct not only an uncorrectable error in any given byte but also the case where an entire chip (in this case, chip A) fails. If, on the other hand, the memory controller reads the entire cacheline in the same order both during a normal read operation and during a second read caused by an uncorrectable error, then the controller has to indicate to the buffer chip which byte or chip corresponds to the uncorrectable error either through an in-band signal or through a sideband signal before or during the time it performs the second read.
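Under the assumption stated above that the controller re-reads the cacheline starting at the bad byte (burst length 8, sequential burst type), the recovery path can be sketched as follows. The burst-order helper and the simple index-based data layout are illustrative simplifications of the Z0..Z7 / P[i,i+4] example; they are not a complete DDR2 model.

```python
# Minimal sketch of recovering one bad byte from parity when the controller
# re-reads the cacheline starting at the bad byte (burst length 8, sequential).

def ddr2_seq_burst_order(start):
    """Column order for a sequential burst of 8 starting at column `start`."""
    group = (start // 4) * 4
    first = [group + ((start + i) % 4) for i in range(4)]
    other = [(group ^ 4) + ((start + i) % 4) for i in range(4)]
    return first + other

def recover_burst(chip_a, chip_b, chip_c, start):
    """chip_a[i] holds Zi, chip_b[i] holds Z(i+4), chip_c[i] holds P[i, i+4]."""
    data = {i: chip_a[i] for i in range(4)}
    data.update({i + 4: chip_b[i] for i in range(4)})
    bad = start                                  # the byte the controller re-read
    pair = bad ^ 4                               # its RAID partner (Zi <-> Zi+4)
    data[bad] = chip_c[bad % 4] ^ data[pair]     # rebuild the bad byte from parity
    return [data[i] for i in ddr2_seq_burst_order(start)]

# Example: Z1 is corrupted in chip A; the re-read starting at column 1 still
# returns the correct Z1 as the first byte of the burst.
Z = [0x10, 0x11, 0x12, 0x13, 0x20, 0x21, 0x22, 0x23]
A, B = Z[:4], Z[4:]
C = [a ^ b for a, b in zip(A, B)]
A[1] = 0xFF                                      # simulate the uncorrectable error
assert ddr2_seq_burst_order(1) == [1, 2, 3, 0, 5, 6, 7, 4]
assert recover_burst(A, B, C, start=1)[0] == Z[1]
```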


However, it may be that the memory controller does a 64-byte cacheline read or write in two separate bursts of length 4 (rather than a single burst of length 8). This may also be the case when an I/O device initiates the memory access, or if the 64-byte cacheline is stored in parallel in two DIMMs. In such a case, the memory RAID implementation might require the use of the DM (Data Mask) signal. Again, consider the case of a 3-chip stack that is built with three 8-bit wide DDR2 SDRAMs and a buffer chip. Memory RAID requires that the 4 bytes of data that are written to a stack be striped across the two memory chips (i.e. 2 bytes be written to each of the memory chips) while the parity is computed and stored in the third memory chip. However, the DDR2 SDRAMs have a minimum burst length of 4, meaning that the minimum amount of data that they are designed to transfer is 4 bytes. In order to satisfy both these requirements, the buffer chip may be designed to use the DM signal to steer two of the four bytes in a burst to chip A and steer the other two bytes in a burst to chip B. This concept is best illustrated by the example below.


Say that the memory controller sends bytes Z0, Z1, Z2, and Z3 to a particular stack when it does a 32-byte write to a memory rank, and that the associated addresses are x1, x2, x3, and x4. The stack in this example is composed of three 8-bit DDR2 SDRAMs (chips A, B, and C) and a buffer chip. The buffer chip may be designed to generate a write command to locations x1, x2, x3, and x4 on all the three chips A, B, and C, and perform the following actions:


    • Write Z0 and Z2 to chip A and mask the writes of Z1 and Z3 to chip A

    • Write Z1 and Z3 to chip B and mask the writes of Z0 and Z2 to chip B
    • Write (Z0 ^ Z1) and (Z2 ^ Z3) to chip C and mask the other two writes


This of course requires that the buffer chip have the capability to do a simple address translation so as to hide the implementation details of the memory RAID from the memory controller. FIG. 18 is a timing diagram for implementing memory RAID using a data mask (DM) signal in a three-chip stack composed of 8-bit wide DDR2 SDRAMs. The first signal of the timing diagram of FIG. 18 represents data sent to the stack from the host system. The second and third signals, labeled DQ_A and DM_A, represent the data and data mask signals sent by the buffer chip to chip A during a write operation to chip A. Similarly, signals DQ_B and DM_B represent signals sent by the buffer chip to chip B during a write operation to chip B, and signals DQ_C and DM_C represent signals sent by the buffer chip to chip C during a write operation to chip C.


Now when the memory controller reads back bytes Z0, Z1, Z2, and Z3 from the stack, the buffer chip will read locations x1, x2, x3, and x4 from both chips A and B, select the appropriate two bytes from the four bytes returned by each chip, re-construct the original data, and send it back to the memory controller. It should be noted that the data striping across the two chips may be done in other ways. For example, bytes Z0 and Z1 may be written to chip A and bytes Z2 and Z3 may be written to chip B. Also, this concept may be extended to stacks that are built with a different number of chips. For example, in the case of a stack built with five 8-bit wide DDR2 SDRAM chips and a buffer chip, a 4-byte burst to a stack may be striped across four chips by writing one byte to each chip and using the DM signal to mask the remaining three writes in the burst. The parity information may be stored in the fifth chip, again using the associated DM signal.
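The write path of this DM-based striping (the three masked writes listed above) can be sketched in Python as follows. The mask convention (1 = write suppressed) matches the DDR2 data mask behavior, while the function names and byte values are illustrative; the corresponding read-back selection is not shown.

```python
# Minimal sketch of DM-based striping of a 4-byte burst (Z0..Z3) into chips A and
# B, with parity in chip C. The buffer chip drives the full 4-byte burst to every
# chip and uses the per-byte masks to steer which bytes are actually written.

def dm_striped_write(burst):
    """burst: [Z0, Z1, Z2, Z3]. Returns (data, mask) pairs for chips A, B, C."""
    z0, z1, z2, z3 = burst
    chip_a = (burst, [0, 1, 0, 1])                 # keep Z0, Z2; mask Z1, Z3
    chip_b = (burst, [1, 0, 1, 0])                 # keep Z1, Z3; mask Z0, Z2
    parity = [z0 ^ z1, 0, z2 ^ z3, 0]              # parity in the unmasked slots
    chip_c = (parity, [0, 1, 0, 1])
    return chip_a, chip_b, chip_c

def apply_write(chip, data, mask):
    """Model of a DRAM applying a masked burst write starting at column 0."""
    for col, (byte, m) in enumerate(zip(data, mask)):
        if m == 0:
            chip[col] = byte

A, B, C = {}, {}, {}
(a_d, a_m), (b_d, b_m), (c_d, c_m) = dm_striped_write([0x10, 0x11, 0x12, 0x13])
apply_write(A, a_d, a_m)
apply_write(B, b_d, b_m)
apply_write(C, c_d, c_m)
assert A == {0: 0x10, 2: 0x12} and B == {1: 0x11, 3: 0x13}
assert C == {0: 0x10 ^ 0x11, 2: 0x12 ^ 0x13}
```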


As described previously, when the memory controller (or AMB) detects an uncorrectable error in the data read back, the buffer chip may be designed to re-construct the bad data using the data in the other chips as well as the parity information. The buffer chip may perform this operation either when explicitly instructed to do so by the memory controller or by monitoring the read requests sent by the memory controller and detecting multiple reads to the same address within some period of time, or by some other means.


Re-constructing bad data using the data from the other memory chips in the memory RAID and the parity data will require some additional amount of time. That is, the memory read latency for the case where the buffer chip has to re-construct the bad data will most likely be higher than the normal read latency. This may be accommodated in multiple ways. Say that the normal read latency is 4 clock cycles while the read latency when the buffer chip has to re-create the bad data is 5 clock cycles. The memory controller may simply choose to use 5 clock cycles as the read latency for all read operations. Alternately, the controller may default to 4 clock cycles for all normal read operations but switch to 5 clock cycles when the buffer chip has to re-create the data. Another option would be for the buffer chip to stall the memory controller when it has to re-create some part of the data. These and other methods fall within the scope of this disclosure.


As discussed above, we can implement memory RAID using a combination of memory chips and a buffer chip in a stack. This provides us with the ability to correct multi-bit errors either within a single memory chip or across multiple memory chips in a rank. However, we can create an additional level of redundancy by adding additional memory chips to the stack. That is, if the memory RAID is implemented across n chips (where the data is striped across n−1 chips and the parity is stored in the nth chip), we can create another level of redundancy by building the stack with at least n+1 memory chips. For the purpose of illustration, assume that we wish to stripe the data across two memory chips (say, chips A and B). We need a third chip (say, chip C) to store the parity information. By adding a fourth chip (chip D) to the stack, we can create an additional level of redundancy. Say that chip B has either failed or is generating an unacceptable level of uncorrectable errors. The buffer chip in the stack may re-construct the data in chip B using the data in chip A and the parity information in chip C in the same manner that is used in well-known disk RAID systems. Obviously, the performance of the memory system may be degraded (due to the possibly higher latency associated with re-creating the data in chip B) until chip B is effectively replaced. However, since we have an unused memory chip in the stack (chip D), we may substitute it for chip B until the next maintenance operation. The buffer chip may be designed to re-create the data in chip B (using the data in chip A and the parity information in chip C) and write it to chip D. Once this is completed, chip B may be discarded (i.e. no longer used by the buffer chip). The re-creation of the data in chip B and the transfer of the re-created data to chip D may be made to run in the background (i.e. during the cycles when the rank containing chips A, B, C, and D is not used) or may be performed during cycles that have been explicitly scheduled by the memory controller for the data recovery operation.
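This rebuild-into-a-spare operation can be sketched in Python as follows, with the chips modeled as address-to-byte maps. The function name and data values are illustrative, and the background scheduling aspect is not modeled.

```python
# Minimal sketch of the rebuild described above: data striped across chips A and
# B with parity in chip C; if chip B fails, its contents are re-created from A
# and C and written to the spare chip D, which then takes B's place.

def rebuild_failed_chip(chip_a, chip_c, chip_d, addresses):
    """Re-create chip B's data (A XOR parity) and copy it into spare chip D."""
    for addr in addresses:
        chip_d[addr] = chip_a[addr] ^ chip_c[addr]   # Zb = Za ^ (Za ^ Zb)
    return chip_d

A = {0x1: 0x10, 0x2: 0x11}
B = {0x1: 0x20, 0x2: 0x21}                           # failed chip (now unreadable)
C = {addr: A[addr] ^ B[addr] for addr in A}          # parity written earlier
D = rebuild_failed_chip(A, C, {}, addresses=A.keys())
assert D == B                                        # spare now holds the lost data
```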


The logic necessary to implement the higher levels of memory protection such as memory sparing, memory mirroring, and memory RAID may be embedded in a buffer chip associated with each stack or may be implemented in a “more global” buffer chip (i.e. a buffer chip that buffers more data bits than is associated with an individual stack). For example, this logic may be embedded in the AMB. This variation is also covered by this disclosure.


The method of adding additional low speed memory chips behind a high speed interface by means of a socket was disclosed above. The same concepts (see FIGS. 12, 13, 14, and 15) are applicable to stacking high speed, off-the-shelf DRAM chips behind a buffer chip. This variation is also covered by this invention.


Although the present invention has been described in terms of specific exemplary embodiments, it will be appreciated that various modifications and alterations might be made by those skilled in the art without departing from the spirit and scope of the invention.

Claims
  • 1. A memory device comprising: a plurality of dynamic random access memory (“DRAM”) integrated circuits stacked in a vertical direction; and a buffer integrated circuit for providing an interface between the plurality of DRAM integrated circuits and a memory bus by buffering at least one of address, control or data signals so as to isolate electrical loads of the plurality of DRAM integrated circuits from the memory bus, wherein the buffer integrated circuit is configured to perform conversion between signal timing of a first protocol at the memory bus and signal timing of a second protocol for accessing at least one of the plurality of DRAM integrated circuits, wherein the buffer integrated circuit is configured to track addresses of “m” previous reads and compare an address of a current read with the tracked addresses of the “m” previous reads.
  • 2. The memory device as set forth in claim 1, wherein the buffer integrated circuit is configured to provide an interface for a plurality of stacks of the DRAM integrated circuits.
  • 3. The memory device as set forth in claim 1, wherein said stacked DRAM integrated circuits comprise p+q DRAM integrated circuits, wherein “p” DRAM integrated circuits comprise a number of DRAM integrated circuits used as a working pool of memory integrated circuits, and wherein “q” DRAM integrated circuits comprise a number of DRAM integrated circuits used as a spare pool of memory integrated circuits, wherein “p” and “q” comprise integer values.
  • 4. The memory device as set forth in claim 3, wherein the buffer integrated circuit is configured to replace at least one DRAM integrated circuit from the working pool of memory integrated circuits with at least one DRAM integrated circuit from the spare pool of memory integrated circuits.
  • 5. The memory device as set forth in claim 3, wherein the buffer integrated circuit is configured to determine whether a pre-defined number of errors occurred in the working memory integrated circuits and configured to replace at least one DRAM integrated circuit from the working pool of memory integrated circuits with at least one DRAM integrated circuit from the spare pool of memory integrated circuits.
  • 6. The memory device as set forth in claim 1, further comprising a socket, coupled to the stack, and configured to accommodate adding at least one additional DRAM integrated circuit to said stack.
  • 7. The memory device of claim 1 wherein the buffer integrated circuit is configured to perform conversion between signal timing such that a signal of the second protocol is asserted for a duration that is different from a duration of the corresponding signal of the first protocol, wherein the signal of the second protocol is selected from the group consisting of address signal, control signal, and data signal.
  • 8. The memory device of claim 1, wherein the buffer integrated circuit is further configured to perform conversion between signal types of the first protocol and the second protocol.
  • 9. The memory device of claim 1, wherein, the buffer chip is configured such that, if the buffer chip detects a match between the address of the current read and one of the addresses of the “m” previous reads, the buffer chip reads contents of a memory location from a mirrored memory set instead of a working memory set.
  • 10. A method for controlling a memory device comprising a plurality of dynamic random access memory (“DRAM”) integrated circuits, the method comprising: providing an interface between the plurality of DRAM integrated circuits and a memory bus by buffering at least one of address, control or data signals so as to isolate electrical loads of the plurality of DRAM integrated circuits from the memory bus; and performing conversion between signal timing of a first protocol at the memory bus and signal timing of a second protocol for accessing at least one of the plurality of DRAM integrated circuits; tracking addresses of “m” previous reads; and comparing the address of a current read with the tracked addresses of the “m” previous reads.
  • 11. The method of claim 10 wherein performing conversion between signal timing further includes asserting a signal of the second protocol for a duration that is different from a duration of a corresponding signal of the first protocol, wherein the signal of the second protocol is selected from the group consisting of address signal, control signal, and data signal.
  • 12. The method of claim 10 further comprising performing conversion between signal types of the first protocol and the second protocol.
  • 13. The method of claim 10 further comprising performing conversion between a number of signals of the first protocol and a number of signals of the second protocol.
  • 14. The method of claim 10 further comprising: reading contents of a memory location from a mirrored memory set in response to detecting a match between the address of the current read and one of the addresses of the “m” previous reads.
  • 15. A memory device comprising: a plurality of dynamic random access memory (“DRAM”) integrated circuits stacked in a vertical direction; and a buffer integrated circuit for providing an interface between the plurality of DRAM integrated circuits and a memory bus by buffering at least one of address, control or data signals so as to isolate electrical loads of the plurality of DRAM integrated circuits from the memory bus, wherein the buffer integrated circuit is configured to perform conversion between signal timing of a first protocol at the memory bus and signal timing of a second protocol for accessing at least one of the plurality of DRAM integrated circuits, wherein the buffer integrated circuit is configured to track errors and to execute a read operation to a mirrored memory set based on one or more of the tracked errors.
  • 16. The memory device as set forth in claim 15 wherein the buffer integrated circuit is configured to keep track of a number of the errors that occur over a period of time and, if the number of errors that occur over the period of time exceeds a threshold, the buffer integrated circuit switches from a working memory set to the mirrored memory set.
  • 17. The memory device as set forth in claim 16 wherein the buffer integrated circuit is configured such that, upon switching from the working memory set to the mirrored memory set, the buffer integrated circuit notifies an external device that the working memory set is being disabled.
  • 18. The memory device as set forth in claim 16 wherein keeping track of the number of errors includes keeping track of a number of second reads from a memory controller to a memory location in the working memory set due to uncorrectable error.
  • 19. A memory device comprising: a plurality of dynamic random access memory (“DRAM”) integrated circuits stacked in a vertical direction; and a buffer integrated circuit for providing an interface between the plurality of DRAM integrated circuits and a memory bus by buffering at least one of address, control or data signals so as to isolate electrical loads of the plurality of DRAM integrated circuits from the memory bus, wherein the buffer integrated circuit is configured to perform conversion between signal timing of a first protocol at the memory bus and signal timing of a second protocol for accessing at least one of the plurality of DRAM integrated circuits, wherein said stacked DRAM integrated circuits comprise p+q DRAM integrated circuits, wherein “p” DRAM integrated circuits comprise a number of DRAM integrated circuits used as a working pool of memory integrated circuits, and wherein “q” DRAM integrated circuits comprise a number of DRAM integrated circuits used as a mirrored pool of memory integrated circuits, wherein “p” and “q” comprise integer values, wherein the buffer integrated circuit is configured to track addresses of “m” previous reads, compare an address of a current read with the tracked addresses of the “m” previous reads, and if the buffer integrated circuit detects a match between the address of the current read and one or more of the tracked addresses of the “m” previous reads, read contents from a memory location in the mirrored pool that corresponds to the address of the current read.
  • 20. The memory device as set forth in claim 19, wherein the buffer integrated circuit is configured to keep track of how many times the buffer integrated circuit detects a match between the address of the current read and one or more of the tracked addresses of the “m” previous reads over a period of time.
  • 21. The memory device as set forth in claim 20, wherein the buffer integrated circuit is configured such that, if the number of times that the buffer detects a match exceeds a threshold within a time window, then the buffer chip permanently switches to the mirrored pool of memory integrated circuits.
  • 22. The memory device as set forth in claim 21, wherein the buffer integrated circuit is configured, such that when the buffer integrated chip permanently switches to the mirrored pool of memory integrated circuits, the buffer integrated circuit notifies an external device that the working set has been disabled.
CROSS-REFERENCES TO RELATED APPLICATIONS

This patent application claims the benefit of United States Provisional Patent Application entitled “Methods and Apparatus of Stacking DRAMs,” Ser. No. 60/713,815, filed on Sep. 2, 2005.

20100257304 Rajan et al. Oct 2010 A1
20100271888 Rajan Oct 2010 A1
20100281280 Rajan et al. Nov 2010 A1
Foreign Referenced Citations (34)
Number Date Country
102004051345 May 2006 DE
102004053316 May 2006 DE
102005036528 Feb 2007 DE
0644547 Mar 1995 EP
62121978 Jun 1987 JP
01171047 Jul 1989 JP
03-029357 Feb 1991 JP
03276487 Dec 1991 JP
03286234 Dec 1991 JP
2005-298192 Nov 1993 JP
07-141870 Jun 1995 JP
08077097 Mar 1996 JP
11-149775 Jun 1999 JP
2002025255 Jan 2002 JP
3304893 May 2002 JP
2004-327474 Nov 2004 JP
2006236388 Sep 2006 JP
1020040062717 Jul 2004 KR
2005120344 Dec 2005 KR
PCT-US94-09186 Feb 1995 WO
WO 9505676 Feb 1995 WO
WO9725674 Jul 1997 WO
WO9900734 Jan 1999 WO
WO0045270 Aug 2000 WO
WO0190900 Nov 2001 WO
WO0197160 Dec 2001 WO
WO2004044754 May 2004 WO
WO2004051645 Jun 2004 WO
WO2006072040 Jul 2006 WO
WO2007002324 Jan 2007 WO
WO2007028109 Mar 2007 WO
WO 2007038225 Apr 2007 WO
WO2007095080 Aug 2007 WO
WO 2008063251 May 2008 WO
Non-Patent Literature Citations (288)
Entry
Wu et al., “eNVy: A Non-Volatile, Main Memory Storage System,” to appear in ASPLOS VI.
“Using Two Chip Selects to Enable Quad Rank,” IP.com Prior Art Database, copyright IP.com, Inc. 2004.
Skerlj et al., “Buffer Device for Memory Modules (DIMM)” Qimonda 2006, p. 1.
Written Opinion from PCT Application No. PCT/US06/24360 mailed on Jan. 8, 2007.
Preliminary Report On Patentability from PCT Application No. PCT/US06/24360 mailed on Jan. 10, 2008.
Written Opinion from International PCT Application No. PCT/US06/34390 mailed on Nov. 21, 2007.
International Search Report from PCT Application No. PCT/US06/34390 mailed on Nov. 21, 2007.
Non-final Office Action from U.S. Appl. No. 11/461,430 mailed on Feb. 19, 2009.
Final Office Action from U.S. Appl. No. 11/461,435 mailed on Jan. 28, 2009.
Non-final Office Action from U.S. Appl. No. 11/461,437 mailed on Jan. 26, 2009.
Non-final Office Action from U.S. Appl. No. 11/939,432 mailed on Feb. 6, 2009.
Wu et al., “eNVy: A Non-Volatile, Main Memory Storage System,” ASPLOS-VI Proceedings—Sixth International Conference on Architectural Support for Programming Languages and Operating Systems, San Jose, California, Oct. 4-7, 1994. SIGARCH Computer Architecture News 22 (Special Issue Oct. 1994).
Written Opinion of the International Searching Authority for Related Foreign Application PCT/US2006/024360.
International Search Report for Related Foreign Application PCT/US2006/024360.
Office Action from U.S. Appl. No. 11/461,427 mailed on Sep. 5, 2008.
Final Office Action from U.S. Appl. No. 11/461,430 mailed on Sep. 8, 2008.
Notice of Allowance from U.S. Appl. No. 11/474,075 mailed on Nov. 26, 2008.
Office Action from U.S. Appl. No. 11/474,076 mailed on Nov. 3, 2008.
Office Action from U.S. Appl. No. 11/524,811 mailed on Sep. 17, 2008.
German Office Action From German Patent Application No. 11 2006 002 300.4-55 Mailed Jun. 5, 2009 (With Translation).
Non-Final Office Action From U.S. Appl. No. 11/461,441 Mailed Apr. 2, 2009.
Non-Final Office Action From U.S. Appl. No. 11/611,374 Mailed Mar. 23, 2009.
Non-Final Office Action From U.S. Appl. No. 11/762,010 Mailed Mar. 20, 2009.
Non-Final Office Action From U.S. Appl. No. 12/111,819 Mailed Apr. 27, 2009.
Non-Final Office Action From U.S. Appl. No. 12/111,828 Mailed Apr. 17, 2009.
Final Rejection From U.S. Appl. No. 11/461,437 Mailed Nov. 10, 2009.
Final Rejection from U.S. Appl. No. 11/762,010 Mailed Dec. 4, 2009.
Non-Final Rejection from U.S. Appl. No. 11/672,921 Mailed Dec. 8, 2009.
Non-Final Rejection from U.S. Appl. No. 11/672,924 Mailed Dec. 14, 2009.
Non-Final Rejection from U.S. Appl. No. 11/929,225 Mailed Dec. 14, 2009.
Non-Final Rejection from U.S. Appl. No. 11/929,261 Mailed Dec. 14, 2009.
Notice of Allowance From U.S. Appl. No. 11/611,374 Mailed Nov. 30, 2009.
Notice of Allowance From U.S. Appl. No. 11/939,432 Mailed Dec. 1, 2009.
Notice of Allowance From U.S. Appl. No. 12/111,819 Mailed Nov. 20, 2009.
Notice of Allowance From U.S. Appl. No. 12/111,828 Mailed Dec. 15, 2009.
Great Britain Office Action from GB Patent Application No. GB0800734.6 Mailed Mar. 1, 2010.
Final Office Action from U.S. Appl. No. 11/461,420 Mailed Apr. 28, 2010.
Notice of Allowance from U.S. Appl. No. 11/553,372 Mailed Mar. 12, 2010.
Notice of Allowance from U.S. Appl. No. 11/553,399 Mailed Mar. 22, 2010.
Non-Final Office Action from U.S. Appl. No. 11/588,739 Mailed Dec. 29, 2009.
Notice of Allowance from U.S. Appl. No. 11/611,374 Mailed Apr. 5, 2010.
Non-Final Office Action from U.S. Appl. No. 11/828,181 Mailed Mar. 2, 2010.
Non-Final Office Action from U.S. Appl. No. 11/828,182 Mailed Mar. 29, 2010.
Final Office Action from U.S. Appl. No. 11/858,518 Mailed Apr. 21, 2010.
Non-Final Office Action from U.S. Appl. No. 11/929,432 Mailed Jan. 14, 2010.
Non-Final Office Action from U.S. Appl. No. 11/929,571 Mailed Mar. 3, 2010.
Non-Final Office Action from U.S. Appl. No. 11/929,631 Mailed Mar. 3, 2010.
Non-Final Office Action from U.S. Appl. No. 11/929,636 Mailed Mar. 9, 2010.
Non-Final Office Action from U.S. Appl. No. 11/929,655 Mailed Mar. 3, 2010.
Non-Final Office Action from U.S. Appl. No. 11/939,432 Mailed Apr. 12, 2010.
Notice of Allowance from U.S. Appl. No. 12/111,819 Mailed Mar. 10, 2010.
Non-Final Office Action from U.S. Appl. No. 12/507,682 Mailed Mar. 8, 2010.
Examination Report from GB Intellectual Property Office Dated Mar. 1, 2010.
Final Office Action from U.S. Appl. No. 11/461,435 Dated May 13, 2010.
Final Office Action from U.S. Appl. No. 11/515,167 Dated Jun. 3, 2010.
Final Office Action from U.S. Appl. No. 11/553,390 Dated Jun. 24, 2010.
Notice of Allowance from U.S. Appl. No. 11/611,374 Dated Jul. 19, 2010.
Final Office Action from U.S. Appl. No. 11/672,921 Dated Jul. 23, 2010.
Final Office Action from U.S. Appl. No. 11/702,960 Dated Jun. 21, 2010.
Notice of Allowance from U.S. Appl. No. 11/762,010 Dated Jul. 2, 2010.
Notice of Allowance from U.S. Appl. No. 11/763,365 Dated Jun. 29, 2010.
Final Office Action from U.S. Appl. No. 11/929,500 Dated Jun. 24, 2010.
Office Action from U.S. Appl. No. 12/574,628 Dated Jun. 10, 2010.
Notice of Allowance from U.S. Appl. No. 11/553,372 Dated Aug. 4, 2010.
Final Office Action from U.S. Appl. No. 11/672,924 Dated Sep. 7, 2010.
Notice of Allowance from U.S. Appl. No. 11/762,010 Dated Oct. 22, 2010.
Notice of Allowance from U.S. Appl. No. 11/762,013 Dated Aug. 17, 2010.
Notice of Allowance from U.S. Appl. No. 11/763,365 Dated Oct. 20, 2010.
Non-Final Office Action from U.S. Appl. No. 11/855,805 Dated Sep. 21, 2010.
Non-Final Office Action from U.S. Appl. No. 11/858,518 Dated Sep. 8, 2010.
Final Office Action from U.S. Appl. No. 11/929,225 Dated Aug. 27, 2010.
Final Office Action from U.S. Appl. No. 11/929,261 Dated Sep. 7, 2010.
Final Office Action From U.S. Appl. No. 11/929,286 Dated Aug. 20, 2010.
Notice of Allowance From U.S. Appl. No. 11/929,320 Dated Sep. 29, 2010.
Final Office Action From U.S. Appl. No. 11/929,403 Dated Aug. 31, 2010.
Final Office Action From U.S. Appl. No. 11/929,417 Dated Aug. 31, 2010.
Final Office Action From U.S. Appl. No. 11/929,432 Dated Aug. 20, 2010.
Final Office Action From U.S. Appl. No. 11/929,450 Dated Aug. 20, 2010.
Notice of Allowance From U.S. Appl. No. 11/929,483 Dated Oct. 7, 2010.
Non-Final Office Action from U.S. Appl. No. 11/939,440 Dated Sep. 17, 2010.
Notice of Allowance From U.S. Appl. No. 11/941,589 Dated Oct. 25, 2010.
Non-Final Office Action from U.S. Appl. No. 12/057,306 Dated Oct. 8, 2010.
Non-Final Office Action from U.S. Appl. No. 12/838,896 Dated Sep. 3, 2010.
Fang, W. et al., “Power Complexity Analysis of Adiabatic SRAM,” 6th Int. Conference on ASIC, vol. 1, Oct. 2005, pp. 334-337.
Pavan, P. et al., “A Complete Model of E2PROM Memory Cells for Circuit Simulations,” IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, vol. 22, no. 8, Aug. 2003, pp. 1072-1079.
German Office Action from German Patent Application No. 11 2006 001 810.8-55 Dated Feb. 18, 2009 (With Translation).
Non-Final Office Action from U.S. Appl. No. 11/461,420 Dated Jul. 23, 2009.
Notice of Allowance from U.S. Appl. No. 11/461,430 Dated Sep. 10, 2009.
Non-Final Office Action from U.S. Appl. No. 11/461,435 Dated Aug. 5, 2009.
Non-Final Office Action from U.S. Appl. No. 11/515,167 Dated Sep. 25, 2009.
Non-Final Office Action from U.S. Appl. No. 11/538,041 Dated Jun. 10, 2009.
Non-Final Office Action from U.S. Appl. No. 11/553,372 Dated Jun. 25, 2009.
Notice of Allowance from U.S. Appl. No. 11/553,372 Dated Sep. 30, 2009.
Non-Final Office Action from U.S. Appl. No. 11/553,390 Dated Sep. 9, 2009.
Non-Final Office Action from U.S. Appl. No. 11/553,399 Dated Jul. 7, 2009.
Notice of Allowance from U.S. Appl. No. 11/553,399 Dated Oct. 13, 2009.
Notice of Allowance from U.S. Appl. No. 11/611,374 Dated Sep. 15, 2009.
Non-Final Office Action from U.S. Appl. No. 11/702,960 Dated Sep. 25, 2009.
Non-Final Office Action from U.S. Appl. No. 11/702,981 Dated Aug. 19, 2009.
Non-Final Office Action from U.S. Appl. No. 11/762,013 Dated Jun. 5, 2009.
Non-Final Office Action from U.S. Appl. No. 11/763,365 Dated Oct. 28, 2009.
Non-Final Office Action from U.S. Appl. No. 11/858,518 Dated Aug. 14, 2009.
Non-Final Office Action from U.S. Appl. No. 11/929,500 Dated Oct. 13, 2009.
Notice of Allowance from U.S. Appl. No. 11/939,432 Dated Sep. 24, 2009.
Non-Final Office Action from U.S. Appl. No. 11/941,589 Dated Oct. 1, 2009.
Supplemental European Search Report and Search Opinion issued Sep. 21, 2009 in European Application No. 07870726.2, 8 pp.
Wu et al., “eNVy: A Non-Volatile, Main Memory Storage System”, ASPLOS-VI Proceedings, Oct. 4-7, 1994, pp. 86-97.
Buffer Device for Memory Modules (DIMM), IP.com Prior Art Database, <URL: http://ip.com/IPCOM/000144850>, Feb. 10, 2007, 1 pg.
German Office Action from German Patent Application No. 11 2006 002 300.4-55 Dated May 11, 2009 (With Translation).
Great Britain Office Action from GB Patent Application No. GB0803913.3 Dated Mar. 1, 2010.
International Preliminary Examination Report From PCT Application No. PCT/US07/016385 Dated Feb. 3, 2009.
Search Report and Written Opinion From PCT Application No. PCT/US07/03460 Dated Feb. 14, 2008.
Notice of Allowance from U.S. Appl. No. 11/553,399 Dated Dec. 3, 2010.
Notice of Allowance from U.S. Appl. No. 11/611,374 Dated Oct. 29, 2010.
Non-Final Office Action from U.S. Appl. No. 11/702,981 Dated Mar. 11, 2009.
Notice of Allowance from U.S. Appl. No. 11/762,013 Dated Dec. 7, 2010.
Final Office Action from U.S. Appl. No. 11/929,631 Dated Nov. 18, 2010.
Final Office Action from U.S. Appl. No. 11/929,655 Dated Nov. 22, 2010.
Non-Final Office Action from U.S. Appl. No. 12/203,100 Dated Dec. 1, 2010.
Non-Final Office Action from U.S. Appl. No. 12/769,428 Dated Nov. 8, 2010.
Search Report From PCT Application No. PCT/US10/038041 Dated Aug. 23, 2010.
Non-Final Office Action from U.S. Appl. No. 11/461,437 Dated Jan. 4, 2011.
Non-Final Office Action from U.S. Appl. No. 11/553,372 Dated Jan. 5, 2011.
Final Office Action from U.S. Appl. No. 11/588,739 Dated Dec. 15, 2010.
Notice of Allowance from U.S. Appl. No. 11/762,010 Dated Feb. 18, 2011.
Final Office Action from U.S. Appl. No. 11/828,182 Dated Dec. 22, 2010.
Non-Final Office Action from U.S. Appl. No. 11/855,826 Dated Jan. 13, 2011.
Notice of Allowance from U.S. Appl. No. 11/939,432 Dated Feb. 18, 2011.
Notice of Allowance from U.S. Appl. No. 12/144,396 Dated Feb. 1, 2011.
Non-Final Office Action from U.S. Appl. No. 12/816,756 Dated Feb. 7, 2011.
Notice of Allowance from U.S. Appl. No. 11/762,013 Dated Feb. 23, 2011.
Notice of Allowance from U.S. Appl. No. 11/929,500 Dated Feb. 24, 2011.
Final Office Action from U.S. Appl. No. 12/574,628 Dated Mar. 3, 2011.
Final Office Action from U.S. Appl. No. 11/929,571 Dated Mar. 3, 2011.
Notice of Allowance from U.S. Appl. No. 11/611,374 Dated Mar. 4, 2011.
Notice of Allowance from U.S. Appl. No. 11/553,399 Dated Mar. 18, 2011.
Final Office Action from U.S. Appl. No. 12/507,682 Dated Mar. 29, 2011.
Non-Final Office Action from U.S. Appl. No. 11/929,403 Dated Mar. 31, 2011.
Non-Final Office Action from U.S. Appl. No. 11/929,417 Dated Mar. 31, 2011.
Notice of Allowance from U.S. Appl. No. 12/838,896 Dated Apr. 19, 2011.
Notice of Allowance from U.S. Appl. No. 11/702,981 Dated Apr. 25, 2011.
Notice of Allowance from U.S. Appl. No. 11/929,320 Dated May 5, 2011.
Final Office Action from U.S. Appl. No. 11/939,440 Dated May 19, 2011.
Final Office Action from U.S. Appl. No. 11/855,805, Dated May 26, 2011.
Non-Final Office Action from U.S. Appl. No. 11/672,921 Dated May 27, 2011.
Notice of Allowance from U.S. Appl. No. 11/762,010 Dated Jun. 8, 2011.
Non-Final Office Action from U.S. Appl. No. 11/672,924 Dated Jun. 8, 2011.
Non-Final Office Action from U.S. Appl. No. 11/929,225 Dated Jun. 8, 2011.
Notice of Allowance from U.S. Appl. No. 11/929,500 Dated Jun. 13, 2011.
Notice of Allowance from U.S. Appl. No. 11/941,589 Dated Jun. 15, 2011.
Final Office Action from U.S. Appl. No. 12/057,306 Dated Jun. 15, 2011.
Final Office Action from U.S. Appl. No. 12/769,428 Dated Jun. 16, 2011.
Notice of Allowance from U.S. Appl. No. 12/203,100 Dated Jun. 17, 2011.
Notice of Allowance from U.S. Appl. No. 11/762,013 Dated Jun. 20, 2011.
Non-Final Office Action from U.S. Appl. No. 12/797,557 Dated Jun. 21, 2011.
Notice of Allowance from U.S. Appl. No. 11/929,483 Dated Jun. 23, 2011.
Non-Final Office Action from U.S. Appl. No. 11/702,960 Dated Jun. 23, 2011.
Non-Final Office Action from U.S. Appl. No. 11/929,655 Dated Jun. 24, 2011.
Notice of Allowance from U.S. Appl. No. 11/763,365 Dated Jun. 24, 2011.
Notice of Allowance from U.S. Appl. No. 11/611,374 Dated Jun. 24, 2011.
Non-Final Office Action from U.S. Appl. No. 11/828,182 Dated Jun. 27, 2011.
Non-Final Office Action from U.S. Appl. No. 11/828,181 Dated Jun. 27, 2011.
Non-Final Office Action from U.S. Appl. No. 12/378,328 Dated Jul. 15, 2011.
Final Office Action from U.S. Appl. No. 11/461,420 Dated Jul. 20, 2011.
Notice of Allowance from U.S. Appl. No. 11/461,437 Dated Jul. 25, 2011.
Notice of Allowance from U.S. Appl. No. 11/702,981 Dated Aug. 5, 2011.
Notice of Allowability from U.S. Appl. No. 11/855,826 Dated Aug. 15, 2011.
Non-Final Office Action from U.S. Appl. No. 12/574,628 Dated Sep. 20, 2011.
Kellerbauer, “Die schnelle Million,” with translation, “The Quick Million.”
“Bios and Kernel Developer's Guide (BKDG) for AMD Family 10h Processors,” AMD, 31116 Rev 3.00, Sep. 7, 2007.
Final Office Action from U.S. Appl. No. 13/276,212, Dated Aug. 30, 2012.
Non-Final Office Action from U.S. Appl. No. 13/367,182, Dated Aug. 31, 2012.
Notice of Allowance from U.S. Appl. No. 11/461,420, Dated Sep. 5, 2012.
Final Office Action from U.S. Appl. No. 13/280,251, Dated Sep. 12, 2012.
Non-Final Office Action from U.S. Appl. No. 11/929,225, Dated Sep. 17, 2012.
Notice of Allowance from U.S. Appl. No. 12/508,496, Dated Sep. 17, 2012.
Non-Final Office Action from U.S. Appl. No. 11/672,921, Dated Oct. 1, 2012.
Notice of Allowance from U.S. Appl. No. 12/057,306, Dated Oct. 10, 2012.
Notice of Allowance from U.S. Appl. No. 12/144,396, Dated Oct. 11, 2012.
Non-Final Office Action from U.S. Appl. No. 13/411,489, Dated Oct. 17, 2012.
Non-Final Office Action from U.S. Appl. No. 13/471,283, Dated Dec. 7, 2012.
English translation of Office Action from co-pending Korean patent application No. KR1020087005172, dated Dec. 20, 2012.
Office Action, including English translation, from co-pending Japanese application No. 2008-529353, Dated Dec. 27, 2012.
Final Office Action from U.S. Appl. No. 11/672,924, Dated Feb. 1, 2013.
Office Action from co-pending European patent application No. EP12150798, Dated Jan. 3, 2013.
Non-Final Office Action from U.S. Appl. No. 13/260,650, Dated Feb. 1, 2013.
Notice of Allowance from U.S. Appl. No. 13/141,844, Dated Feb. 5, 2013.
U.S. Appl. No. 13/620,425, filed Sep. 14, 2012, Rajan et al., Memory Module With Memory Stack and Interface With Enhanced Capabilities.
U.S. Appl. No. 13/620,565, filed Sep. 14, 2012, Rajan et al., Methods and Apparatus of Stacking DRAMs.
U.S. Appl. No. 13/620,424, filed Sep. 14, 2012, Danilak et al., System and Method for Increasing Capacity, Performance, and Flexibility of Flash Storage.
U.S. Appl. No. 13/615,008, filed Sep. 13, 2012, Smith et al., Adjusting the Timing of Signals Associated With a Memory System.
U.S. Appl. No. 13/620,650, filed Sep. 14, 2012, Rajan et al., System and Method for Translating an Address Associated With a Command Communicated Between a System and Memory Circuits.
U.S. Appl. No. 13/618,246, filed Sep. 14, 2012, Smith et al., Memory Modules With Reliability and Serviceability Functions.
U.S. Appl. No. 13/620,645, filed Sep. 14, 2012, Schakel et al., Method and Apparatus for Refresh Management of Memory Modules.
U.S. Appl. No. 13/620,412, filed Sep. 14, 2012, Zohni et al., Embossed Heat Spreader.
U.S. Appl. No. 13/620,793, filed Sep. 15, 2012, Rosenband et al., Hybrid Memory Module.
U.S. Appl. No. 13/620,207, filed Sep. 14, 2012, Rajan et al., Configurable Memory System.
Non-Final Office Action from U.S. Appl. No. 11/858,518 Dated Sep. 27, 2011.
Notice of Allowance from U.S. Appl. No. 11/929,571 Dated Sep. 27, 2011.
Notice of Allowance from U.S. Appl. No. 11/929,500 Dated Sep. 27, 2011.
Notice of Allowance from U.S. Appl. No. 11/941,589 Dated Sep. 30, 2011.
Notice of Allowance from U.S. Appl. No. 12/816,756 Dated Oct. 3, 2011.
Non-Final Office Action from U.S. Appl. No. 12/508,496 Dated Oct. 11, 2011.
Non-Final Office Action from U.S. Appl. No. 11/588,739 Dated Oct. 13, 2011.
Notice of Allowance from U.S. Appl. No. 11/939,432 Dated Oct. 24, 2011.
Non-Final Office Action from U.S. Appl. No. 11/929,631 Dated Nov. 1, 2011.
Non-Final Office Action from U.S. Appl. No. 11/553,372 Dated Nov. 14, 2011.
Notice of Allowance from U.S. Appl. No. 12/769,428 Dated Nov. 29, 2011.
Final Office Action from U.S. Appl. No. 11/939,440 Dated Dec. 12, 2011.
Notice of Allowance from U.S. Appl. No. 12/797,557 Dated Dec. 28, 2011.
Office Action, including English translation, from related Japanese application No. 2008-529353, Dated Jan. 10, 2012.
Notice of Allowance from U.S. Appl. No. 12/838,896 Dated Jan. 18, 2012.
Final Office Action from U.S. Appl. No. 11/929,655 Dated Jan. 19, 2012.
Final Office Action from U.S. Appl. No. 12/378,328 Dated Feb. 3, 2012.
Final Office Action from U.S. Appl. No. 11/672,921 Dated Feb. 16, 2012.
Final Office Action from U.S. Appl. No. 11/672,924 Dated Feb. 16, 2012.
Final Office Action from U.S. Appl. No. 11/929,225 Dated Feb. 16, 2012.
Final Office Action from U.S. Appl. No. 11/828,181 Dated Feb. 23, 2012.
International Search Report for Application No. EP12150807 Dated Feb. 16, 2012.
Non-Final Office Action from U.S. Appl. No. 11/461,520 Dated Feb. 29, 2012.
Notice of Allowance from U.S. Appl. No. 12/574,628 Dated Mar. 6, 2012.
Non-Final Office Action from U.S. Appl. No. 13/276,212 Dated Mar. 15, 2012.
Non-Final Office Action from U.S. Appl. No. 13/343,612 Dated Mar. 29, 2012.
Notice of Allowance from U.S. Appl. No. 11/939,440 Dated Mar. 30, 2012.
European Search Report from co-pending European application No. 11194876.6-2212/2450798, Dated Apr. 12, 2012.
European Search Report from co-pending European application No. 11194862.6-2212/2450800, Dated Apr. 12, 2012.
Notice of Allowance from U.S. Appl. No. 11/929,636, Dated Apr. 17, 2012.
Final Office Action from U.S. Appl. No. 11/858,518, Dated Apr. 17, 2012.
European Search Report from co-pending European application No. 11194883.2-2212, Dated Apr. 27, 2012.
Non-Final Office Action from U.S. Appl. No. 11/553,372, Dated May 3, 2012.
Notice of Allowance from U.S. Appl. No. 11/929,631, Dated May 3, 2012.
Non-Final Office Action from U.S. Appl. No. 13/165,713, Dated May 22, 2012.
Non-Final Office Action from U.S. Appl. No. 12/144,396, Dated May 29, 2012.
Non-Final Office Action from U.S. Appl. No. 13/165,713, Dated May 31, 2012.
Non-Final Office Action from U.S. Appl. No. 13/280,251, Dated Jun. 12, 2012.
Final Office Action from U.S. Appl. No. 11/855,805, Dated Jun. 14, 2012.
Office Action, including English translation, from co-pending Japanese application No. 2008-529353, Dated Jul. 31, 2012.
Final Office Action from U.S. Appl. No. 13/315,933, Dated Aug. 24, 2012.
Notice of Allowance from U.S. Appl. No. 13/473,827, Dated Feb. 15, 2013.
Notice of Allowance from U.S. Appl. No. 12/378,328, Dated Feb. 27, 2013.
Non-Final Office Action from U.S. Appl. No. 13/536,093, Dated Mar. 1, 2013.
Office Action from co-pending Japanese patent application No. 2012-132119, Dated Mar. 6, 2013.
Notice of Allowance from U.S. Appl. No. 11/461,435, Dated Mar. 6, 2013.
Notice of Allowance from U.S. Appl. No. 13/471,283, Dated Mar. 21, 2013.
Extended European Search Report for co-pending European patent application No. EP12150807.1, dated Feb. 1, 2013, mailed Mar. 22, 2013.
Notice of Allowance from U.S. Appl. No. 13/181,716, Dated Apr. 3, 2013.
English translation of Office Action from co-pending Korean patent application No. KR1020087019582, Dated Mar. 13, 2013.
Notice of Allowance from U.S. Appl. No. 13/618,246, Dated Apr. 23, 2013.
Notice of Allowance from U.S. Appl. No. 13/182,234, Dated May 1, 2013.
Final Office Action from U.S. Appl. No. 13/315,933, Dated May 3, 2013.
English Translation of Office Action from co-pending Korean patent application No. 10-2013-7004006, Dated Apr. 12, 2013.
Non-Final Office Action from U.S. Appl. No. 13/620,793, Dated May 6, 2013.
Non-Final Office Action from U.S. Appl. No. 13/620,565, Dated May 24, 2013.
Final Office Action from U.S. Appl. No. 11/929,225, Dated May 24, 2013.
Final Office Action from U.S. Appl. No. 11/672,921, Dated May 24, 2013.
Notice of Allowance from U.S. Appl. No. 11/929,631, Dated May 28, 2013.
Notice of Allowance from U.S. Appl. No. 13/620,424, Dated May 29, 2013.
Notice of Allowance from U.S. Appl. No. 13/341,844, Dated May 30, 2013.
Non-Final Office Action from U.S. Appl. No. 13/455,691, Dated Jun. 4, 2013.
Non-Final Office Action from U.S. Appl. No. 13/620,199, Dated Jun. 17, 2013.
Non-Final Office Action from U.S. Appl. No. 13/620,207, Dated Jun. 20, 2013.
Non-Final Office Action from U.S. Appl. No. 11/828,182, Dated Jun. 20, 2013.
Final Office Action from U.S. Appl. No. 11/828,181, Dated Jun. 20, 2013.
Non-Final Office Action from U.S. Appl. No. 11/929,655, Dated Jun. 21, 2013.
Notice of Allowance from U.S. Appl. No. 13/597,895, Dated Jun. 25, 2013.
Non-Final Office Action from U.S. Appl. No. 13/620,645, Dated Jun. 26, 2013.
Notice of Allowance from U.S. Appl. No. 13/471,283, Dated Jun. 28, 2013.
Notice of Allowance from U.S. Appl. No. 13/181,747, Dated Jul. 9, 2013.
Notice of Allowance from U.S. Appl. No. 13/182,234, Dated Jul. 22, 2013.
Notice of Allowance from U.S. Appl. No. 13/181,716, Dated Jul. 22, 2013.
Non-Final Office Action from U.S. Appl. No. 13/620,233, Dated Aug. 2, 2013.
Final Office Action from U.S. Appl. No. 13/367,182, Dated Aug. 8, 2013.
Notice of Allowance from U.S. Appl. No. 13/615,008, Dated Aug. 15, 2013.
Notice of Allowance from U.S. Appl. No. 13/620,425, Dated Aug. 20, 2013.
Non-Final Office Action from U.S. Appl. No. 13/620,601, Dated Aug. 23, 2013.
Non-Final Office Action from U.S. Appl. No. 12/507,683, Dated Aug. 27, 2013.
Non-Final Office Action from U.S. Appl. No. 13/315,933, Dated Aug. 27, 2013.
Final Office Action from U.S. Appl. No. 13/620,650, Dated Aug. 30, 2013.
Notice of Allowance from U.S. Appl. No. 13/620,424, Dated Sep. 11, 2013.
Non-Final Office Action from U.S. Appl. No. 13/620,291, Dated Sep. 12, 2013.
Notice of Allowance from U.S. Appl. No. 13/341,844, Dated Sep. 17, 2013.
Notice of Allowance from U.S. Appl. No. 13/620,412, dated Sep. 25, 2013.
Non-Final Office Action from U.S. Appl. No. 13/343,852, dated Sep. 27, 2013.
English Translation of Office Action from co-pending Korean patent application No. 10-2008-7019582, dated Sep. 16, 2013.
Notice of Allowance from U.S. Appl. No. 13/620,565, dated Sep. 27, 2013.
Non-Final Office Action from U.S. Appl. No. 13/279,068, dated Sep. 30, 2013.
Notice of Allowance from U.S. Appl. No. 13/620,207, dated Oct. 9, 2013.
Non-Final Office Action from U.S. Appl. No. 13/898,002, dated Oct. 10, 2013.
Related Publications (1)
Number Date Country
20070058471 A1 Mar 2007 US
Provisional Applications (1)
Number Date Country
60713815 Sep 2005 US