The subject matter presented herein relates generally to computer memory.
Personal computers, workstations, and servers commonly include at least one processor, such as a central processing unit (CPU), and some form of memory system that includes dynamic, random-access memory (DRAM). The processor executes instructions and manipulates data stored in the DRAM.
DRAM stores binary bits by alternately charging or discharging capacitors to represent the logical values one and zero. The capacitors are exceedingly small. Their ability to store charge can be hindered by manufacturing variations or operational stresses, and their stored charges can be upset by electrical interference or high-energy particles. The resultant changes to the stored instructions and data produce undesirable computational errors. Some computer systems, such as high-end servers, employ various forms of error detection and correction to manage DRAM errors, or even more permanent memory failures.
Memory system 100 includes sixteen data aggregators 105, one of which is shown, each servicing memory requests from a memory controller and/or processor (not shown) via eight ten-conductor 6Q/4D primary links. One or more aggregators 105 can be integrated-circuit (IC) memory buffers that buffer and steer signals between an external processor and DRAM components. Each primary link 6Q/4D communicates with a corresponding memory slice 107, each of which includes an 8 GB memory component, a stack of four fourth-generation, low-power, double-data-rate (LPDDR4) memory die in this example. Each LPDDR4 die includes two sets of eight banks 109 coupled to a DRAM interface 113 that communicates data and control signals between the DRAM stacks and a serializer/deserializer SERDES 117 via respective local sixteen-trace channels 114. A local controller 115 in each slice 107 steers data via interface 113 responsive to access requests received from the corresponding 6Q/4D primary link.
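As a rough orientation, the following Python sketch tallies the resources implied by this example topology; the identifier names are illustrative rather than drawn from the specification.

    # Illustrative tally of the example topology described above; identifier
    # names are hypothetical and the values come from this example.
    AGGREGATORS = 16            # data aggregators 105
    LINKS_PER_AGGREGATOR = 8    # 6Q/4D primary links per aggregator
    SLICES = AGGREGATORS * LINKS_PER_AGGREGATOR   # one slice 107 per primary link
    DIE_PER_STACK = 4           # LPDDR4 die per stack
    BANKS_PER_DIE = 2 * 8       # two sets of eight banks 109
    SLICE_CAPACITY_GB = 8       # one four-die, 8 GB stack per slice

    print(SLICES)                             # 128 slices
    print(SLICES * SLICE_CAPACITY_GB)         # 1024 GB (1 TB) in total
    print(DIE_PER_STACK * BANKS_PER_DIE)      # 64 banks per slice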
In this example, one hundred twenty-eight 8 GB slices 107 provide a total of 1 TB of memory space addressable via forty-bit physical addresses PA[39:0] (2⁴⁰ B=1 TB). From the requesting processor's perspective, the seven most-significant bits PA[39:33] specify a slice 107; bits PA[32:18] specify a row Row[i] of memory cells in banks 109; bits PA[17:15] specify a local channel 114; bits PA[14:11] specify a rank/bank; bits PA[10:5] specify a column; and bits PA[4:0] specify a byte. Of the rank/bank bits PA[14:11], three bits identify the rank and one bit distinguishes between two devices per secondary channel.
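This address partitioning can be made concrete with a short Python sketch that simply mirrors the bit ranges listed above; the function and field names are illustrative.

    # Hypothetical decode of a 40-bit physical address PA[39:0] into the
    # fields described above (field names are illustrative).
    def decode_pa(pa: int) -> dict:
        assert 0 <= pa < (1 << 40)
        return {
            "slice":     (pa >> 33) & 0x7F,    # PA[39:33], 7 bits
            "row":       (pa >> 18) & 0x7FFF,  # PA[32:18], 15 bits
            "channel":   (pa >> 15) & 0x7,     # PA[17:15], 3 bits
            "rank_bank": (pa >> 11) & 0xF,     # PA[14:11], 4 bits
            "column":    (pa >> 5)  & 0x3F,    # PA[10:5],  6 bits
            "byte":      pa & 0x1F,            # PA[4:0],   5 bits
        }

    # Example: the highest address whose slice bits equal 1101111b.
    print(decode_pa(0b1101111_111111111111111_111_1111_111111_11111))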
The external processor employing memory system 100 is configured to perceive memory system 100 as providing 896 GB of memory. This 896 GB first region, seven eighths of the usable capacity, is available to the external processor via slice-address bits Slice[6:0] in the range from 0000000b to 1101111b. In this context, “usable” memory refers to memory available to the local and remote processors, and is distinct from the redundant columns of memory cells and related repair circuitry commonly included in DRAM devices to compensate for defective memory resources (e.g., defective memory cells).
Local controllers 115 can be configured to send an error message responsive to external memory requests that specify a slice address above this range (Slice[6:4]=111b). The remaining eighth of the memory capacity, a second region of 128 GB in slice address range Slice[6:0]=111XXXXb, is inaccessible to the external processor but available to local controllers 115 to store, e.g., error-detection-and-correction (EDC) codes. Seven eighths of the 1 TB of usable storage capacity of memory system 100 is thus allocated for data storage and one eighth reserved for, e.g., EDC code storage.
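A minimal sketch of the corresponding slice-range check follows; the helper name is hypothetical and the capacities repeat the figures given above.

    # Minimal sketch of the slice-range check a local controller 115 might
    # apply to an external request; the helper name is hypothetical.
    TOTAL_GB = 128 * 8               # 128 slices x 8 GB = 1024 GB (1 TB)
    MAX_EXTERNAL_SLICE = 0b1101111   # top of the first, processor-visible region

    def external_request_ok(slice_addr: int) -> bool:
        """True if the 7-bit slice address lies in the first region."""
        return 0 <= slice_addr <= MAX_EXTERNAL_SLICE

    print(TOTAL_GB * 7 // 8)                 # 896 GB visible to the external processor
    print(TOTAL_GB // 8)                     # 128 GB reserved second region
    print(external_request_ok(0b1101111))    # True  -> serviced normally
    print(external_request_ok(0b1110000))    # False -> error message returned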
Local controllers 115 remap physical address bits PA[10:8] to the three MSBs PA[39:37] so that the three MSBs specify the most-significant column-address bits Col[5:3]. The remaining address fields are shifted right three places in this example but can be otherwise rearranged in other embodiments. The three most-significant bits PA[39:37] of the physical address should never be 111b because the remote processor is address constrained to a maximum slice address of 1101111b. Because local controllers 115 remap the three most-significant bits to column-address bits Col[5:3], requests directed to memory system 100 will never be directed to column addresses 111XXXb. These high-order column addresses are thus reserved for EDC codes.
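The consequence of this remapping can be illustrated with a brief sketch; only the Col[5:3] substitution stated above is modeled, since the placement of the remaining address fields is embodiment-specific.

    # Hedged sketch of the column-bit remapping: the external address's three
    # MSBs PA[39:37] supply internal column bits Col[5:3], so externally
    # requested columns never land in the 111XXXb range.
    def internal_col_high_bits(pa: int) -> int:
        """Internal Col[5:3], taken from external PA[39:37]."""
        return (pa >> 37) & 0b111

    MAX_EXTERNAL_PA = (0b1101111 << 33) | ((1 << 33) - 1)   # top of the first region
    print(format(internal_col_high_bits(MAX_EXTERNAL_PA), "03b"))   # '110', never '111'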
Local controllers 115 take advantage of the remapped column-address bits to store data and related EDC codes in the same row of the same bank 109. As is typical in DRAM memory devices, a row of memory cells is “opened” in response to an access request, a process in which the values stored in each cell of the open row are sensed and latched. A column address then selects a column of latched bits to communicate via an external bus. Opening a row takes time and power. Reading the latched bits from different columns of the same open row is thus relatively fast and efficient. Likewise, local controllers 115 open only one row to write data and an associated EDC code that the controllers 115 calculate from the data using well-known techniques.
Beginning with step 205, the selected local controller 115 directs a first access request to column Col[001000b] (or Col[08] decimal), receiving an encrypted 32 B column block 210 in response. Local controller 115 sends a second read request 215 to column Col[001001b] (Col[09]) of the same row Row[i] to obtain a second encrypted 32 B column block 220. A third read access 225 to Col[111001b] (Col[57]) reads a 32 B column block composed of four 8 B cachelines, one cacheline for each pair of columns Col[001XXXb]. The selected local controller 115 uses the 8 B EDC cacheline 230 associated with columns Col[001000b,001001b] to detect and correct errors (235) in column blocks 210 and 220, and thus to provide 64 B of error-corrected data 240.
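The column arithmetic of this transaction can be reproduced with a short illustrative helper; the assignment of column pairs to cacheline slots within the EDC block is an assumption consistent with the example, as the text states only that one 8 B cacheline exists per pair.

    # Illustrative helper reproducing the column arithmetic above: a 64 B
    # access to the even/odd column pair {c, c+1} finds its 8 B EDC cacheline
    # in a reserved 111XXXb column of the same row. Slot ordering within the
    # 32 B EDC block is an assumption.
    def edc_location(data_col: int):
        assert data_col % 2 == 0 and 0 <= data_col <= 0b110111
        edc_col = 0b111000 | (data_col >> 3)   # one EDC column per eight data columns
        pair_slot = (data_col >> 1) & 0b11     # four 8 B cachelines per EDC block
        return edc_col, pair_slot * 8          # byte offset within the 32 B block

    print(edc_location(0b001000))   # (57, 0): Col[111001b], first 8 B cacheline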
In this embodiment the error-corrected data is encrypted, and column Col[111111b] (Col[63]) stores 28 byte-wide keys, one for each pair of columns in the first region, leaving an extra four bytes 241 for other purposes. In step 245, the selected local controller 115 reads the 1 B key 250 associated with columns Col[001000b,001001b] to decrypt error-corrected data 240 (process 255) and thus provide 64 B of error-corrected, decrypted data 260. This data is passed to the SERDES 117 in the selected slice 107 and transmitted to the external processor that conveyed the initial read request (step 265). The order of column accesses to the same row can be different in other embodiments.
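A comparable sketch covers the key lookup of step 245; it assumes the 28 keys are packed in column-pair order starting at byte zero of Col[111111b], a packing order not specified above.

    # Sketch of the key lookup in step 245; the packing order of the 28
    # one-byte keys within Col[111111b] is an assumption.
    KEY_COLUMN = 0b111111                      # Col[63]: 28 key bytes + 4 spare bytes 241

    def key_byte_offset(data_col: int) -> int:
        assert 0 <= data_col <= 0b110111       # first-region data columns only
        return data_col >> 1                   # one key per column pair: offsets 0..27

    print(key_byte_offset(0b001000))           # 4 -> fifth key byte of Col[111111b]
    print(32 - key_byte_offset(0b110111) - 1)  # 4 spare bytes remain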
Local controller 115 can use repair element 300A (300B) to store: (1) a 13-bit address identifying a defective bit location in the lower half (upper half) of the corresponding row; (2) a replacement bit D to replace a suspect bit read from the defective location; (3) a valid bit V that local controller 115 sets when it notes the defective location and employs the repair element; and (4) a parity bit P that local controller 115 sets to one or zero during writes to the repair element such that the sum of the bits in the repair element is always even (or odd).
During a read transaction, local controller 115 considers whether either or both repair elements correspond to a bit address retrieved in any column access of the pending transaction. If so, and if the valid and parity bits V and P indicate the availability of a valid, error-free replacement bit D, then local controller 115 replaces the bit read from the defective location with replacement bit D. Local controller 115 may await consideration of repair elements 300A and 300B before applying the ECC and decryption steps. For reduced latency, the ECC and decryption steps may instead begin before or during consideration of repair elements 300A and 300B and be repeated with the replacement bit if a corresponding repair element is noted.
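The repair-element check can be sketched as follows; the bit packing within the element is an arbitrary illustrative choice, and even parity is assumed.

    # Minimal sketch of a repair element and its use during a read; the field
    # packing is an arbitrary illustrative choice, and even parity is assumed.
    def make_repair_element(bit_addr: int, d: int, v: int = 1) -> int:
        assert 0 <= bit_addr < (1 << 13) and d in (0, 1) and v in (0, 1)
        elem = (bit_addr << 3) | (d << 2) | (v << 1)   # parity bit P occupies bit 0
        parity = bin(elem).count("1") & 1              # force an even total pop-count
        return elem | parity

    def apply_repair(elem: int, read_bit_addr: int, read_bit: int) -> int:
        """Substitute replacement bit D only for a valid, parity-clean element
        whose stored address matches the bit just read."""
        if bin(elem).count("1") & 1:        # parity error: distrust the element
            return read_bit
        if not (elem >> 1) & 1:             # valid bit V is clear
            return read_bit
        if (elem >> 3) != read_bit_addr:    # element covers a different location
            return read_bit
        return (elem >> 2) & 1              # replacement bit D

    elem = make_repair_element(bit_addr=0x0ABC, d=1)
    print(apply_repair(elem, 0x0ABC, read_bit=0))   # 1 -> suspect bit replaced
    print(apply_repair(elem, 0x0ABD, read_bit=0))   # 0 -> read bit passes through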
Table 420 highlights a configuration corresponding to Apso value six (110b) in which fourteen aggregators 402 each support eight slices 403, and each slice provides access to two stacks 415 of four 8 GB memory devices, providing 896 GB of usable memory. Of this memory, 56/64ths is used for data storage and 7/64ths for EDC. The remaining 1/64th is available for other uses. Each of the 112 6Q/4D primary links provides a data bandwidth of 9.6 GB/s for a total primary data bandwidth of 1,075 GB/s. Each secondary link provides a data bandwidth of 4.977 GB/s for a total secondary bandwidth of 4,459 GB/s.
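These table entries are consistent with the following back-of-the-envelope check, which assumes 8 GB of usable memory per slice as in the earlier example and eight secondary (local) channels per slice, matching the three channel-address bits PA[17:15].

    # Back-of-the-envelope check of the highlighted configuration; 8 GB per
    # slice and eight secondary channels per slice are assumptions carried
    # over from the earlier example.
    aggregators, slices_per_aggregator = 14, 8
    primary_links = aggregators * slices_per_aggregator   # 112 6Q/4D primary links
    usable_gb = primary_links * 8                         # 8 GB per slice -> 896 GB

    print(primary_links, usable_gb)                       # 112 links, 896 GB usable
    print(usable_gb * 56 // 64, usable_gb * 7 // 64)      # 784 GB data, 98 GB EDC
    print(round(primary_links * 9.6))                     # ~1075 GB/s primary bandwidth
    secondary_links = primary_links * 8                   # eight local channels per slice
    print(round(secondary_links * 4.977))                 # ~4459 GB/s secondary bandwidth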
Slice bits SliceM are conveyed as physical slice address Aps[2:0] and column bits ColM are conveyed as physical column address Apc[2:0]. These six bits define sixty-four blocks in processor address space 510A/B. The region of processor address space 510A/B unavailable to the external processor is cross-hatched in space 510B.
Mapping logic 500 remaps addresses in which column address ColM is 111b to a higher address range, as indicated by arrows, to reserve column addresses Col[111XXXb] for EDC values, etc., as detailed above in connection with memory system 100.
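Viewed at the block level, the sketch below (a hypothetical helper) counts the blocks that remain visible to the external processor once the ColM=111b blocks are held in reserve.

    # Hedged sketch of the block view in processor address space 510A/B:
    # SliceM and ColM together index sixty-four blocks, and the eight blocks
    # whose ColM field is 111b are withheld from the external processor.
    def block_visible(slice_m: int, col_m: int) -> bool:
        assert 0 <= slice_m < 8 and 0 <= col_m < 8
        return col_m != 0b111                  # ColM=111b blocks reserved for EDC, etc.

    visible = sum(block_visible(s, c) for s in range(8) for c in range(8))
    print(visible, "of 64 blocks visible")     # 56 of 64 blocks visible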
While the invention has been described with reference to specific embodiments thereof, it will be evident that various modifications and changes may be made thereto without departing from the broader spirit and scope of the invention. For example, features or aspects of any of the embodiments may be applied, at least where practicable, in combination with any other of the embodiments or in place of counterpart features or aspects thereof. Moreover, some components are shown directly connected to one another while others are shown connected via intermediate components. In each instance the method of interconnection, or “coupling,” establishes some desired electrical communication between two or more circuit nodes, or terminals. Such coupling may often be accomplished using a number of circuit configurations, as will be understood by those of skill in the art. Therefore, the spirit and scope of the appended claims should not be limited to the foregoing description. Only those claims specifically reciting “means for” or “step for” should be construed in the manner required under the sixth paragraph of 35 U.S.C. § 112.