Processing units (PUs) execute instructions to read, manipulate, and write data. Both the instructions and the data are commonly stored in a separate memory, which is coupled to the PU via a communication channel. In a common example, a personal computer (PC) includes a central processing unit (CPU) coupled to a quantity of dynamic random-access memory (DRAM) via a channel called a “memory bus.”
The speed at which a PU can process instructions depends in part on how fast the memory is able to read and write instructions and data, which in turn depends in part on the speed with which signals can be communicated over the memory bus. Faster computers ideally employ faster memory buses, so considerable resources have been expended to improve the speed performance of memory buses.
Memory buses are commonly “multi-drop,” which means that a number of memory devices can share the same channel. Multi-drop buses are desirable because they allow manufacturers and users the flexibility to provide different types and amounts of memory. However, multi-drop buses tend to degrade signals, and thus reduce speed performance. An alternative to multi-drop buses, so-called “point-to-point” connections, connects the PU directly to one or more memories, and thus avoids the signal degradation that results from bus sharing. One problem with these systems is that point-to-point connection resources are wasted unless the memory system has the maximum number of memories. In a topology that supports two memory modules, for example, half the point-to-point interconnects would be wasted in a one-module configuration.
The assignee of the instant application developed “Dynamic Point-to-Point (DPP)” memory-bus topologies that allow manufacturers and computer users the flexibility to provide different numbers of memory modules in a manner similar to multi-drop buses but without the wasted connection resources that can result in conventional point-to-point topologies. In DPP topologies, the same number of point-to-point connections can be used for different numbers of memories. Most memories and memory systems do not support DPP connectivity, and thus lack the benefits of these systems. There is therefore a need for simple and inexpensive means for speeding the adoption of this important technology.
The figures are illustrations by way of example, and not by way of limitation. Like reference numerals in the figures refer to similar elements.
Command multiplexer 105 directs commands received on one of two command ports CA0 and CA1 to a command decoder 125 within control logic 110. Control logic 110 responds to decoded requests by issuing appropriately timed bank, row, and column address signals Bank/Row and Bank/Col, and control signals Ctrl0 and Ctrl1, to core 115.
Core 115 includes row and column address decoders 130 and 135, K memory banks 140[K-1:0], and a data interface with two four-bit read/write queues 145 and 150 that communicate data via respective ports DQ[3:0] and DQ[7:4]. Each bank 140, in turn, includes J sub-banks 155[J-1:0], each populated with rows and columns of memory cells (not shown), and a column multiplexer 160.
Control logic 110 and DRAM core 115 support memory functionality that is well understood by those of skill in the art. Briefly, control logic 110 decodes incoming commands and issues control and timing signals to core 115 to carry out the requested operation. For example, control logic 110 can send row address, bank address and control information to row decoder 130 in response to a row-activation command, and column address, bank address and control information to column decoder 135 and control signals to data queues 145 and 150 in response to a column-access command. Data can be read from or written to core 115 via one or both of ports DQ[3:0] and DQ[7:4] responsive to these signals.
DRAM core 115 is data-width programmable, responsive to the value stored in register 120 in this example, to communicate either four-bit-wide data on one of ports DQ[3:0] and DQ[7:4], or eight-bit-wide data simultaneously on both. In the eight-bit configuration, control logic 110 enables both of queues 145 and 150, and the addressing provided to column decoder 135 causes column multiplexer 160 to communicate eight bits in parallel from two or more sub-banks. In the four-bit configuration, control logic 110 enables one of queues 145 and 150 and halves the number of sub-banks used for data access. Halving the number of sub-banks reduces the power required for, e.g., row activation, and consequently reduces power consumption. Other embodiments support more and different data widths.
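A minimal behavioral sketch, in Python, of the width-programmable behavior described above: the narrow configuration enables one of the two four-bit queues and halves the number of active sub-banks. The sub-bank count, class names, and method names are illustrative assumptions, not details of the circuit itself.

```python
# Behavioral sketch only: models a data-width-programmable core that enables
# one or both four-bit read/write queues and halves the active sub-banks in
# the narrow configuration. NUM_SUBBANKS and all names are assumptions.

NUM_SUBBANKS = 8  # "J" sub-banks per bank; value chosen for illustration


class Queue:
    """Stands in for a four-bit read/write queue (e.g., 145 or 150)."""
    def __init__(self, port_name):
        self.port_name = port_name
        self.enabled = False


class WidthProgrammableCore:
    def __init__(self):
        self.q_lo = Queue("DQ[3:0]")   # corresponds to queue 145
        self.q_hi = Queue("DQ[7:4]")   # corresponds to queue 150
        self.active_subbanks = NUM_SUBBANKS

    def configure(self, width_bits):
        if width_bits == 8:
            # Wide mode: both queues enabled; all sub-banks participate.
            self.q_lo.enabled = self.q_hi.enabled = True
            self.active_subbanks = NUM_SUBBANKS
        elif width_bits == 4:
            # Narrow mode: one queue enabled; half the sub-banks are accessed,
            # which reduces row-activation power.
            self.q_lo.enabled, self.q_hi.enabled = True, False
            self.active_subbanks = NUM_SUBBANKS // 2
        else:
            raise ValueError("unsupported data width")
```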
Register 120 also controls command multiplexer 105 to determine whether commands are directed to decoder 125 via command interface CA from command port CA0 or command port CA1. As detailed below, the provision for a selection between multiple command ports supports DPP connections with minimal added circuit complexity. Memory systems populated with memory devices 100 thus provide the performance of point-to-point connections without sacrificing the flexibility of multi-drop bus architectures.
Register 120 can be loaded at start-up to store a value indicative of the data width and the selected command port. Register 120 can be implemented using a programmable configuration register or other volatile circuitry, or by non-volatile circuitry such as one-time-programmable elements (e.g., fuse-controlled logic), floating-gate devices, or any other non-volatile storage. In other embodiments the memory width and command port can be selected differently, such as by the use of a control pin or other types of configuration interfaces.
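One way to picture the stored value is as a small field-encoded word decoded at start-up. The field layout below is purely hypothetical, since the text does not specify an encoding; it only illustrates that a single register value can select both the data width and the command port.

```python
# Hypothetical encoding for register 120; the bit assignments are assumptions
# made for illustration and are not specified in the description above.
from dataclasses import dataclass


@dataclass
class ModeRegister:
    width_bits: int    # 4 or 8
    command_port: str  # "CA0" or "CA1"


def decode_mode_value(value: int) -> ModeRegister:
    """Decode a two-bit configuration value loaded at start-up.

    bit 0: 0 -> x4 data width, 1 -> x8 data width          (assumed)
    bit 1: 0 -> commands from CA0, 1 -> commands from CA1  (assumed)
    """
    width = 8 if value & 0b01 else 4
    port = "CA1" if value & 0b10 else "CA0"
    return ModeRegister(width_bits=width, command_port=port)


# Example: a device programmed as x4 with commands taken from CA1.
print(decode_mode_value(0b10))  # ModeRegister(width_bits=4, command_port='CA1')
```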
The mode register 120 in each memory 100 is programmed such that queues 145 and 150 (
PU 210 has four command ports, each of which directs commands to two of the eight available memory devices 100. Registers 120 in four of the eight memory devices 100 are programmed such that their respective command multiplexers 105 select command port CA1; the remaining four memory devices 100 are programmed to receive commands via port CA0. Programming can be accomplished using a mode-register command directed to a default command address on each memory device, with a mode-register value for each memory device conveyed on a subset of the module data ports. Each memory device could thus configure itself responsive to an appropriate register value and thereafter communicate commands and data on the selected connection resources. In other embodiments the command and data signal paths can be selected using other means, such as by programming fusible, flashable, or electrically programmable registers, or by selecting appropriate jumper settings.
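The sequence can be sketched as follows, under the assumption that a single mode-register-set command is broadcast to the default command address while each device samples its own register value from the data-port slice it is wired to; the class, method names, and bit layout are illustrative, not taken from the description.

```python
# Sketch of per-device programming over shared data ports; all names and the
# two-bit register layout are assumptions made for illustration.

class MemoryDevice:
    def __init__(self, index):
        self.index = index
        self.command_port = "CA0"
        self.width_bits = 4

    def load_mode_register(self, value):
        # bit 1 selects the command port; bit 0 selects x4/x8 width (assumed).
        self.command_port = "CA1" if value & 0b10 else "CA0"
        self.width_bits = 8 if value & 0b01 else 4


def broadcast_mode_register_set(devices, data_bus_nibbles):
    """One broadcast command; each device reads its own nibble of the data bus."""
    for dev, nibble in zip(devices, data_bus_nibbles):
        dev.load_mode_register(nibble)


# Eight x4 devices: the first four take commands on CA1, the rest on CA0.
devices = [MemoryDevice(i) for i in range(8)]
broadcast_mode_register_set(devices, [0b10] * 4 + [0b00] * 4)
```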
In the example of
The provision of multiple command interfaces MCA[3:0] allows PU 210 to independently control fractions of memory devices 100, sets of two in this example. This technique, sometimes referred to as “threading,” allows PU 210 to divide memory interconnect 215A/B into four sub-channels that convey relatively narrow memory “threads.” Support for memory threading allows PU 210 to reduce access granularity where appropriate, and consequently to reduce power consumption for memory accesses narrower than 64 bits.
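A back-of-the-envelope illustration of why threading reduces access granularity, assuming a burst length of 16 (a value chosen only for the example, not taken from the description above):

```python
# Access granularity with and without threading; the burst length is assumed.
BURST_LENGTH = 16


def access_granularity_bytes(channel_width_bits, threads):
    """Minimum bytes moved by one column access on a single thread."""
    thread_width_bits = channel_width_bits // threads
    return thread_width_bits * BURST_LENGTH // 8


print(access_granularity_bytes(64, 1))  # one 64-bit channel   -> 128 bytes
print(access_granularity_bytes(64, 4))  # four 16-bit threads  ->  32 bytes
```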
Absent shorting module 225, module command interfaces MCA0 and MCA1 do not connect to module 205A; rather, traces on module 205B connect each of command interfaces MCA0 and MCA1 to a respective half of the memory devices 100 on module 205B via their device command ports CA0. Every memory device 100 on both modules 205A and 205B is configured to be four bits wide to communicate four-bit-wide data via respective module data-bus lines responsive to commands on its command port CA0. The two half-width modules 205A and 205B together provide twice the storage space of one module.
As in the single-module example of
While the foregoing embodiments support either two or four threads per module, other embodiments can support more or different combinations. With reference to the single- or dual-module configurations of
Each memory device 505 may be as detailed in connection
Memory devices 505 are width-configurable in this embodiment. In other embodiments buffer 510 can selectively combine fixed- or variable-width memory devices to support width configurability. For example, two four-bit-wide memory die can be controlled separately to communicate four-bit-wide data, or together to communicate eight-bit-wide data.
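The combining alternative can be sketched as a buffer that presents either a four-bit or an eight-bit interface in front of two fixed four-bit dies; the classes and the nibble-splitting scheme below are assumptions for illustration.

```python
# Sketch of a buffer that combines two fixed x4 dies into a configurable
# x4/x8 interface; names and the data-splitting scheme are assumptions.

class FixedX4Die:
    def __init__(self, name):
        self.name = name
        self.cells = {}

    def write(self, addr, nibble):
        self.cells[addr] = nibble & 0xF

    def read(self, addr):
        return self.cells.get(addr, 0)


class WidthConfigurableBuffer:
    def __init__(self, width_bits=8):
        self.die_lo = FixedX4Die("lo")
        self.die_hi = FixedX4Die("hi")
        self.width_bits = width_bits

    def write(self, addr, data):
        if self.width_bits == 8:
            # Wide mode: split an eight-bit word across both dies.
            self.die_lo.write(addr, data & 0xF)
            self.die_hi.write(addr, (data >> 4) & 0xF)
        else:
            # Narrow mode: only one die participates in the access.
            self.die_lo.write(addr, data & 0xF)

    def read(self, addr):
        if self.width_bits == 8:
            return self.die_lo.read(addr) | (self.die_hi.read(addr) << 4)
        return self.die_lo.read(addr)
```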
While the present invention has been described in connection with specific embodiments, after reading this disclosure, variations of these embodiments will be apparent to those of ordinary skill in the art. Moreover, some components are shown directly connected to one another while others are shown connected via intermediate components. In each instance the method of interconnection, or “coupling,” establishes some desired electrical communication between two or more circuit nodes, or terminals. Such coupling may often be accomplished using a number of circuit configurations, as will be understood by those of skill in the art. Therefore, the spirit and scope of the appended claims should not be limited to the foregoing description. Only those claims specifically reciting “means for” or “step for” should be construed in the manner required under the sixth paragraph of 35 U.S.C. § 112.