Embodiments of the present invention are in the field of dynamic random access memory as, for example, used for data memory for processors.
For a more complete understanding of the present invention, and the advantages thereof, reference is now made to the following descriptions taken in conjunction with the accompanying drawings, in which:
Among the embodiments described below, some comprise a DRAM chip (DRAM = Dynamic Random Access Memory) with a data I/O interface (I/O = Input/Output) having an access width equal to a page size. A page size may, for example, be defined as the number of bits which are activated by a row command. According to embodiments described below, the page size may be identical to a prefetch size, the prefetch size being the number of bits accessed with one read or write command. In other words, the prefetch size equals the I/O width times the burst length of the DRAM.
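The size relations stated above can be sketched numerically. The concrete figures below (an I/O width of 128 bits and a burst length of 8) are illustrative assumptions of ours, not values mandated by the embodiments:

```python
# Illustrative sketch of the size relations described above; the
# concrete numbers (I/O width, burst length) are assumptions.

IO_WIDTH_BITS = 128   # bits transferred per beat on the data I/O interface
BURST_LENGTH = 8      # beats per read or write command

# Prefetch size: the number of bits accessed with one read or write command.
prefetch_bits = IO_WIDTH_BITS * BURST_LENGTH

# In the described embodiments, the page size (bits activated by one
# row command) equals the prefetch size.
page_size_bits = prefetch_bits

print(prefetch_bits)        # 1024 bits
print(page_size_bits // 8)  # 128 bytes
```

With these assumed figures, one row command activates exactly the 1024 bits (128 bytes) that one read or write command then transfers, which falls inside the 64-byte to 1K-byte page-size range mentioned below.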
According to some of the following embodiments, a row command includes the information as to whether a read or write access is desired while the provision of a separate second or column command (including column address) is omitted. Rather, according to embodiments described below, there are no column locations to be selected by column addresses. According to the latter embodiments, all of the columns of a given word line may be used for an access operation. After activation of a word line and sensing of corresponding memory cells, the DRAM may automatically and in a self-timed manner start a read or write operation.
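A minimal behavioral sketch of this command scheme may clarify it. The class and method names below are our own illustrative choices, not terminology from the embodiments; the sketch only models the externally visible behavior, namely that a single combined command carries the row address together with the read/write information and that no column address exists:

```python
# Behavioral sketch (names and structure are illustrative assumptions)
# of a DRAM whose page size equals its prefetch size: one combined
# command carries the row address plus the read/write information,
# and there is no column address.

class PageDram:
    def __init__(self, num_rows, page_size_bits):
        self.page_size_bits = page_size_bits
        # One full page per row; all columns of a word line are used.
        self.rows = [[0] * page_size_bits for _ in range(num_rows)]

    def activate_and_read(self, row_addr):
        # Activating the word line senses the whole page, which is then
        # driven out in a self-timed manner -- no column command needed.
        return list(self.rows[row_addr])

    def activate_and_write(self, row_addr, page_bits):
        # The write likewise addresses the entire page of the row.
        assert len(page_bits) == self.page_size_bits
        self.rows[row_addr] = list(page_bits)

dram = PageDram(num_rows=4, page_size_bits=8)
dram.activate_and_write(2, [1, 0, 1, 1, 0, 0, 1, 0])
print(dram.activate_and_read(2))  # [1, 0, 1, 1, 0, 0, 1, 0]
```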
Each row address activates one individual word line, as exemplarily indicated by the arrow pointing from the left to the right in
According to an embodiment, the DRAM chip 100 can have a reduced page size and an increased prefetch size relative to a DRAM chip using column addressing, so as to yield a page size equal to the prefetch size, with the result that no column selection is carried out in the DRAM chip 100. The I/O interface of the DRAM chip 100, via which the data accessed by one read or write command leave or enter the chip 100, is indicated by 103.
The DRAM chip 100 may comprise a memory organized in rows, the rows being addressable by row addresses. Moreover, the DRAM chip 100 may comprise a row address decoder 104 being responsive to a row address to activate an associated row, and sense amplifiers 106 being assigned or connectable to the associated rows, so as to sense data of a page size upon activation of the row currently indicated by the row address. In particular, each word line 102 may be connected to several memory cells 108 so as to connect them to a respective bit line 110 upon the word line being activated by a respective row address. The bit lines, in turn, may be connectable to sense amplifiers 106.
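The row-address decoding step described above can be illustrated as a one-hot selection: a row address activates exactly one word line, connecting all memory cells of that row to their bit lines for sensing. The function below is a hypothetical sketch of ours, not part of the embodiments:

```python
# Hypothetical sketch of the row address decoder (cf. decoder 104):
# a row address selects exactly one word line (one-hot), and all
# memory cells on that word line are connected to their bit lines.

def decode_row(row_addr, num_rows):
    # Produce a one-hot word-line vector: a single line per address.
    assert 0 <= row_addr < num_rows
    return [1 if i == row_addr else 0 for i in range(num_rows)]

word_lines = decode_row(3, num_rows=8)
print(word_lines)       # [0, 0, 0, 1, 0, 0, 0, 0]
print(sum(word_lines))  # exactly one word line is active
```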
Further, the DRAM chip 100 may comprise memory organized in rows, wherein a row size equals a page size. Furthermore, the memory may comprise memory cells 108 and each row address may activate a number of memory cells 108 equal to the page size. In other embodiments, the DRAM chip 100 may further comprise an I/O interface 103 being adapted for receiving a combined activation and read command or a combined activation and write command so that the information as to whether a read or a write is to be processed by the DRAM chip 100 is provided to the DRAM chip along with the activation command.
In embodiments, a DRAM chip 100 may be implemented in a housing 112, wherein the housing 112 may comprise as many pins 114 as there are bits in a page of binary data. The DRAM chip 100 may even comprise more pins than this number, such as an additional chip selection pin. The DRAM chip 100 can be adapted for providing data on the I/O interface or for storing data from the I/O interface based on a combined activation and read command or a combined activation and write command and a row address.
Embodiments may enable a reduction of the required command bandwidth due to the combined commands. Moreover, embodiments may enable a reduced power consumption due to the elimination of the column address and the associated decoding paths, together with a reduced latency due to the elimination of margins possibly required for safe timing of consecutive column operations. Moreover, embodiments may be used as a third level cache in combination with first and second level caches comprising SRAMs.
Embodiments may utilize a prefetch size which is sufficiently large, i.e., which equals the page size. The page size may, for example, be between 64 bytes and 1K bytes.
Embodiments which use a DRAM as a third level cache can utilize a prefetch size in the range of, for example, 64 bytes to 1K bytes.
With respect to the conventional command scheme, it can be seen at the top of
At the bottom of
Embodiments may enable applications requiring access to a relatively high number of bits at a time as, for example, 1K bits, where conventional DRAM may not be the best compromise anymore. Embodiments may enable usage of DRAM as third level cache for CPUs (CPU=Central Processing Unit), where the embodiments may provide a wide interface with low latency and without multiplexing of row and column addresses.
Embodiments may simplify access; that is, execution of consecutive commands for read and write accesses may not be necessary. Embodiments may utilize a combined row command, i.e., an activate and read or an activate and write command including the transfer of the row address, without any necessity for a column address. Embodiments may, therefore, save valuable command bandwidth, as only row addresses are transferred. Moreover, embodiments may be more robust under different PVT (PVT = Process Voltage Temperature) conditions, as no safe timing is required between row and column addresses.
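The command bandwidth saving can be made concrete with a back-of-the-envelope comparison. The access count below is an arbitrary illustrative assumption; the point is only the two-to-one ratio of commands per access:

```python
# Illustrative comparison of command bus usage: a conventional access
# needs an activate (row) command followed by a separate read/write
# (column) command, whereas the combined command scheme needs only
# one command per access.

ACCESSES = 1000  # assumed number of memory accesses (illustrative)

conventional_commands = ACCESSES * 2  # ACT + RD/WR per access
combined_commands = ACCESSES * 1      # single activate-and-read/write

saved = conventional_commands - combined_commands
print(saved)  # 1000 commands saved, i.e. half the command bandwidth
```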
Depending on certain implementation requirements of the inventive methods, the methods presented above can be implemented in hardware or in software. The implementation can be performed using a digital storage medium, for example, a disc, DVD or CD having electronically readable control signals stored thereon, which cooperate with a programmable computer system such that the inventive methods are performed. Generally, the methods presented above may, therefore, be implemented as a computer program product having a program code stored on a machine-readable carrier, the program code being operative for performing the respective methods when the computer program product runs on a computer. In other words, the above methods may be implemented as a computer program having a program code for performing at least one of the respective methods when the computer program runs on a computer.
While the above invention has been described in terms of several preferred embodiments, there are alterations, permutations, and equivalents which fall within the scope of this invention. It should also be noted that there are many alternative ways of implementing the methods and compositions of the present invention. It is therefore intended that the following appended claims be interpreted as including all such alterations, permutations, and equivalents as fall within the true spirit and scope of the present invention.