The present invention relates to memory devices and in particular to a memory device architecture and method for performing high-speed bitline pre-charging operations.
The performance of a memory device is judged in large part by how fast data can be read from or written to the memory. Reading and writing operations themselves involve many processes, one of which is bitline pre-charging, whereby the common bitline coupled to a desired memory cell is charged to a predefined voltage in preparation for a data reading or writing operation. It is therefore important that bitline pre-charging operations be performed quickly in order to expedite data reading and writing operations.
Complementary bitlines 112 and 114 are pre-charged to a predefined voltage by pre-charge circuits 122 and 124, respectively. Once the bitlines are pre-charged, an activation voltage is supplied to the wordline WL of the selected memory cell, thereby activating that cell for a reading or writing operation.
As memory devices 100 increase in density and capacity, the number of memory cells disposed along bitlines 112 and 114 increases, and the bitlines accordingly grow longer to accommodate the larger number of memory cells. As the length of bitlines 112 and 114 increases, a delay effect arises between the first and last memory cells MC1 and MCn, the magnitude of the delay being a function of the length of the bitlines 112 and 114 and of the lines' loading conditions.
The effects of the series resistances and shunt capacitances combine such that bitline 114 develops a delay between MC1 and MCn, the delay being given by the equation:
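The equation itself is not reproduced in the available text. As an illustrative sketch only, a commonly used approximation for the delay of a uniformly distributed R-C line is the Elmore delay, written here in terms of the total bitline series resistance and shunt capacitance (the symbols R_BL and C_BL are assumed for illustration and are not reference characters of the drawings):

    % Illustrative approximation only: Elmore delay of a uniformly distributed R-C line
    t_{delay} \approx \tfrac{1}{2} \, R_{BL} \, C_{BL}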
As the memory's bitlines grow longer to accommodate a greater number of memory cells, this delay effect increases. A substantial time delay therefore arises between the time at which the pre-charging circuit 124 is activated and the time at which the pre-charge voltage develops at the desired memory cell, the delay being greatest for the most distally-located memory cell MC1. This delay must be factored into the total timing budget, and the longest delay typically sets the duration of the bitline pre-charge operation, since the pre-charge operation can be guaranteed to complete only if the worst-case delay is taken into account. As a result, the longest bitline pre-charge duration limits the overall speed of the memory device, especially in larger memory arrays.
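As a rough illustration, under the assumption that both the total series resistance and the total shunt capacitance of a bitline scale approximately linearly with its length L (i.e., R_{BL} \propto L and C_{BL} \propto L), the delay approximated above scales as

    t_{delay} \propto L^{2}

so that doubling the number of memory cells along a bitline, and hence roughly doubling its length, roughly quadruples the worst-case pre-charge delay.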
What is therefore needed is a new memory device architecture and method for providing high-speed bitline pre-charging.
The present invention provides an improved memory device architecture and method for high-speed bitline pre-charging operations, overcoming the delay effects of the longer bitlines employed in high-density memories. Faster bitline pre-charging enables faster memory access and faster programming operations.
In one representative embodiment of the invention, a memory device is presented, which includes a plurality of memory cells coupled to a bitline, and two or more pre-charging circuits coupled to the bitline. Each of the pre-charging circuits is operable to supply a pre-charge voltage to the bitline, thereby reducing the effective R-C time constant of the bitline compared with the conventional approach in which only a single pre-charging circuit is employed.
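As a hypothetical illustration only, and not a description of any particular embodiment, the following sketch models the bitline as a discrete R-C ladder and compares the worst-case pre-charge settling time obtained with a single pre-charging circuit at one end of the line against two circuits placed at opposite ends. All names and numerical values (N, R_TOTAL, C_TOTAL, V_PC, settle_time) are assumptions chosen for illustration.

    # Hypothetical sketch: discrete R-C ladder model of a bitline, comparing the
    # worst-case pre-charge settling time for one driver at one end versus two
    # drivers at opposite ends. All names and values are assumed for illustration.

    N = 50                   # number of ladder segments along the bitline
    R_TOTAL = 10e3           # total bitline series resistance, ohms (assumed)
    C_TOTAL = 2e-12          # total bitline shunt capacitance, farads (assumed)
    V_PC = 1.0               # pre-charge target voltage, volts (assumed)
    r = R_TOTAL / N          # per-segment series resistance
    c = C_TOTAL / N          # per-segment shunt capacitance
    dt = r * c / 10.0        # time step well below the per-segment time constant

    def settle_time(driver_nodes, threshold=0.9):
        """Time for the slowest bitline node to reach threshold * V_PC, with
        ideal pre-charge drivers holding the listed nodes at V_PC."""
        v = [0.0] * (N + 1)
        t = 0.0
        while min(v) < threshold * V_PC:
            for d in driver_nodes:      # ideal drivers clamp their nodes to V_PC
                v[d] = V_PC
            nxt = v[:]
            for i in range(N + 1):
                if i in driver_nodes:
                    continue
                i_in = (v[i - 1] - v[i]) / r if i > 0 else 0.0
                i_out = (v[i] - v[i + 1]) / r if i < N else 0.0
                nxt[i] = v[i] + dt * (i_in - i_out) / c   # forward-Euler update
            v = nxt
            t += dt
        return t

    t_one = settle_time({0})       # single pre-charging circuit at one end
    t_two = settle_time({0, N})    # two circuits, maximally spaced (both ends)
    print("one driver: %.2f ns, two drivers: %.2f ns" % (t_one * 1e9, t_two * 1e9))

Under these assumptions, placing two drivers at opposite ends of the line reduces the worst-case settling time to roughly one quarter of the single-driver value, consistent with the reduced effective R-C time constant described above.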
These and other features of the invention will be better understood in view of the following drawings and detailed description.
For a more complete understanding of the present invention, and the advantages thereof, reference is now made to the following descriptions taken in conjunction with the accompanying drawings, in which:
For clarity, previously defined features retain their reference numerals in subsequent drawings.
In the exemplary embodiment of
In one embodiment as shown, each of the pre-charging circuits 224 includes a PMOS transistor having a source terminal coupled to a pre-charging voltage VPC, which is to be applied to the bitline 214, a drain terminal coupled to the bitline 214, and a gate terminal coupled to receive a pre-charge control signal Cntl. The pre-charge control signal Cntl may be supplied via a signal divider, or similar structure, which provides substantially the same delay to each of the gate terminals, such that all of the pre-charge circuits 224 are activated substantially concurrently. The memory cells MC1-MCn may comprise non-volatile or volatile structures of various technologies, for example, EEPROM, FLASH, magnetic random access memory (MRAM), and phase change memory (PCM), as well as other memory cells that employ line pre-charging.
Further illustrated in
At 304, the plurality of pre-charging circuits are activated substantially concurrently to apply the pre-charge voltage to the bitline. This process may be performed by supplying a common pre-charge control signal Cntl to the input of a power divider (having two or more outputs), the power divider imparting substantially the same signal delay to all of its output signals. In this manner, all of the pre-charging circuits will receive (a divided portion of) the Cntl signal substantially concurrently, resulting in the concurrent activation of the pre-charging circuits.
Optionally, the method may include coupling one or more additional pre-charging circuits to the memory device's bitline (306) and repositioning the pre-charging circuits along the bitline such that all of the pre-charging circuits are maximally-spaced apart (308). At 310, the loading of each pre-charging circuit is re-scaled such that the total loading of all of the pre-charging circuits remains substantially the same as the previous loading. For example, when a new pre-charging circuit 2243 (not shown) is added to the bitline, the gate periphery of each pre-charging circuit is re-scaled so as to provide one third of the total gate periphery allocated to the bitline. In this manner, the bitline's total loading is maintained.
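As a further hypothetical sketch, the re-scaling and re-positioning described above may be expressed as follows; the function name, the unit bitline length, and the placement of circuits at both line ends with equal spacing are assumptions (one reading of "maximally-spaced") rather than limitations of the method.

    # Hypothetical sketch: re-scale the per-circuit gate periphery and re-space the
    # pre-charging circuits when circuits are added, so that the bitline's total
    # loading remains unchanged. Names and the unit bitline length are assumed.

    def rescale_and_place(total_gate_periphery, n_circuits, bitline_length=1.0):
        """Return (per-circuit gate periphery, positions along the bitline) for
        n_circuits maximally spaced pre-charging circuits."""
        periphery_each = total_gate_periphery / n_circuits
        if n_circuits == 1:
            positions = [0.0]
        else:
            step = bitline_length / (n_circuits - 1)
            positions = [i * step for i in range(n_circuits)]   # ends included
        return periphery_each, positions

    # Example: adding a third circuit re-scales each circuit to one third of the
    # total gate periphery while keeping the bitline's total loading unchanged.
    print(rescale_and_place(30.0, 3))   # -> (10.0, [0.0, 0.5, 1.0])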
As readily appreciated by those skilled in the art, the described processes may be implemented in hardware, software, firmware, or a combination thereof, as appropriate. In addition, some or all of the described processes may be implemented as computer readable instruction code resident on a computer readable medium (removable disk, volatile or non-volatile memory, embedded processors, etc.), the instruction code operable to program a computer or other such programmable device to carry out the intended functions.
The foregoing description has been presented for purposes of illustration and description. It is not intended to be exhaustive or to limit the invention to the precise form disclosed, and obviously many modifications and variations are possible in light of the disclosed teaching. The described embodiments were chosen in order to best explain the principles of the invention and its practical application to thereby enable others skilled in the art to best utilize the invention in various embodiments and with various modifications as are suited to the particular use contemplated. It is intended that the scope of the invention be defined by the claims appended hereto.