The present invention is related generally to the field of semiconductor memory devices, and more particularly, to an interface circuit and method for a pseudo-static memory device.
A class of memory devices called pseudo-static memory devices are functionally equivalent to static random access memory (SRAM) devices, but include internal refresh circuitry so that the devices appear to the user as not needing refresh operations. In general, these memory devices can be operated in the same manner as a conventional SRAM, but have a memory core based on conventional dynamic random access memory (DRAM) cells. As is well known in the art, a major distinction between the two types of memory cells is that DRAM memory cells must be periodically refreshed to maintain the stored data, whereas SRAM memory cells need not be.
There are advantages to employing a conventional DRAM memory core over a conventional SRAM memory core in a memory device. For example, memory density for a DRAM memory array can be much greater than that for an SRAM memory array. In the case of a DRAM memory cell, only one transfer gate and one storage device, typically a capacitor, are necessary to store one bit of data. Consequently, each DRAM memory cell is considerably smaller than a conventional SRAM memory cell, which may have as many as six transistors per memory cell. The simple structure and smaller size of the DRAM memory cell translate into a less complicated manufacturing process and, consequently, lower fabrication costs when compared to the SRAM memory cell.
In spite of the aforementioned advantages provided by a DRAM memory core, there are issues related to the design and operation of a conventional DRAM memory array that make its application undesirable. For example, as previously mentioned, DRAM memory cells need to be refreshed periodically or the data stored by the capacitors will be lost. As a result, additional circuitry must be included in the memory device to support the refresh operation. It is also generally the case that access times for DRAM memory cores are greater than the access times for SRAM memory cores.
Additionally, a memory access operation for a conventional DRAM memory core is such that once the operation has begun, the entire access cycle must be completed or the data will be lost. That is, a DRAM access cycle begins with a row of memory cells in the array being activated, and the respective charge states of the memory cells of the activated row being sensed and amplified. A column including a particular memory cell of the activated row is then selected by coupling the column to an input/output line. At this time, data can be read from or written to the particular memory cell. Following the read or write operation, the row of memory cells is deactivated, thus storing the charge states in the respective capacitors of the memory cells. As is generally known, the process of sensing the charge state of the memory cells is destructive, and unless the access cycle is completed, with the charge states being amplified and the row being deactivated, the data stored by the memory cells of the activated row will be lost. In contrast, the sense operation for a conventional asynchronous SRAM memory device is non-destructive, and an SRAM does not have the same type of access cycle as a conventional DRAM memory device. Consequently, random memory addresses may be asserted to the SRAM memory device without timing restriction, and data is always expected to be returned within a certain time thereafter. This time is typically referred to as the address access time tAA.
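To make the consequence of an interrupted cycle concrete, the following is a minimal behavioral sketch, in Python, of the activate/sense/restore sequence described above; the class and function names (DramRow, access) are illustrative assumptions and do not model the actual array circuitry.

```python
# Minimal behavioral model of a destructive-read DRAM access cycle.
# All names (DramRow, access, etc.) are illustrative, not from the patent.

class DramRow:
    def __init__(self, bits):
        self.cells = list(bits)      # charge states held by the cell capacitors

    def activate_and_sense(self):
        # Sensing transfers the cell charge onto the digit lines, which
        # disturbs the stored state until the sense amplifiers restore it.
        latched = self.cells[:]      # sense amplifiers latch the data
        self.cells = [None] * len(self.cells)   # cell charge is disturbed
        return latched

    def deactivate(self, latched):
        # Closing the row writes the amplified data back into the capacitors.
        self.cells = latched

def access(row, column, write_data=None):
    """Complete one access cycle: activate, read/write the column, restore."""
    latched = row.activate_and_sense()
    if write_data is not None:
        latched[column] = write_data
    data = latched[column]
    row.deactivate(latched)          # if this step is skipped, data is lost
    return data

row = DramRow([1, 0, 1, 1])
print(access(row, 2))                # read column 2 -> 1, row restored
print(access(row, 0, write_data=0))  # write 0 to column 0, then read it back
```

If the call to deactivate() were omitted, for instance because a new access began mid-cycle, the row would be left with its charge states destroyed, which is the failure the interface circuit described below is intended to avoid.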
Therefore, it is desirable to have a circuit that can accommodate the asynchronous address transitions of an SRAM memory interface and transform them into the scheduled events of a conventional DRAM memory access operation, in order to provide an asynchronous pseudo-static memory device that employs a conventional DRAM memory core.
One aspect of the present invention is directed to a method of accessing memory cells of an array of memory cells. The method includes initiating access to the array of memory cells a time period after receiving a memory address, and accessing the memory cells corresponding to the memory address unless a new memory address is received before the time period elapses. In response to receiving the new memory address, access to the memory cells corresponding to the memory address is not initiated, and access to the memory cells corresponding to the new memory address is initiated the time period after receiving the new memory address. The time period is sufficient to allow access to the array of memory cells for a previous memory operation to complete. Another aspect of the invention is directed to a pseudo-static memory device that includes an address interface circuit configured to initiate a memory operation a minimum time following receipt of a last received memory address. The address interface circuit aborts a previous memory operation before it is initiated in the event the last received memory address is received before the minimum time elapses.
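The claimed timing behavior can be summarized with a short sketch; the function schedule_accesses, the event-list representation, and the example times and addresses are assumptions made purely for illustration and are not part of the claimed circuit.

```python
# Behavioral sketch of the claimed method: an access is initiated only if a
# time period td elapses after an address is received without a newer address
# arriving; a pending access is abandoned if a new address arrives sooner.

def schedule_accesses(address_events, td):
    """address_events: list of (time, address) pairs, sorted by time.
    Returns the (time, address) pairs at which an access is initiated."""
    initiated = []
    for i, (t, addr) in enumerate(address_events):
        next_t = address_events[i + 1][0] if i + 1 < len(address_events) else None
        if next_t is None or next_t - t >= td:
            # No newer address arrived within td: initiate the access at t + td.
            initiated.append((t + td, addr))
        # Otherwise the pending access is abandoned before it is initiated.
    return initiated

# Three addresses arrive in quick succession, then a fourth much later.
events = [(0, 10), (3, 11), (5, 12), (40, 13)]
print(schedule_accesses(events, td=10))
# -> [(15, 12), (50, 13)]: only the last address of each burst is accessed.
```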
Embodiments of the present invention are directed to an asynchronous interface circuit that converts randomly scheduled address transitions, such as those applied to an SRAM device, into scheduled address events which can be asserted to a conventional DRAM memory core in an orderly fashion. Certain details are set forth below to provide a sufficient understanding of the invention. However, it will be clear to one skilled in the art that the invention may be practiced without these particular details. In other instances, well-known circuits, control signals, and timing protocols have not been shown in detail in order to avoid unnecessarily obscuring the invention.
Illustrated in FIG. 1 is an asynchronous interface circuit 100 according to an embodiment of the present invention.
The asynchronous interface circuit 100 can be used with a conventional DRAM memory core to provide asynchronous pseudo-static SRAM operation. As previously mentioned, a conventional DRAM memory core is not well suited to the asynchronous nature of a conventional SRAM address interface, because random addresses can be asserted without timing restriction. In the case of a read operation, output data is provided a time period after the address is asserted, typically referred to as the address access time tAA. In the event the timing specifications are violated and the address changes before the output data is provided, data stored by the SRAM memory core will not be lost, because of the manner in which data is stored by conventional SRAM memory cells. In contrast, in a conventional DRAM memory core, once memory access of a memory location has begun, the access operation must be completed or data may be lost, since a DRAM read is intrinsically destructive. As will be explained in more detail below, the asynchronous interface circuit 100 can take randomly scheduled address transitions, such as those allowed for conventional SRAM devices, and convert them into scheduled events which can be asserted to a DRAM memory core in an orderly fashion.
With reference to FIG. 2, the delay circuit 120 of the asynchronous interface circuit 100 includes a chain of series-coupled delay stages 140, a NOR gate 150, an inverter 152, and a pulse generator 154.
In operation, the delay circuit 120 generates a PULSE_OUT pulse a time delay td after the falling edge of the most recent ATD_IN pulse. The time delay td is approximately the sum of the delays of the delay stages 140. In an effort to simplify explanation of the delay circuit 120, gate delays have been ignored; however, it will be appreciated that the gate delays will add some time to the time delay td. When the delay circuit 120 receives an ATD_IN pulse, the output of the inverter 152 goes HIGH, and the delay output of each of the delay stages 140 goes HIGH tdd after the rising edge of the ATD_IN pulse. On the falling edge of the ATD_IN pulse, the delay circuit begins counting out the time delay td. That is, for the first delay stage 140 in the chain, its delay output will go LOW tdd after the falling edge of the ATD_IN pulse. The delay output of the second delay stage 140 will go LOW tdd after the falling edge of the delay output of the first delay stage 140. Thus, the falling edge of the ATD_IN pulse ripples through the chain of delay stages 140 until it reaches the input of the NOR gate 150. Note that during this time, the output of the inverter 152 has remained HIGH. Not until the delay output of the last delay stage 140 goes LOW, which occurs td after the falling edge of the ATD_IN signal, will the output of the inverter 152 go LOW. When this occurs, the pulse generator 154 generates a PULSE_OUT pulse that can be used to start an access operation to a DRAM memory core.
In the case where a second ATD_IN pulse is received by the delay circuit 120 before the td timing count has elapsed, the delay stages 140 of the timing chain are essentially reset because the delay output of each of the delay stages 140 will go HIGH again in response to the new ATD_IN pulse. The td countdown will begin again in response to the falling edge of the new ATD_IN pulse, as previously described. In effect, the pulse generator 154 will not generate a PULSE_OUT pulse until td after the falling edge of the last ATD_IN pulse provided to the delay circuit 120, and consequently, no access operation will be initiated until that time.
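A behavioral sketch of the timing chain follows; it models each delay stage's output as going HIGH while either ATD_IN or the preceding stage's output is HIGH, delayed by tdd, which reproduces the described set-and-ripple behavior and the reset by a new ATD_IN pulse. The sampling scheme, function name, and pulse-detection condition are assumptions for illustration only and do not reproduce the gate-level implementation of the figures.

```python
# Behavioral model of the timing chain described above.  Each delay stage's
# output follows (ATD_IN OR previous stage output), delayed by tdd samples,
# so every stage goes HIGH shortly after an ATD_IN pulse and the falling edge
# then ripples down the chain.  PULSE_OUT is modeled as firing when ATD_IN and
# the last stage output are both LOW after having been HIGH.  Gate delays are
# ignored; names and the sampling scheme are illustrative assumptions.

def simulate_chain(atd_in, n_stages, tdd):
    """atd_in: 0/1 samples of ATD_IN; tdd: per-stage delay in samples.
    Returns the 0/1 samples of PULSE_OUT."""
    steps = len(atd_in)
    outs = [[0] * steps for _ in range(n_stages)]   # delay output of each stage
    pulse = [0] * steps
    for t in range(steps):
        if t >= tdd:
            for k in range(n_stages):
                drive = atd_in[t - tdd]
                if k > 0:
                    drive = drive or outs[k - 1][t - tdd]
                outs[k][t] = 1 if drive else 0
        chain_idle = atd_in[t] == 0 and outs[-1][t] == 0
        was_busy = t > 0 and (atd_in[t - 1] == 1 or outs[-1][t - 1] == 1)
        pulse[t] = 1 if (chain_idle and was_busy) else 0
    return pulse

# Two closely spaced ATD_IN pulses: only the second produces a PULSE_OUT,
# td = n_stages * tdd = 6 samples after its falling edge (sample 5).
atd_in = [1, 1, 0, 1, 1] + [0] * 11
print(simulate_chain(atd_in, n_stages=3, tdd=2).index(1))   # -> 11
```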
Thus, it can be seen that an unrestricted address transition input pattern can be converted by the asynchronous interface circuit 100 (FIG. 1) into a pattern of scheduled events that can be asserted to a conventional DRAM memory core in an orderly fashion. Illustrated in FIG. 4 is a pulse circuit 200 that can be used to provide pulses, such as the ATD_IN pulses, to the delay circuit 120.
The pulse circuit 200 includes an active HIGH S-R latch 202 formed from two cross-coupled NOR gates. The latch 202 has a first input coupled to receive the IN signal and a second input coupled to the output of a two-input NOR gate 204. The output of the latch 202 is coupled to an inverter 206, from which the OUT signal is provided. The output of the latch 202 is also coupled to a first input of the NOR gate 204 through a delay device 208 having a time delay of tw. A second input of the NOR gate 204 is coupled to receive the IN signal. As a result, the pulse circuit 200 will generate an OUT signal having a pulse width of at least tw from a pulsed IN signal, even if the pulse width of the IN signal is less than tw, and no matter how many times the IN signal transitions during the time tw.
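The net input/output behavior of the pulse circuit can be sketched as follows; only the stretched-pulse behavior is modeled, not the latch 202, NOR gate 204, or delay device 208 themselves, and the discrete sampling and function name are illustrative assumptions.

```python
# Behavioral sketch of the pulse-stretching behavior described above: the OUT
# pulse begins with the IN pulse and lasts at least tw, no matter how short
# the IN pulse is or how often IN toggles during that window.

def stretch(in_samples, tw):
    """in_samples: 0/1 samples of IN; returns 0/1 samples of OUT."""
    out = [0] * len(in_samples)
    set_time = None                            # sample at which the latch was set
    for t, level in enumerate(in_samples):
        if set_time is not None and not level and t - set_time >= tw:
            set_time = None                    # reset: IN is LOW and tw has elapsed
        if level and set_time is None:
            set_time = t                       # set: OUT goes HIGH with IN
        out[t] = 1 if set_time is not None else 0
    return out

print(stretch([0, 1, 0, 1, 0, 0, 0, 0], tw=4))
# -> [0, 1, 1, 1, 1, 0, 0, 0]: a single OUT pulse at least tw samples wide,
#    even though IN toggled twice within the tw window.
```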
It will be appreciated that the length of the delay time td for the delay circuit 120 (FIG. 2) can be selected to suit the DRAM memory core with which the asynchronous interface circuit 100 is used. For example, td can be made at least long enough for an access operation to the memory core to complete, so that an access already under way is not interrupted by a newly initiated access.
The selection of td will in turn determine to some degree the delay time tdd of the delay device 182 (FIG. 3) included in each of the delay stages 140. As previously discussed, the time delay td is approximately the sum of the delays of the delay stages 140; consequently, for a given number of delay stages 140 in the timing chain, tdd is approximately td divided by the number of delay stages 140.
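As a purely illustrative calculation based on that approximation, and using hypothetical values for td and tdd that are not taken from the patent:

```python
# Illustrative only: relate the overall delay td to the per-stage delay tdd,
# assuming td is simply the sum of the stage delays (gate delays ignored).
import math

td_ns = 70.0        # assumed overall delay, e.g. chosen to cover a core access
tdd_ns = 2.5        # assumed delay of the delay device in each stage
n_stages = math.ceil(td_ns / tdd_ns)
print(n_stages, n_stages * tdd_ns)   # -> 28 stages, giving td = 70.0 ns
```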
With respect to selecting a suitable time delay tw for the delay device 208 in the pulse circuit 200 (FIG. 4), tw can be selected so that a pulse generated by the pulse circuit 200 will ensure that each of the delay stages 140 will be reset, even if the input pulse to the pulse circuit 200 is less than tw.
As previously mentioned, it will be appreciated that the polarity of many of the signals can be reversed without departing from the scope of the present invention. Consequently, alternative embodiments of the invention can be implemented through the use of alternative circuitry that accommodates the reversed signal polarity and remains within the scope of the invention. For example, the delay stage 140 (FIG. 3) and the other circuits described herein can be implemented using logic gates of complementary function, such as NAND gates in place of NOR gates, to operate with signals of the opposite polarity.
The row and column addresses are provided by address input buffers (not shown) included in the asynchronous interface circuit 510 for decoding by a row address decoder 524 and a column address decoder 528, respectively. Memory array read/write circuitry 530 is coupled to the array 502 to provide read data to a data output buffer 534 via an input/output data bus 540. Write data are applied to the memory array 502 through a data input buffer 544 and the memory array read/write circuitry 530. The command controller 506 responds to memory commands applied to the command bus 508 to perform various operations on the memory array 502. In particular, the command controller 506 is used to generate internal control signals to read data from and write data to the memory array 502. During one of these access operations, an address provided on the address bus 520 is decoded by the row decoder 524 to access one row of the memory array 502. Likewise, the address provided on the address bus 520 is decoded by the column decoder 528 to access at least one column of the memory array 502. During a read operation, the data stored in the addressed memory cell or cells are transferred to the output buffer 534 and provided on the data output lines. In a write operation, the addressed memory cell is accessed, and data provided on the data input lines and the data input buffer 544 is stored in the cell.
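As a simplified illustration of the decode-and-access path just described, the following sketch splits a flat address into row and column addresses and reads or writes one cell; the class name and address mapping are assumptions for illustration and do not correspond to the decoders 524 and 528 or the read/write circuitry 530.

```python
# Simplified model of the address/data path described above: a row decode and
# a column decode select one cell of the array for a read or a write.

class PseudoStaticArray:
    def __init__(self, rows, cols):
        self.cells = [[0] * cols for _ in range(rows)]
        self.cols = cols

    def _decode(self, address):
        # Split a flat address into a row address and a column address.
        return divmod(address, self.cols)

    def read(self, address):
        row, col = self._decode(address)
        return self.cells[row][col]          # data to the output buffer

    def write(self, address, data):
        row, col = self._decode(address)
        self.cells[row][col] = data          # data from the input buffer

mem = PseudoStaticArray(rows=4, cols=8)
mem.write(11, 1)
print(mem.read(11))                          # -> 1
```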
From the foregoing it will be appreciated that, although specific embodiments of the invention have been described herein for purposes of illustration, various modifications may be made without deviating from the spirit and scope of the invention. Accordingly, the invention is not limited except as by the appended claims.
This application is a continuation of U.S. patent application Ser. No. 10/102,221, filed Mar. 19, 2002, U.S. Pat. No. 6,690,606.
Number | Name | Date | Kind |
---|---|---|---|
4293926 | Amano | Oct 1981 | A |
5258952 | Coker et al. | Nov 1993 | A |
5374894 | Fong | Dec 1994 | A |
5471157 | McClure | Nov 1995 | A |
5566129 | Nakashima et al. | Oct 1996 | A |
5600605 | Schaefer | Feb 1997 | A |
5666321 | Schaefer | Sep 1997 | A |
5802555 | Shigeeda | Sep 1998 | A |
5805517 | Pon | Sep 1998 | A |
5835440 | Manning | Nov 1998 | A |
6058070 | La Rosa | May 2000 | A |
6075751 | Tedrow | Jun 2000 | A |
6166990 | Ooishi et al. | Dec 2000 | A |
6373303 | Akita | Apr 2002 | B1 |
6396758 | Ikeda et al. | May 2002 | B1 |
6507532 | Fujino et al. | Jan 2003 | B1 |
6564285 | Mills et al. | May 2003 | B1 |
6597615 | Mizugaki | Jul 2003 | B1 |
6636449 | Matsuzaki | Oct 2003 | B1 |
6658544 | Gray | Dec 2003 | B1 |
6675256 | Harrand | Jan 2004 | B1 |
6690606 | Lovett et al. | Feb 2004 | B1 |
6701419 | Tomaiuolo et al. | Mar 2004 | B1 |
6714479 | Takahashi et al. | Mar 2004 | B1 |
6920524 | Lovett | Jul 2005 | B1 |
Number | Date | Country |
---|---|---|
411238380 | Aug 1999 | JP |
Number | Date | Country
---|---|---
20040141397 A1 | Jul 2004 | US
| Number | Date | Country
---|---|---|---
Parent | 10102221 | Mar 2002 | US
Child | 10754658 | | US