Claims
- 1. An apparatus for reducing memory access latency in a virtual memory based system, comprising:
  - a processor arranged to perform executable instructions;
  - a memory array formed of DRAM devices, each of which includes a sense amp coupled to a data bus suitably coupled to the processor, arranged to store executable instructions that are performed by the processor;
  - a cache memory coupled to the processor and the main memory, arranged to store a subset of recently used instructions, wherein the cache memory is located temporally closer to the processor than the main memory;
  - a TLB page cache suitably arranged to store a TLB page, wherein under the direction of the processor, a new TLB page corresponding to a new TLB entry is moved temporally closer to the processor by moving from the main memory to the TLB page cache, wherein the TLB page cache is proximally closer to the processor than the main memory, wherein a TLB page cache entry in the TLB page cache points to an associated TLB page, and wherein the TLB page is distributed amongst the sense amps associated with the memory array.
- 2. An apparatus as recited in claim 1, wherein the DRAM is a virtual channel type DRAM, and wherein the sense amp is a virtual channel.
- 3. An apparatus as recited in claim 1, wherein the system further includes a memory controller coupled to the processor and the memory, wherein the memory controller further includes the TLB page cache.
- 4. An apparatus as recited in claim 3, wherein the memory device includes a plurality of cache elements, the granularity of which corresponds in size to the TLB page.
- 5. A method for reducing memory access latency in a virtual memory based system, wherein the virtual memory based system includes:
  - a processor arranged to perform executable instructions;
  - a memory array formed of DRAM devices, each of which includes a sense amp coupled to a data bus suitably coupled to the processor, arranged to store executable instructions that are performed by the processor;
  - a cache memory coupled to the processor and the main memory, arranged to store a subset of recently used instructions, wherein the cache memory is located temporally closer to the processor than the main memory; and
  - a TLB page cache suitably arranged to store a TLB page, wherein the TLB page cache is proximally closer to the processor than the main memory, wherein a TLB page cache entry in the TLB page cache points to an associated TLB page, and wherein the TLB page is distributed amongst the sense amps associated with the memory array;

  the method comprising: under the direction of the processor, moving a new TLB page corresponding to a new TLB entry from the main memory to the TLB page cache such that the TLB entry is temporally closer to the processor.
- 6. A method as recited in claim 5, wherein the DRAM is a virtual channel type DRAM, and wherein the sense amp is a virtual channel.
- 7. A method as recited in claim 5, wherein the system further includes a memory controller coupled to the processor and the memory, wherein the memory controller further includes the TLB page cache.
- 8. A method as recited in claim 7, wherein the memory device includes a plurality of cache elements, the granularity of which corresponds in size to the TLB page.
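The claimed mechanism can be summarized as follows: page-table ("TLB") pages normally reside in main memory, and on a TLB miss the relevant page is moved into a small cache that sits temporally closer to the processor, so subsequent translations for that page avoid a full main-memory access. As an illustrative software model only (the patent describes hardware structures such as sense amps and a memory controller cache; all class and method names below, such as `TLBPageCache` and `lookup`, are invented for this sketch and do not appear in the patent), the miss-driven page movement might be modeled as:

```python
# Toy model of the TLB page-cache behavior described in claims 1 and 5.
# Purely illustrative: the claims specify hardware, not software, and do not
# specify an eviction policy; FIFO is used here only to keep the sketch small.

class MainMemory:
    """Backing store holding TLB (page-table) pages, keyed by page number."""
    def __init__(self, pages):
        self.pages = dict(pages)
        self.accesses = 0  # count of slow main-memory accesses (models latency)

    def read_page(self, page_no):
        self.accesses += 1
        return self.pages[page_no]


class TLBPageCache:
    """Small cache near the processor; each entry points to a cached TLB page."""
    def __init__(self, memory, capacity=4):
        self.memory = memory
        self.capacity = capacity
        self.entries = {}  # page_no -> page data (entry "points to" its page)

    def lookup(self, page_no):
        # Hit: the TLB page is already temporally close to the processor.
        if page_no in self.entries:
            return self.entries[page_no]
        # Miss: move the page from main memory into the page cache,
        # evicting the oldest entry if the cache is full (FIFO, assumed).
        if len(self.entries) >= self.capacity:
            self.entries.pop(next(iter(self.entries)))
        page = self.memory.read_page(page_no)
        self.entries[page_no] = page
        return page


mem = MainMemory({0: "pte-block-0", 1: "pte-block-1"})
cache = TLBPageCache(mem, capacity=2)
cache.lookup(0)          # miss: one slow main-memory access
cache.lookup(0)          # hit: served from the page cache, no memory access
print(mem.accesses)      # → 1
```

The point of the model is the latency asymmetry: repeated lookups of the same TLB page cost one main-memory access in total, which is the benefit the claims attribute to placing the page cache proximally closer to the processor.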
CROSS-REFERENCE TO A RELATED APPLICATION
This application claims priority under 35 U.S.C. §119(e) to U.S. Provisional Patent Application Ser. No. 60/117,886, filed Jan. 28, 1999, naming Henry Stracovsky as inventor and assigned to the assignee of the present application, which is incorporated herein by reference for all purposes.
US Referenced Citations (3)
| Number | Name | Date | Kind |
|---|---|---|---|
| 5630097 | Orbits et al. | May 1997 | A |
| 5933593 | Arun et al. | Aug 1999 | A |
| 5996055 | Woodman | Nov 1999 | A |
Non-Patent Literature Citations (1)
| Entry |
|---|
| Compaq Computer Corporation, "Alpha 21264 Microprocessor Hardware Reference Manual", Jul. 1999, pp. 2-11 & 2-12. |
Provisional Applications (1)
| Number | Date | Country |
|---|---|---|
| 60/117886 | Jan 1999 | US |