The field of invention pertains generally to the computing sciences and, more specifically, to a method and apparatus for multi-level memory early page demotion.
A pertinent issue in many computer systems is the system memory (also referred to as “main memory”). Here, as is understood in the art, a computing system operates by executing program code stored in system memory and reading/writing data that the program code operates on from/to system memory. As such, system memory is heavily utilized with many program code and data reads as well as many data writes over the course of the computing system's operation. Finding ways to improve system memory access performance is therefore a motivation of computing system engineers.
A better understanding of the present invention can be obtained from the following detailed description in conjunction with the following drawings, in which:
According to various embodiments, near memory 113 has lower access times than the lower tiered far memory 114. For example, the near memory 113 may exhibit reduced access times by having a faster clock speed than the far memory 114. Here, the near memory 113 may be a faster (e.g., lower access time), volatile system memory technology (e.g., high performance dynamic random access memory (DRAM) and/or SRAM memory cells) co-located with the memory controller 116. By contrast, far memory 114 may be either a volatile memory technology implemented with a slower clock speed (e.g., a DRAM component that receives a slower clock) or, e.g., a non volatile memory technology that is slower (e.g., longer access time) than volatile/DRAM memory or whatever technology is used for near memory.
For example, far memory 114 may be comprised of an emerging non volatile random access memory technology such as, to name a few possibilities, a phase change based memory, a three dimensional crosspoint memory, “write-in-place” non volatile main memory devices, memory devices having storage cells composed of chalcogenide, multiple level flash memory, multi-threshold level flash memory, a ferro-electric based memory (e.g., FRAM), a magnetic based memory (e.g., MRAM), a spin transfer torque based memory (e.g., STT-RAM), a resistor based memory (e.g., ReRAM), a Memristor based memory, universal memory, Ge2Sb2Te5 memory, programmable metallization cell memory, amorphous cell memory, Ovshinsky memory, etc. Any of these technologies may be byte addressable so as to be implemented as a system memory in a computing system (also referred to as a “main memory”) rather than traditional block or sector based non volatile mass storage.
Emerging non volatile random access memory technologies typically have some combination of the following: 1) higher storage densities than DRAM (e.g., by being constructed in three-dimensional (3D) circuit structures (e.g., a crosspoint 3D circuit structure)); 2) lower power consumption densities than DRAM (e.g., because they do not need refreshing); and/or, 3) access latency that is slower than DRAM yet still faster than traditional non-volatile memory technologies such as FLASH. The latter characteristic in particular permits various emerging non volatile memory technologies to be used in a main system memory role rather than a traditional mass storage role (which is the traditional architectural location of non volatile storage).
Regardless of whether far memory 114 is composed of a volatile or non volatile memory technology, in various embodiments far memory 114 acts as a true system memory in that it supports finer grained data accesses (e.g., cache lines) rather than only the larger “block” or “sector” based accesses associated with traditional, non volatile mass storage (e.g., solid state drive (SSD), hard disk drive (HDD)), and/or, otherwise acts as a byte addressable memory that the program code being executed by the processor(s) of the CPU operates out of.
In various embodiments, system memory may be implemented with dual in-line memory module (DIMM) cards where a single DIMM card has both volatile (e.g., DRAM) and (e.g., emerging) non volatile memory semiconductor chips disposed on it. In other configurations DIMM cards having only DRAM chips may be plugged into a same system memory channel (e.g., a double data rate (DDR) channel) with DIMM cards having only non volatile system memory chips.
In another possible configuration, a memory device such as a DRAM device functioning as near memory 113 may be assembled together with the memory controller 116 and processing cores 117 onto a single semiconductor device (e.g., as embedded DRAM) or within a same semiconductor package (e.g., stacked on a system-on-chip that contains, e.g., the CPU, memory controller, peripheral control hub, etc.). Far memory 114 may be formed by other devices, such as slower DRAM or non-volatile memory, and may be attached to, or integrated in, the same package as well. Alternatively, far memory may be external to a package that contains the CPU cores and near memory devices. A far memory controller may also exist between the main memory controller and far memory devices. The far memory controller may be integrated within a same semiconductor chip package as CPU cores and a main memory controller, or, may be located outside such a package (e.g., by being integrated on a DIMM card having far memory devices).
In various embodiments, at least some portion of near memory 113 has its own system address space apart from the system addresses that have been assigned to far memory 114 locations. In this case, the portion of near memory 113 that has been allocated its own system memory address space acts, e.g., as a higher priority level of system memory (because it is faster than far memory). In further embodiments, some other portion of near memory 113 may also act as a memory side cache that caches the most frequently accessed items from main memory (and which may service more than just the CPU core(s), e.g., a GPU, peripheral, network interface, etc.) or as a last level CPU cache (which only services CPU core(s)).
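For a concrete (if simplified) picture of such partitioning, the sketch below models a controller-side check of which tier owns a system address. It is a minimal sketch; the base addresses and sizes are invented for the example and are not taken from the description above.

```c
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical system address map: a portion of near memory is given
 * its own system address range alongside far memory's range. */
#define NEAR_BASE 0x000000000ull
#define NEAR_SIZE 0x040000000ull   /* e.g., 1 GB of near memory  */
#define FAR_BASE  (NEAR_BASE + NEAR_SIZE)
#define FAR_SIZE  0x400000000ull   /* e.g., 16 GB of far memory  */

/* Decide which tier a system address belongs to. */
static bool addr_is_near(uint64_t sys_addr)
{
    return sys_addr >= NEAR_BASE && sys_addr < NEAR_BASE + NEAR_SIZE;
}
```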
Here, as is known in the art, in a traditional computer system, the program code and/or data of a software program is kept on one or more “pages” of information. When a software program is to be executed by the computing system's CPU core(s), one or more of the software program's pages are called up from non-volatile mass storage (e.g., a disk-drive) by the system software and written into system memory. The CPU core(s) then issue memory read requests for program code and memory read and write requests for data that are on the pages in order to physically execute the software program out of system memory.
In the case of the 2LM system of FIG. 2, pages may reside in either near memory 213 or far memory 214. Ideally, the more frequently accessed pages will be placed in near memory 213 instead of the far memory 214 because of the near memory's faster access times. As such, in various embodiments, the system software will place frequently used and/or currently active pages in near memory 213 as best as is practicable.
Accordingly, when pages in far memory 214 are identified by the system software as being frequently used (e.g., as measured by number of accesses over a time window) or currently active (e.g., as measured from the addresses of current read/write requests), the system software may desire to move such pages from the far memory 214 to the near memory 213. So doing, however, particularly when near memory 213 is full and does not have any free space to accept a new page without demoting another page from near memory 213 to far memory 214, can create system bottlenecks.
Here, the above described process entails: 1) shooting down translation look-aside buffer (TLB) entries in the CPUs to reflect the new physical address of the demoted page from a near memory physical address to a far memory physical address (explained in more detail further below); 2) moving a large amount of information from near memory to far memory (i.e., the demoted page's information, which may be many kilobytes (KBs) or possibly even megabytes (MBs)); 3) shooting down TLB entries in the CPUs to reflect the new physical address of the promoted page from a far memory physical address to a near memory physical address; and 4) moving a large amount of information from far memory to near memory (i.e., the promoted page's information, which may be many kilobytes (KBs) or possibly even megabytes (MBs)).
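Purely as an illustration of the cost of this four-step sequence, the C sketch below maps each step to code. The shootdown and mapping helpers are hypothetical stand-ins for CPU/OS-specific mechanisms (e.g., inter-processor interrupts), and the 4 KB page size is just an example; as noted above, demoted pages may be far larger.

```c
#include <string.h>
#include <stdint.h>

#define PAGE_SIZE 4096u   /* example; demoted pages may be KBs or MBs */

/* Hypothetical helpers: stall CPUs and invalidate/update a TLB entry. */
static void tlb_shootdown(uintptr_t vaddr) { (void)vaddr; }
static void tlb_map(uintptr_t vaddr, void *phys) { (void)vaddr; (void)phys; }

/* Naive, reactive swap of a near-memory page with a far-memory page. */
static void swap_pages(uintptr_t demote_vaddr, void *near_page,
                       uintptr_t promote_vaddr, void *far_page)
{
    static unsigned char scratch[PAGE_SIZE];

    tlb_shootdown(demote_vaddr);              /* step 1: stall + invalidate */
    memcpy(scratch, far_page, PAGE_SIZE);     /* hold promoted page's data  */
    memcpy(far_page, near_page, PAGE_SIZE);   /* step 2: near -> far (big)  */
    tlb_shootdown(promote_vaddr);             /* step 3: stall + invalidate */
    memcpy(near_page, scratch, PAGE_SIZE);    /* step 4: far -> near (big)  */

    tlb_map(demote_vaddr, far_page);          /* record new physical homes  */
    tlb_map(promote_vaddr, near_page);
}
```

The two large copies and two stalls per swap are exactly the overheads the fine-grained approach incurs on every promotion decision.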
Here, in a situation where a page 215 has to be demoted 217 from near memory 213 in order to make room in the near memory 213 for a page that is to be promoted to near memory 213 from far memory 214, the system software has to intercede to perform the page movement, which not only entails the issuance of the appropriate read/write requests to physically swap the pair of pages' worth of information between the two memories but also requires temporarily stalling the CPUs to update their respective TLBs.
As is known in the art, a CPU instruction execution pipeline typically includes a memory management unit that keeps a TLB. The TLB is essentially a table that records which virtual addresses map to which actual, physical addresses in system memory 212. Here, software program code is typically written to refer to memory as if the memory holds little/no other software. As such, for example, many software programs are written to initially refer to a base memory address of 0000 and then incrementally use higher addresses as needed. More than one software program written in such a manner could not operate out of the same memory (their “virtual” addresses would collide).
Therefore, a TLB is used to translate each virtual address specified by a particular software program/thread into the physical memory address where the information referred to by the virtual address actually resides in system memory 212. By mapping numerically identical virtual addresses of different software programs to different system memory physical addresses, two different software programs with substantially overlapping virtual address spaces can easily operate out of two different physical address regions of the same system memory 212.
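A toy model of this translation is shown below, assuming a flat array of entries tagged by an address-space identifier; real TLBs are hardware caches of page-table entries, and the addresses here are invented. It demonstrates how two programs using the same virtual address resolve to different physical addresses.

```c
#include <stdint.h>
#include <stdio.h>
#include <stddef.h>

#define PAGE_SHIFT 12  /* 4 KB pages, for illustration */

struct tlb_entry { uint64_t vpage, ppage; int asid; };  /* asid: which program */

/* Two programs both use virtual page 0; each maps to a different frame. */
static struct tlb_entry tlb[] = {
    { .vpage = 0x0, .ppage = 0x40, .asid = 1 },   /* program 1 -> 0x40000 */
    { .vpage = 0x0, .ppage = 0x80, .asid = 2 },   /* program 2 -> 0x80000 */
};

static uint64_t translate(uint64_t vaddr, int asid)
{
    uint64_t vpage = vaddr >> PAGE_SHIFT;
    uint64_t off   = vaddr & ((1ull << PAGE_SHIFT) - 1);
    for (size_t i = 0; i < sizeof tlb / sizeof tlb[0]; i++)
        if (tlb[i].vpage == vpage && tlb[i].asid == asid)
            return (tlb[i].ppage << PAGE_SHIFT) | off;
    return (uint64_t)-1;   /* TLB miss: page-table walk not modeled */
}

int main(void)
{
    /* Identical virtual address, different programs, different frames. */
    printf("prog1 0x10 -> 0x%llx\n", (unsigned long long)translate(0x10, 1));
    printf("prog2 0x10 -> 0x%llx\n", (unsigned long long)translate(0x10, 2));
    return 0;
}
```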
When a page of information for a particular software program is moved from one memory location to another (e.g., from near memory 213 to far memory 214 or vice-versa), the TLB entries maintained by the CPU cores for the page must be updated to reflect the new physical location of the page within the memory 212. Updating a TLB can negatively impact CPU instruction execution pipeline performance because, e.g., the execution of memory read/write instructions is stalled waiting for the TLB to be updated.
Here, negative performance implications may result if page swapping to effect promotion of a page from far memory 214 to near memory 213 is performed in an instantaneous, impromptu and/or reactionary per-page fashion (“fine grained”). That is, suppose the system software frequently decides that certain far memory pages should be promoted to near memory 213 while near memory 213 is full, and the system software is designed merely to react to each such decision by immediately directing a corresponding page swap and the associated TLB shootdowns. In that case, the overall performance of the computing system may noticeably degrade: the system memory 212 will more regularly be unavailable while large amounts of data are moved between near memory 213 and far memory 214 to effect the page swaps, and the CPU instruction execution pipelines will more regularly be stalled waiting for the TLB shootdowns to be fully processed.
In other embodiments at least some portion of these circuits are implemented off of such a chip. For instance, in the case where the far memory 314 is implemented with emerging non volatile memory chips, a far memory controller may be locally coupled to such memory chips off the main system-on-chip die (e.g., on one or more DIMMs having the emerging non volatile memory chips). Here, the far memory controller(s) may include the far memory state tracking circuitry 322 or some portion thereof. Alternatively or in combination, the near memory state tracking circuitry 321 or some portion thereof may be disposed outside such a chip (e.g., on one or more DIMMs having the volatile (e.g., DRAM) memory chips, where such DIMM(s) may even include emerging non volatile memory chips and even the far memory state tracking circuitry 322 or some portion thereof). In various possible packaging scenarios, even if such circuits are located off a system-on-chip as described above they may nevertheless exist within the same package as the system-on-chip (e.g., such as in a same semiconductor chip package where memory chips and associated external logic from the system-on-chip are integrated in a stacked chip solution).
The early demotion logic circuitry 320 seeks to start the eviction process sooner by physically demoting pages that are “next in line to be evicted” from near memory 313 but have not, as of yet, been formally evicted from near memory 313 by system software because system software has not yet identified a next page (or pages) to be promoted from far memory 314 to near memory 313.
That is, the early demotion logic 320 physically demotes pages from near memory 313 that are expected to be demoted from near memory 313 in the near future but have not actually been demoted by system software yet. By actually demoting the pages from near memory 313 before the system software explicitly commands their demotion, the underlying hardware is able to implement the demotion more opportunistically. That is, for instance, if the early demotion logic circuitry 320 is able to begin the process of demoting 317 a page 315, e.g., a few seconds before the system software actually decides to demote the page 315, the memory controller 316 has a few seconds to write the page 315 into a far memory location when the far memory resources used to access that location are idling. By writing a demoted page 315 opportunistically, the aforementioned system bottleneck induced by instantaneous, impromptu and/or reactionary per page data movement decisions can be noticeably lessened.
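The opportunistic behavior might be modeled roughly as follows; the helper functions are hypothetical hooks into the memory controller's state (stubbed here), not an actual interface described above.

```c
#include <stdbool.h>
#include <stddef.h>

struct page { unsigned long near_addr; };

/* Hypothetical hooks into memory controller state (stubbed). */
static bool far_memory_idle(void) { return true; }
static struct page *next_in_line_for_eviction(void) { return NULL; }
static void copy_page_near_to_far(struct page *p) { (void)p; }

/* Invoked opportunistically (e.g., each controller scheduling tick):
 * demote the next-in-line page while the far memory resources that
 * will receive it are idling, instead of waiting for system software
 * to explicitly command the demotion. */
void early_demotion_tick(void)
{
    struct page *victim = next_in_line_for_eviction();
    if (victim != NULL && far_memory_idle())
        copy_page_near_to_far(victim);
}
```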
Moreover, as described in more detail below, the early write remapping buffer 323 allows the hardware to operate correctly in advance of any TLB updates. That is, a page 315 can be demoted to far memory 314 and be successfully accessed afterward before its corresponding TLB entry is updated by system software. As a consequence, proper hardware operation is not tightly coupled to correct TLB state. The system software can therefore update the CPU TLBs downstream “in batches” in which multiple TLB entries are updated for multiple demoted pages during a single TLB update cycle. Thus, for any imposed stall of an instruction execution pipeline to update a TLB, multiple entries are concurrently updated within the TLB rather than imposing a separate stall for each entry needing an update. In this respect, the page demotion process of the improved system of FIG. 3 imposes fewer and less frequent pipeline stalls than the reactive, per-page swapping process described above with respect to FIG. 2.
As observed in FIG. 4a, the memory controller 416 includes early demotion logic circuitry 420 having near memory state tracking circuitry 421, far memory state tracking circuitry 422 and an early demotion buffer 423. The near memory state tracking circuitry 421 identifies which pages are next in line for eviction from near memory 413.
The near memory state tracking circuitry 421 also tracks how many pages are in near memory 413 and/or how many free pages exist in near memory 413. Each free page corresponds to unused memory capacity in near memory 413 that could be used to accept a page that has been promoted to near memory 413 from far memory 414. In various embodiments, if the number of pages in near memory 413 exceeds some threshold and/or if the number of free pages in near memory 413 falls below some threshold, then early demotion activity is triggered. As of the near memory state observed in FIG. 4a, such early demotion activity has been triggered.
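In code form, such a trigger condition might look like the sketch below; the threshold values are illustrative parameters only, not values taken from the description.

```c
#include <stdbool.h>

/* Occupancy state tracked for near memory (counts only). */
struct near_mem_state {
    unsigned pages_resident;  /* pages currently held in near memory */
    unsigned pages_free;      /* free page slots in near memory      */
};

/* Illustrative thresholds; real values would be design parameters. */
#define MAX_RESIDENT_THRESH 4000u
#define MIN_FREE_THRESH       16u

static bool should_trigger_early_demotion(const struct near_mem_state *s)
{
    return s->pages_resident > MAX_RESIDENT_THRESH
        || s->pages_free < MIN_FREE_THRESH;
}
```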
Referring to FIG. 4b, the far memory state tracking circuitry 422 identifies free pages in far memory 414 that are available to accept demoted pages and lists them in the early demotion buffer 423.
As soon as a free page listed in the buffer 423 is also recognized as currently having idle far memory resources, the memory controller 416 updates its corresponding entry 434 in the buffer: 1) to include the near memory address (“Addr_A”) where the page 433 being demoted was stored in near memory; and 2) to set both its occupied bit (“Occ”) and its write protection bit (“WP”) to 1. The setting of the occupied bit means the entry is valid and should not be removed or written over. The setting of the write protection bit means that the page 433 being demoted should not be written to because it is currently in transit from the near memory 413 to the far memory 414. Here, write requests received by the memory controller 416 are queued (not serviced) so long as the write protection bit is set.
The memory controller 416 also begins the process of reading the page 433 from near memory 413 and writing it into far memory 414. Depending on the state of the page's migration from near memory 413 to far memory 414, any read requests received by the memory controller 416 during the migration may be serviced from near memory 413, internally from the memory controller 416 if it has possession of the targeted chunk of the page, or from far memory 414. In various embodiments, read accesses that target the migrating page 433 and that are received by the memory controller 416 after the first write request targeting the migrating page 433 are also queued, or, serviced if they do not conflict with (have the same or overlapping target address space as) any queued write request that targets the migrating page 433.
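A rough model of a buffer entry and the write-admission rule just described is given below; the field layout and names are inferred from the text, with “Occ” and “WP” standing for the occupied and write protection bits.

```c
#include <stdbool.h>
#include <stdint.h>

/* Model of one early-demotion buffer entry (layout illustrative). */
struct demotion_entry {
    uintptr_t near_addr;  /* e.g., "Addr_A": where the page sat in near memory */
    uintptr_t far_addr;   /* the free far-memory page receiving the data       */
    bool occ;             /* 1: entry valid, must not be overwritten           */
    bool wp;              /* 1: page in transit, writes to it must be queued   */
};

/* Returns true if a write to 'target' may be serviced now; false means
 * the memory controller must queue it until the migration completes. */
static bool write_admissible(const struct demotion_entry *e, uintptr_t target)
{
    return !(e->occ && e->wp && target == e->near_addr);
}
```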
Referring to FIG. 4c, when the migration of the page 433 to far memory 414 completes, the write protection bit of its entry 434 is cleared and any write requests that were queued against the page are serviced from far memory 414. The page's former location in near memory 413 becomes free space 437 that is able to accept a page promoted from far memory 414.
Referring to FIG. 4d, the system software subsequently identifies a page 435 in far memory 414 that is to be promoted to near memory 413, and the page 435 is written into the free space 437.
In an embodiment, the early demotion circuit 420 makes available to the system software (e.g., by way of register space 436 that is readable by the system software) the addresses of the pages that the near memory state tracking circuitry 421 has identified as being next-in-line for eviction. As such, assuming the system software had read these addresses before it knew that page 435 was to be promoted to near memory 413, the system software would have known the address of free space 437 because it had read it from register 436 beforehand. As such, in various embodiments, when the system software/CPU issues a command to promote a page from far memory 414 to near memory 413, it also identifies, as the destination for the page, the address of the free space in near memory 413 which it had previously read from the memory controller 416. The system software may also update the TLB to reflect the new near memory address of the promoted page 435.
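That software/hardware interaction might look roughly like the following sketch; the register-read and promotion-command functions are hypothetical names for the interfaces described above, stubbed so the sketch is self-contained.

```c
#include <stdint.h>

/* Hypothetical software-visible interface (register space 436 and a
 * promotion command); names and signatures are invented for the sketch. */
static uint64_t mc_read_next_eviction_addr(void) { return 0; }  /* reg 436 */
static void mc_promote_page(uint64_t far_src, uint64_t near_dest)
{ (void)far_src; (void)near_dest; }
static void tlb_update(uint64_t vaddr, uint64_t new_phys)
{ (void)vaddr; (void)new_phys; }

/* System software promotes a far-memory page, naming as the destination
 * the near-memory free space it read from the controller beforehand. */
static void promote(uint64_t vaddr, uint64_t far_src)
{
    uint64_t dest = mc_read_next_eviction_addr();  /* read earlier in practice */
    mc_promote_page(far_src, dest);    /* command names the free space */
    tlb_update(vaddr, dest);           /* reflect the page's new address */
}
```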
The system software then reads the contents of the early demotion buffer 423 having occupied bits set equal to 1. Here, each page demotion to have occurred from the system state of FIG. 4a has a corresponding entry in the buffer 423, and the system software uses the entries' address information to update the TLB entries of the demoted pages.
In an embodiment, the system software reads the contents of the buffer 423 in response to the buffer 423 having a threshold number of entries with the occupied bit set to 1 and the write protection bit set to 0 (which means a threshold number of entries are waiting for their respective TLB entries to be updated). By reading buffer entries after such a threshold has been passed, batch TLB update processing naturally follows. Referring to FIG. 4e, once the TLB entries for the demoted pages have been updated, the corresponding buffer entries are cleared (their occupied bits are set to 0) so that they can be reused for subsequent demotions.
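The batch update flow might be sketched as below; the stall-bracketing primitives, buffer size and threshold value are placeholders rather than an actual CPU interface.

```c
#include <stdbool.h>
#include <stdint.h>
#include <stddef.h>

struct demotion_entry { uintptr_t near_addr, far_addr; bool occ, wp; };

#define BUF_ENTRIES  32u
#define BATCH_THRESH  8u   /* hypothetical trigger threshold */

static struct demotion_entry buf[BUF_ENTRIES];

/* Hypothetical primitives: one pipeline stall brackets the whole batch. */
static void tlb_batch_begin(void) { }
static void tlb_retarget(uintptr_t from, uintptr_t to) { (void)from; (void)to; }
static void tlb_batch_end(void) { }

void maybe_batch_tlb_update(void)
{
    size_t pending = 0;
    for (size_t i = 0; i < BUF_ENTRIES; i++)
        if (buf[i].occ && !buf[i].wp)     /* demoted, awaiting TLB update */
            pending++;
    if (pending < BATCH_THRESH)
        return;                           /* not yet worth a stall */

    tlb_batch_begin();                    /* single stall for all entries */
    for (size_t i = 0; i < BUF_ENTRIES; i++)
        if (buf[i].occ && !buf[i].wp) {
            tlb_retarget(buf[i].near_addr, buf[i].far_addr);
            buf[i].occ = false;           /* entry may now be reused */
        }
    tlb_batch_end();
}
```

Amortizing one stall over many entries, rather than stalling once per demoted page, is the efficiency the batching scheme is after.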
Note that the early demotion logic circuitry 320, 420 described above and any/all of its constituent components (near memory state tracking circuitry, far memory state tracking circuitry and early demotion buffer) may be implemented with logic circuitry that is designed to execute some form of program code (e.g., an embedded processor, embedded controller, etc.), dedicated hardwired logic circuitry (e.g., application specific integrated circuit (ASIC) circuitry), programmable logic circuitry (e.g., field programmable gate array (FPGA) circuitry, programmable logic device (PLD) circuitry) or any combination of logic circuitry that executes program code, dedicated hardwired logic circuitry and/or programmable logic circuitry.
An applications processor or multi-core processor 650 may include one or more general purpose processing cores 615 within its CPU 601, one or more graphical processing units 616, a memory management function 617 (e.g., a memory controller) and an I/O control function 618. The general purpose processing cores 615 typically execute the system and application software of the computing system. The graphics processing unit 616 typically executes graphics intensive functions to, e.g., generate graphics information that is presented on the display 603.
The memory control function 617 interfaces with the system memory 602 to write/read data to/from system memory 602. The system memory may be implemented as a multi-level system memory. The memory controller may include special hardware that demotes pages from a higher level to a lower level of system memory ahead of a page promotion from the lower to the higher level when the higher level is keeping a high concentration of pages as described in detail above.
Each of the touchscreen display 603, the communication interfaces 604-607, the GPS interface 608, the sensors 609, the camera(s) 610, and the speaker/microphone codec 613, 614 all can be viewed as various forms of I/O (input and/or output) relative to the overall computing system including, where appropriate, an integrated peripheral device as well (e.g., the one or more cameras 610). Depending on implementation, various ones of these I/O components may be integrated on the applications processor/multi-core processor 650 or may be located off the die or outside the package of the applications processor/multi-core processor 650. The power management control unit 612 generally controls the power consumption of the system 600.
Embodiments of the invention may include various processes as set forth above. The processes may be embodied in machine-executable instructions. The instructions can be used to cause a general-purpose or special-purpose processor to perform certain processes. Alternatively, these processes may be performed by specific/custom hardware components that contain hardwired logic circuitry or programmable logic circuitry (e.g., FPGA, PLD) for performing the processes, or by any combination of programmed computer components and custom hardware components.
Elements of the present invention may also be provided as a machine-readable medium for storing the machine-executable instructions. The machine-readable medium may include, but is not limited to, floppy diskettes, optical disks, CD-ROMs, and magneto-optical disks, FLASH memory, ROMs, RAMs, EPROMs, EEPROMs, magnetic or optical cards, propagation media or other type of media/machine-readable medium suitable for storing electronic instructions. For example, the present invention may be downloaded as a computer program which may be transferred from a remote computer (e.g., a server) to a requesting computer (e.g., a client) by way of data signals embodied in a carrier wave or other propagation medium via a communication link (e.g., a modem or network connection).
In the foregoing specification, the invention has been described with reference to specific exemplary embodiments thereof. It will, however, be evident that various modifications and changes may be made thereto without departing from the broader spirit and scope of the invention as set forth in the appended claims. The specification and drawings are, accordingly, to be regarded in an illustrative rather than a restrictive sense.