Any device that stores data or instructions needs memory, and there are two broad types of memory: volatile memory and non-volatile memory. Volatile memory loses its stored data when power is lost or is not periodically refreshed. Non-volatile memory, however, retains information without a continuous or periodic power supply.
Random access memory (“RAM”) is one type of volatile memory. As long as the addresses of the desired cells of RAM are known, RAM may be accessed in any order. Dynamic random access memory (“DRAM”) is one type of RAM. In DRAM, each memory bit is stored on a capacitor, and the capacitor may be periodically refreshed to maintain its charge. Because the DRAM circuit is small and inexpensive, it may be used as memory for computer systems.
Flash memory is one type of non-volatile memory, and flash memory may be accessed in pages. For example, a page of flash memory may be erased in one operation or one “flash.” Accesses to flash memory are relatively slow compared with accesses to DRAM. As such, flash memory may be used as long term or persistent storage for computer systems.
For a detailed description of various examples, reference will now be made to the accompanying drawings.
By prearranging, in volatile memory, data to be committed to non-volatile memory such as flash memory, time and space can be used efficiently. Specifically, by combining many small write requests into a relatively few large write operations, the speed, performance, and throughput of non-volatile memory may be improved. Placing metadata in a predictable location on each page of flash memory also improves speed, performance, and throughput of non-volatile memory. The gains in efficiency greatly outweigh any time and space used to prearrange the data.
The hybrid memory module 104 may be coupled to a memory controller 110, which may comprise circuit logic to manage data flow by scheduling reads from and writes to memory. In at least one example, the memory controller 110 may be integrated with the processor 102 or the hybrid memory module 104. As such, the memory controller 110 or the processor 102 may prearrange data in volatile memory 106 and commit the prearranged data to non-volatile memory 108.
In at least one example, half of the total memory in the hybrid memory module 104 may be implemented as volatile memory 106 and half may be implemented as non-volatile memory 108. In various other examples, the ratio of volatile memory 106 to non-volatile memory 108 may be other than equal amounts.
In volatile memory 106 such as DRAM, each byte may be individually addressed, and data may be accessed in any order. However, in non-volatile memory 108, data is accessed in pages. That is, in order to read a byte of data, the page of data in which the byte is located should be loaded. Similarly, in order to write a byte of data, the page of data in which the byte should be written should be loaded. As such, it is economical to write a page of non-volatile memory 108 together in one write operation. Specifically, the number of accesses to the page may be reduced resulting in time saved and reduced input/output wear of the non-volatile memory 108. Furthermore, in at least one example, a program or operating system may only be compatible with volatile memory and may therefore attempt to address individual bytes in the non-volatile memory. In such a scenario, the prearranging of data may help the non-volatile memory 108 be compatible with such programs or operating systems by allowing for the illusion of byte-addressability of non-volatile memory 108.
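As an illustration of the difference in access granularity described above, the following Python sketch contrasts a byte-addressable read with a read that must load the entire containing page. The backing bytearrays, the 64-kilobyte page size, and the function names are assumptions made for illustration, not elements of the hybrid memory module 104.

```python
# Illustrative sketch only: the backing stores, PAGE_SIZE, and function names
# are assumptions, not elements of the hybrid memory module 104.
PAGE_SIZE = 64 * 1024                   # assumed 64-kilobyte page

volatile = bytearray(1024 * 1024)       # byte-addressable: any offset, any order
paged_store = bytearray(4 * PAGE_SIZE)  # page-granular: whole pages are loaded

def read_byte_volatile(addr: int) -> int:
    # One access at the requested address, RAM-style.
    return volatile[addr]

def read_byte_paged(addr: int) -> int:
    # The entire page containing `addr` is loaded before the byte is extracted.
    # This is why grouping a page's worth of writes into one operation reduces
    # the number of page accesses and the associated input/output wear.
    page_index = addr // PAGE_SIZE
    page = bytes(paged_store[page_index * PAGE_SIZE:(page_index + 1) * PAGE_SIZE])
    return page[addr % PAGE_SIZE]

print(read_byte_volatile(10), read_byte_paged(10))
```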
The volatile memory 106 may act as a staging area for the non-volatile memory 108. That is, data may be prearranged, or ordered, in the volatile memory 106 before being stored in the non-volatile memory 108 in the same arrangement or order. In at least one example, the data prearranged in the volatile memory 106 comprises write data and metadata. The write data may comprise data associated with write requests. The metadata may comprise an address mapping of the write data. For example, the address mapping may comprise a logical address to physical address mapping. When the data is requested, it may be requested by logical address. The metadata may be consulted to determine the physical address associated with the logical address in the request, and the requested data may be retrieved from the physical address. The metadata may be stored contiguously, i.e., in a sequential set of addresses, and the write data may be stored contiguously as well (in a separate set of sequential addresses). In at least one example, the size of these contiguous blocks of data may be based on a page size of the non-volatile memory 108. For example, a page size of non-volatile memory 108 may be 64 kilobytes. As such, metadata and write data may be accumulated in volatile memory 106 in their respective contiguous blocks until the threshold of 64 kilobytes of combined data is reached. Because metadata may be smaller than write data, 4 kilobytes of the 64 kilobytes may comprise metadata while 60 kilobytes of the 64 kilobytes may comprise write data. In various examples, other ratios may occur.
In another example, the page size of the non-volatile memory 108 may be 128 kilobytes. As such, metadata and write data may be accumulated in volatile memory 106 until the threshold of 128 kilobytes of combined data is reached. Because metadata may be smaller than write data, 8 kilobytes of the 128 kilobytes may comprise metadata while 120 kilobytes of the 128 kilobytes may comprise write data. In various examples, other ratios may occur.
In at least one example, the metadata block is stored before (at lower numbered addresses) the write data block in volatile memory 106. As such, when the combined data is committed to non-volatile memory 108, metadata will appear at the beginning (at lower numbered addresses) of each page of the non-volatile memory 108. In another example, the metadata is placed after the write data. As such, the metadata will appear at the end of each page of non-volatile memory 108.
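A minimal sketch of the logical-to-physical mapping carried by the metadata may clarify how a read by logical address is resolved. The dictionary form and the function names below are illustrative assumptions; the description above only requires that the metadata map a logical address in a request to the physical address from which the write data is retrieved.

```python
# Hypothetical, simplified mapping structures; stand-ins for the metadata and
# the committed write data described above.
metadata = {}          # logical address -> physical address
physical_store = {}    # physical address -> data

def record_write(logical_addr: int, physical_addr: int, data: bytes) -> None:
    # Each staged write contributes one mapping entry and one block of write data.
    metadata[logical_addr] = physical_addr
    physical_store[physical_addr] = data

def read(logical_addr: int) -> bytes:
    # The metadata is consulted to find where the requested data physically resides.
    physical_addr = metadata[logical_addr]
    return physical_store[physical_addr]

record_write(logical_addr=0x1000, physical_addr=0x7F0000, data=b"example")
print(read(0x1000))  # b'example'
```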
Once the threshold amount of data has been accumulated and prearranged in volatile memory 106, the data may be committed to non-volatile memory 108 as prearranged. The data may be committed in a single write operation. In at least one example, the threshold is a variable. That is, the amount of data accumulated that triggers storage to non-volatile memory is not constant. Rather, it changes based on whether further data would cause the size of the prearranged data to exceed a page size of the non-volatile memory. For example, the write requests may be prearranged in the order they were received; as such, the oldest write request associated with data that has not already been prearranged is next for prearrangement. If the next write request is associated with data that would cause the prearranged data to exceed the page size of non-volatile memory 108 (e.g., 64 kilobytes), then the already prearranged data is committed to non-volatile memory 108, and the data associated with the next write request is used as the first accumulation to be committed to the next page of non-volatile memory 108. In this way, the page size of the non-volatile memory 108 may be approached or equaled by the size of the prearranged data, but not exceeded in at least some examples.
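The following sketch illustrates, under stated assumptions (a 64-kilobyte page, a fixed 16-byte metadata entry per request, and in-memory bytearrays standing in for volatile and non-volatile memory), how write data and metadata might be accumulated in contiguous blocks, how the variable threshold defers a request that would overflow the page, and how the prearranged data is then committed in a single page-sized write with the metadata block at the beginning of the page.

```python
# A minimal staging sketch; sizes, class names, and the in-memory stand-ins
# for volatile and non-volatile memory are illustrative assumptions.
PAGE_SIZE = 64 * 1024
META_ENTRY_SIZE = 16          # assumed per-request metadata footprint

class StagingPage:
    def __init__(self):
        self.metadata = bytearray()    # contiguous metadata block
        self.write_data = bytearray()  # contiguous write-data block

    def would_overflow(self, data: bytes) -> bool:
        # Would staging this request push the combined blocks past one page?
        projected = (len(self.metadata) + META_ENTRY_SIZE
                     + len(self.write_data) + len(data))
        return projected > PAGE_SIZE

    def stage(self, logical_addr: int, data: bytes) -> None:
        # Record the logical address and the data's offset within the page.
        offset = len(self.write_data)
        self.metadata += logical_addr.to_bytes(8, "little") + offset.to_bytes(8, "little")
        self.write_data += data

    def as_page(self) -> bytes:
        # Metadata first (lower addresses), write data after, padded to one page.
        body = bytes(self.metadata) + bytes(self.write_data)
        return body + b"\x00" * (PAGE_SIZE - len(body))

non_volatile_pages = []      # stand-in for committed flash pages
current = StagingPage()

def handle_write(logical_addr: int, data: bytes) -> None:
    global current
    if current.would_overflow(data):
        # Variable threshold reached: commit what is already prearranged in a
        # single page-sized write, then start accumulating the next page.
        non_volatile_pages.append(current.as_page())
        current = StagingPage()
    current.stage(logical_addr, data)

for i in range(2000):
    handle_write(i, b"x" * 4096)
print(len(non_volatile_pages), "pages committed")
```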
In at least one example, an amount of volatile memory needed for prearranging the data is calculated based on a rate at which write requests are received and a speed at which data can be committed to the non-volatile memory. For example, if an average of 4 kilobytes of data is stored in volatile memory 106 for each write request, the total amount of memory that should be stored over a period of time can be calculated if the frequency of the write requests is known. Also, if data is committed to non-volatile memory 108 more slowly than it accumulates, the amount of buffer space needed may be calculated. This amount of buffer space can be divided into regions equal in size to a page of non-volatile memory 108, and these regions may be used as a circular queue. That is, once a region has been committed to non-volatile memory, that region may be placed at the end of the queue and may be overwritten when the region reaches the front of the queue. In at least one example, committing a region of data to non-volatile memory 108 may be performed simultaneously with prearranging the next regions in the queue.
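As a worked example of that calculation, the following sketch uses illustrative numbers: the request rate, commit rate, and sizing window are assumptions chosen for the example, not figures from the description; only the 4-kilobyte average write size and 64-kilobyte page size echo the examples above.

```python
import math

# Back-of-the-envelope sizing with assumed rates.
AVG_WRITE_SIZE = 4 * 1024            # bytes staged per write request
REQUEST_RATE = 10_000                # assumed write requests per second
COMMIT_RATE_BYTES = 30 * 1024 ** 2   # assumed bytes committed per second
PAGE_SIZE = 64 * 1024
WINDOW_SECONDS = 1.0                 # period over which the backlog is sized

inflow = AVG_WRITE_SIZE * REQUEST_RATE                       # bytes/second staged
backlog = max(0, int((inflow - COMMIT_RATE_BYTES) * WINDOW_SECONDS))
regions_needed = math.ceil(backlog / PAGE_SIZE) + 1          # plus the region being filled
print(f"{regions_needed} page-sized regions (~{regions_needed * PAGE_SIZE // 1024} KB of buffer)")

# The regions are reused as a circular queue: a region that has been committed
# rejoins the tail and may be overwritten when it reaches the front again.
region_index = 0
def next_region() -> int:
    global region_index
    current = region_index
    region_index = (region_index + 1) % regions_needed
    return current

print("first three regions used:", next_region(), next_region(), next_region())
```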
The hybrid memory module 104 may also comprise a power sensor in at least one example. The power sensor may comprise logic that detects an imminent or occurring power failure and consequently triggers a backup of volatile memory 106 to non-volatile memory 108 or a check to ensure that non-volatile memory 108 is already backing up or has already backed up volatile memory 106. For example, the power sensor may be coupled to a power supply or charging capacitor coupled to the hybrid memory module 104. If the supplied power falls below a threshold, the backup may be triggered. In this way, the data in volatile memory 106 may be protected during a power failure.
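A minimal sketch of the power-sensor behavior might look as follows; the voltage threshold, the sampling interface, and the backup routine are hypothetical stand-ins, since the description only requires that supplied power falling below a threshold trigger, or verify, a backup of volatile memory 106 to non-volatile memory 108.

```python
# Hypothetical names and threshold; a stand-in for the power-sensor logic.
POWER_THRESHOLD_VOLTS = 4.5   # assumed trip point

def backup_volatile_to_non_volatile() -> None:
    # Placeholder: copy the staged regions of volatile memory 106 into
    # non-volatile memory 108 so the data survives the power failure.
    print("backup triggered")

def on_power_sample(supply_volts: float, backup_in_progress: bool) -> None:
    # If supplied power falls below the threshold and no backup is underway,
    # trigger one; otherwise let the in-progress backup complete.
    if supply_volts < POWER_THRESHOLD_VOLTS and not backup_in_progress:
        backup_volatile_to_non_volatile()

on_power_sample(supply_volts=4.2, backup_in_progress=False)
```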
The hybrid memory module 104 and volatile memory 106 may act as a cache in at least one example. For example, should data be requested that has not yet been committed to non-volatile memory 108, the volatile memory 106 may be accessed to retrieve the requested data. In this way, an inventory of data may be maintained with data being marked stale or not stale, much like a cache.
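The cache-like role of the volatile memory 106 could be sketched as follows; the inventory structure, the stale flag, and the fall-back read are illustrative assumptions rather than elements of the module itself.

```python
# Hypothetical staging inventory; a stand-in for the cache-like bookkeeping.
staged_inventory = {}   # logical address -> {"data": bytes, "stale": bool}

def stage_for_commit(logical_addr: int, data: bytes) -> None:
    # Data awaiting commitment to non-volatile memory 108 is tracked here.
    staged_inventory[logical_addr] = {"data": data, "stale": False}

def mark_committed(logical_addr: int) -> None:
    # Once committed, the staged copy may be marked stale and later reclaimed.
    if logical_addr in staged_inventory:
        staged_inventory[logical_addr]["stale"] = True

def read_from_non_volatile(logical_addr: int) -> bytes:
    return b""  # placeholder for a page-based read of already committed data

def read(logical_addr: int) -> bytes:
    # Serve from volatile memory when a fresh, not-yet-committed copy exists.
    entry = staged_inventory.get(logical_addr)
    if entry is not None and not entry["stale"]:
        return entry["data"]
    return read_from_non_volatile(logical_addr)

stage_for_commit(0x2000, b"fresh")
print(read(0x2000))  # b'fresh', served from the staging area
```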
In another example, the page size of the non-volatile memory 108 may be 128 kilobytes. As such, metadata and write data may be accumulated in volatile memory 106 until the threshold of 128 kilobytes of combined data is reached. Because metadata may be smaller than write data, 8 kilobytes of the 128 kilobytes may comprise metadata while 120 kilobytes of the 128 kilobytes may comprise write data. In various examples, other ratios may occur.
In at least one example, the metadata block is stored before (at lower numbered addresses) the write data block in volatile memory 106. As such, when the combined data is committed to non-volatile memory 108, metadata will appear at the beginning (at lower numbered addresses) of each page of the non-volatile memory 108. In another example, the metadata is placed after the write data. As such, the metadata will appear at the end of each page of non-volatile memory 108.
At 206, the data may be committed to non-volatile memory 108 as prearranged. The data may be committed in a single write operation. In at least one example, the threshold is a variable. That is, the amount of data accumulated that triggers storage to non-volatile memory is not constant. Rather, it changes based on whether further data would cause the size of the prearranged data to exceed a page size of the non-volatile memory. For example, the write requests may be prearranged in the order they were received; as such, the oldest write request associated with data that has not already been prearranged is next for prearrangement. If the next write request is associated with data that would cause the prearranged data to exceed the page size of non-volatile memory 108 (e.g., 64 kilobytes), then the already prearranged data is committed to non-volatile memory 108, and the data associated with the next write request is used as the first accumulation to be committed to the next page of non-volatile memory 108. In this way, the page size of the non-volatile memory 108 may be approached or equaled by the size of the prearranged data, but not exceeded in at least some examples.
In at least one example, an amount of volatile memory needed for prearranging the data is calculated based on a rate at which write requests are received and a speed at which data can be committed to the non-volatile memory. For example, if an average of 4 kilobytes of data is stored in volatile memory 106 for each write request, the total amount of memory that should be stored over a period of time can be calculated if the frequency of the write requests is known. Also, if data is committed to non-volatile memory 108 more slowly than it accumulates, the amount of buffer space needed may be calculated. This amount of buffer space can be divided into regions equal in size to a page of non-volatile memory 108, and these regions may be used as a circular queue. That is, once a region has been committed to non-volatile memory, that region may be placed at the end of the queue and may be overwritten when the region reaches the front of the queue. In at least one example, committing a region of data to non-volatile memory 108 may be performed simultaneously with prearranging the next regions in the queue.
In DRAM 306, each byte may be individually addressed. However, in flash memory 308, data is accessed in pages. That is, in order to read a byte of data, the page of data in which the byte is located should be loaded. Similarly, in order to write a byte of data, the page of data in which the byte should be written should be loaded. As such, it is economical to write entire pages of flash memory 308 together in one write operation. Specifically, the number of accesses to the page may be reduced resulting in reduced input/output wear of the flash memory 308. Furthermore, in at least one example, a program or operating system may only be compatible with DRAM 306 and therefore attempt to address individual bytes in the flash memory 308. In such a scenario, the prearranging of data may help the flash memory 308 be compatible with such programs or operating systems by allowing for the illusion of byte-addressability of flash memory 308.
The DRAM 306 may act as a staging area for the flash memory 308. That is, data may be prearranged, or ordered, in the DRAM 306 before being stored in the flash memory 308 in the same arrangement or order. In at least one example, the data prearranged in the DRAM 306 comprises write data and metadata. The write data may comprise data associated with write requests. The metadata may comprise an address mapping of the write data. For example, the address mapping may comprise a logical address to physical address mapping. When the data is requested, it may be requested by logical address. The metadata may be consulted to determine the physical address associated with the logical address in the request, and the requested data may be retrieved from the physical address. The metadata may be stored contiguously, i.e., in a sequential set of addresses, and the write data may be stored contiguously as well (in a separate set of sequential addresses). In at least one example, the size of these contiguous blocks of data may be based on a page size of the flash memory 308. For example, a page size of flash memory 308 may be 64 kilobytes. As such, metadata and write data may be accumulated in DRAM 306 in their respective contiguous blocks until the threshold of 64 kilobytes of combined data is reached. Because metadata may be smaller than write data, 4 kilobytes of the 64 kilobytes may comprise metadata while 60 kilobytes of the 64 kilobytes may comprise write data. In various examples, other ratios may occur.
In another example, the page size of the flash memory 308 may be 128 kilobytes. As such, metadata and write data may be accumulated in DRAM 306 until the threshold of 128 kilobytes of combined data is reached. Because metadata may be smaller than write data, 8 kilobytes of the 128 kilobytes may comprise metadata while 120 kilobytes of the 128 kilobytes may comprise write data. In various examples, other ratios may occur.
In at least one example, the metadata block is stored before (at lower numbered addresses) the write data block in DRAM 306. As such, when the combined data is committed to flash memory 308, metadata will appear at the beginning (at lower numbered addresses) of each page of the flash memory 308. In another example, the metadata is placed after the write data. As such, the metadata will appear at the end of each page of flash memory 308.
Once the threshold amount of data has been accumulated and prearranged in DRAM 306, the data may be committed to flash memory 308 as prearranged. The data may be committed in a single write operation. In at least one example, the threshold is a variable. That is, the amount of data accumulated that triggers storage to flash memory 308 is not constant. Rather, it changes based on whether further data would cause the size of the prearranged data to exceed a page size of the flash memory 308. For example, the write requests may be prearranged in the order they were received; as such, the oldest write request associated with data that has not already been prearranged is next for prearrangement. If the next write request is associated with data that would cause the prearranged data to exceed the page size of flash memory 308 (e.g., 64 kilobytes), then the already prearranged data is committed to flash memory 308, and the data associated with the next write request is used as the first accumulation to be committed to the next page of flash memory 308. In this way, the page size of the flash memory 308 may be approached or equaled by the size of the prearranged data, but not exceeded in at least some examples.
In at least one example, an amount of DRAM 306 needed for prearranging the data is calculated based on a rate at which write requests are received and a speed at which data can be committed to the flash memory 308. For example, if an average of 4 kilobytes of data is stored in DRAM 306 for each write request, the total amount of memory that should be stored over a period of time can be calculated if the frequency of the write requests is known. Also, if data is committed to flash memory 308 more slowly than it accumulates, the amount of buffer space needed may be calculated. This amount of buffer space can be divided into regions equal in size to a page of flash memory 308, and these regions may be used as a circular queue. That is, once a region has been committed to flash memory 308, that region may be placed at the end of the queue and may be overwritten when the region reaches the front of the queue. In at least one example, committing a region of data to flash memory 308 may be performed simultaneously with prearranging the next regions in the queue.
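To illustrate the overlap mentioned above, the following sketch uses Python threads as a stand-in for the controller: one region of the circular queue is committed to flash in the background while write data continues to be prearranged into the next region. The thread model, region count, and helper names are assumptions made for illustration, not the controller's actual mechanism.

```python
# Hypothetical stand-in for overlapping a commit with further prearranging.
import threading

PAGE_SIZE = 64 * 1024
regions = [bytearray(PAGE_SIZE) for _ in range(4)]   # circular queue of regions
committed_pages = []
commit_lock = threading.Lock()

def commit_region(index: int) -> None:
    # Simulated single page-sized write of a fully prearranged region to flash.
    with commit_lock:
        committed_pages.append(bytes(regions[index]))

def prearrange_into(index: int, chunks) -> None:
    # Fill the next region of the queue while an earlier region is committing.
    offset = 0
    for chunk in chunks:
        regions[index][offset:offset + len(chunk)] = chunk
        offset += len(chunk)

committer = threading.Thread(target=commit_region, args=(0,))
committer.start()                                    # commit region 0 in the background...
prearrange_into(1, [b"a" * 4096, b"b" * 4096])       # ...while region 1 is being prearranged
committer.join()
print(len(committed_pages), "region committed while staging continued")
```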
The system described above may be implemented on any particular machine or computer with sufficient processing power, memory resources, and throughput capability to handle the necessary workload placed upon the computer.
In various embodiments, the storage 488 comprises a non-transitory storage device such as volatile memory (e.g., RAM), non-volatile storage (e.g., flash memory, hard disk drive, CD-ROM, etc.), or combinations thereof. The storage 488 comprises computer-readable software 484 that is executed by the processor 482. One or more of the actions described herein are performed by the processor 482 during execution of the software 484.
The above discussion is meant to be illustrative of the principles and various embodiments of the present invention. Numerous variations and modifications will become apparent to those skilled in the art once the above disclosure is fully appreciated. It is intended that the following claims be interpreted to embrace all such variations and modifications.