The invention generally relates to storage devices and more particularly to a method for improving memory lifetime using data swapping.
A storage device, specifically random-access memory (RAM), is one of the most costly components of embedded devices. Hence, efficient management of memory and saving of storage space in RAM is an important requirement for embedded devices. Several techniques exist to save storage space in RAM; on-demand swapping of code and data is one of them. With demand swapping, only part of the code and data is kept in RAM, while the complete code and data are stored in a secondary memory of the embedded device. NAND, NOR, eMMC, eSD, SSD, UFS or their variants are the most common secondary memories used in embedded devices. A conventional method of demand swapping is described below.
Both the secondary memory and the RAM are split into pages. Typically, the complete code and data stored in the secondary memory is called the virtual space, and the virtual space is split into virtual pages. The pages stored in the RAM are called physical pages. A memory management unit (MMU) of the processor maintains a mapping table, known as the MMU table, containing information on which virtual page corresponds to which physical page. The system starts with a minimum number of code and data pages in RAM. When an application accesses a page that is not present in RAM, the processor raises a page fault exception. A swap manager, also known as a page fault exception handler or paging manager, swaps in the required page from the secondary memory. During the swap-in, the swap manager may swap out an old data page from the RAM to create space for the incoming page. The swapped-out data pages must be saved in the secondary memory so that modified data pages can be read back when required.
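For illustration only, a minimal sketch of such a conventional swap manager is given below in C. The helper names (read_page_from_flash, write_page_to_flash, mmu_map, mmu_unmap), the fixed page size and the round-robin victim selection are assumptions made for this sketch and are not part of any particular prior-art implementation. The point to note is that every evicted page is written back to the secondary memory, whether or not it has been modified.

    #include <stdint.h>

    #define PAGE_SIZE      4096u
    #define NUM_RAM_PAGES  16u          /* physical pages available in RAM */

    /* Hypothetical low-level flash and MMU helpers (NAND/NOR/eMMC driver). */
    extern void read_page_from_flash(uint32_t virt_page, void *dst);
    extern void write_page_to_flash(uint32_t virt_page, const void *src);
    extern void mmu_map(uint32_t virt_page, void *phys_addr);
    extern void mmu_unmap(uint32_t virt_page);

    static uint8_t  ram_pool[NUM_RAM_PAGES][PAGE_SIZE];
    static uint32_t resident_virt[NUM_RAM_PAGES]; /* virtual page held by each RAM slot */
    static uint32_t next_victim;                  /* trivial round-robin replacement    */

    /* Conventional handler: every swapped-out page is written back to the
     * secondary memory, which is what wears it out. Assumes every RAM slot
     * already holds some virtual page.                                      */
    void page_fault_handler(uint32_t faulting_virt_page)
    {
        uint32_t slot = next_victim;
        next_victim = (next_victim + 1u) % NUM_RAM_PAGES;

        /* Swap out the old occupant unconditionally. */
        write_page_to_flash(resident_virt[slot], ram_pool[slot]);
        mmu_unmap(resident_virt[slot]);

        /* Swap in the requested page and map it. */
        read_page_from_flash(faulting_virt_page, ram_pool[slot]);
        resident_virt[slot] = faulting_virt_page;
        mmu_map(faulting_virt_page, ram_pool[slot]);
    }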
All memory devices have a lifetime expressed as the number of times they can be written. Consider a single level cell (SLC) NAND used as backup memory with 2 blocks (128 pages), a virtual space of 1 page, an SLC NAND lifetime of 100,000 program/erase cycles, and a page fault occurring every second: this triggers a write to the NAND every second. If the writes are distributed evenly across the two blocks, the NAND lifetime is only about 148 days. This is not acceptable for any embedded device.
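The 148-day figure follows directly from these numbers, as the back-of-envelope calculation below shows. The constants simply restate the assumptions of the example; 64 pages per SLC NAND block is a typical value assumed here.

    #include <stdio.h>

    #define PAGES_PER_BLOCK   64u       /* typical SLC NAND block (assumed)      */
    #define BACKUP_BLOCKS     2u
    #define PE_CYCLES         100000u   /* program/erase endurance per block     */
    #define WRITES_PER_SECOND 1u        /* one page fault, hence one write, per second */

    int main(void)
    {
        unsigned pages = PAGES_PER_BLOCK * BACKUP_BLOCKS;            /* 128 pages */
        /* With writes spread evenly, each block is erased once per 'pages'
         * page writes, so the total number of page writes before wear-out is: */
        unsigned long long total_writes = (unsigned long long)PE_CYCLES * pages;
        unsigned long long seconds      = total_writes / WRITES_PER_SECOND;
        printf("lifetime ~ %.0f days\n", seconds / 86400.0);          /* ~148 days */
        return 0;
    }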
According to one known solution to the above problem, additional space is reserved in the secondary memory to increase the lifetime of the memory device. In the NAND example above, if four blocks are used, the lifetime becomes about 296 days; with eight blocks, about 592 days. However, the disadvantage of this solution is that it reduces the memory capacity available to the user.
According to another known solution, the processor has the additional capability to mark a page as dirty when the page is modified. During swap-out, the swap manager checks the MMU table and writes only the dirty data pages to the secondary memory. This reduces the number of pages written to the secondary memory and improves lifetime. However, the MMU capability to identify dirty pages is expensive, and the MMUs of popular ARM cores such as ARM7 and ARM9 cannot mark dirty pages. Moreover, a page marked dirty in the processor MMU table is not necessarily still in use by an application.
According to yet another known solution, all physical pages present in RAM are marked read-only in the MMU table. When a page fault occurs, the swap manager checks whether the requested access is a write to a page marked read-only; if so, the page is marked dirty and the read-only permission is removed from the MMU table. When swapping out pages, the swap manager writes only the dirty data pages to the secondary memory. The disadvantage of this solution is that page faults are raised for non-dirty pages even though the pages are already present in RAM, which adds overhead to the overall performance of the embedded system. Furthermore, a page identified as dirty in this manner is not necessarily still in use by an application.
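A minimal sketch of this read-only marking technique is given below, assuming hypothetical MMU helpers (mmu_set_read_only, mmu_set_read_write) and a software-maintained dirty flag per RAM slot; it is not the implementation of any specific prior-art system.

    #include <stdbool.h>
    #include <stdint.h>

    #define NUM_RAM_PAGES 16u

    /* Hypothetical MMU helpers; a real system would program page-table entries. */
    extern void mmu_set_read_only(uint32_t virt_page);
    extern void mmu_set_read_write(uint32_t virt_page);
    extern void write_page_to_flash(uint32_t virt_page, const void *src);

    static bool     page_dirty[NUM_RAM_PAGES];
    static uint32_t resident_virt[NUM_RAM_PAGES];
    extern uint8_t  ram_pool[NUM_RAM_PAGES][4096];

    /* Called on a permission fault for the RAM slot holding the faulting page:
     * the page is resident but currently mapped read-only.                    */
    void write_fault_handler(uint32_t slot)
    {
        page_dirty[slot] = true;                     /* remember the first write */
        mmu_set_read_write(resident_virt[slot]);     /* stop faulting on writes  */
    }

    /* Called when the slot must be reused for another virtual page. */
    void swap_out(uint32_t slot)
    {
        if (page_dirty[slot])                        /* write back only if dirty */
            write_page_to_flash(resident_virt[slot], ram_pool[slot]);
        page_dirty[slot] = false;
        mmu_set_read_only(resident_virt[slot]);      /* re-arm for the next page */
    }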
Hence, there is a well-felt need for a system and method for effectively reducing the number of writes while swapping data into or out of memory.
The subject matter disclosed herein is not limited to embodiments that solve any disadvantages or that operate only in environments such as those described above. Rather, this background is only provided to illustrate one exemplary technology area where some embodiments described herein may be practiced.
A memory management system for managing memory of a processing system having a primary memory and at least one secondary memory unit is disclosed. The memory is managed by reducing the number of writes required when swapping one or more relevant pages of an application from the primary memory to the at least one secondary memory. The system comprises a dynamic memory manager for allocating memory to an application from a memory pool. The dynamic memory manager has a first table containing the virtual addresses of allocated memory and the size of each allocated memory chunk. The system also comprises a swap manager having a second table containing the physical addresses of the pages of the primary memory used by the application and an indication of whether these pages are allocated to an application. The swap manager is configured to read and compare primary memory pages with the corresponding pages in the secondary memory to determine whether a primary memory page has changed, and to update the corresponding secondary memory page when a change is determined. In some embodiments, the system further comprises a memory management unit having a third table containing mapping information between the physical addresses and the virtual addresses of the one or more physical pages used by the application and, in some embodiments, information on whether a specific page is dirty or modified.
A method of controlling memory in a processing system comprising a primary memory and a secondary memory is also disclosed. The method comprises allocating memory to an application from a memory pool, based on a first table containing the virtual addresses of allocated memory and the size of each allocated memory chunk; and reading and comparing primary memory pages with the corresponding pages in the secondary memory to determine whether a primary memory page has changed, and updating the corresponding secondary memory page when a change is determined, based on a second table containing the physical addresses of the pages of the primary memory and an indication of whether these pages are allocated to an application. In some embodiments, the method further comprises swapping the relevant pages from the primary memory to the secondary memory and discarding non-relevant pages, thereby improving memory lifetime by reducing the number of writes required when swapping one or more relevant pages from the primary memory to the secondary memory.
The features of the present invention will become better understood when the following description is read with reference to the accompanying drawings.
A method of optimizing memory in a processing system is disclosed. According to an aspect, the memory is managed by swapping one or more relevant pages from a primary memory to a secondary memory and discarding the non-relevant pages, thereby improving storage lifetime by reducing the number of writes required during swapping. According to an embodiment, the primary memory corresponds to RAM.
The memory management system 100 may comprise a dynamic memory manager 106, a swap manager 110 and a memory management unit 116.
The dynamic memory manager 106 is configured to allocate memory from a memory pool to an application trying to access one or more physical pages. The dynamic memory manager 106 may comprise a first table 108 containing the virtual address and size of each memory chunk currently used by the application.
The swap manager 110 may comprise a second table 114 containing the physical addresses of the physical pages in the primary memory and information on whether each page is allocated. The swap manager 110 may further comprise a swap processor 112 configured to swap data between the primary memory 102 and the secondary memory 104.
The memory management unit 116 may comprise a third table 118 containing mapping information between the physical addresses and the virtual addresses of the one or more virtual pages used by the application. The table 118 may also store information such as whether a page is dirty, i.e. modified by an application.
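Purely as an illustration, the three tables may be pictured as the C structures below. The field names and sizes are assumptions chosen for this sketch; the actual layout of the tables 108, 114 and 118 may differ.

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>

    #define MAX_CHUNKS    64u
    #define NUM_RAM_PAGES 16u

    /* First table 108 (dynamic memory manager): one entry per allocated chunk. */
    struct alloc_entry {
        uintptr_t virt_addr;    /* start of the allocated chunk (virtual) */
        size_t    size;         /* size of the chunk in bytes             */
    };
    struct first_table {
        struct alloc_entry chunk[MAX_CHUNKS];
        unsigned           count;
    };

    /* Second table 114 (swap manager): one entry per physical page in RAM. */
    struct second_table_entry {
        uintptr_t phys_addr;    /* address of the physical page               */
        bool      allocated;    /* is the page currently allocated by an app? */
    };

    /* Third table 118 (memory management unit): virtual-to-physical mapping,
     * optionally with a dirty/modified flag where the MMU supports it.       */
    struct third_table_entry {
        uintptr_t virt_addr;
        uintptr_t phys_addr;
        bool      dirty;        /* optional; absent on e.g. ARM7/ARM9 MMUs */
    };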
According to an embodiment, the swap manager 110 is further configured to receive updated allocation information from the dynamic memory manager 106.
According to an embodiment, the relevant pages are the pages that are allocated by the application in the primary memory 102. The system 100 is further configured to update the same physical page in the secondary memory 104, or to store a fresh copy of the updated physical page in the secondary memory 104 and delete the old copy. As the physical pages are transferred out to the secondary memory 104, memory space in the primary memory 102 is freed.
According to yet another embodiment, the step of identifying allocated modified pages 206 includes using the memory management unit table 118, or reading the pages from the secondary memory 104 and comparing them with the copies stored in the primary memory 102.
According to yet another embodiment, the step of swapping 208 includes transferring only the allocated modified pages to the secondary memory 104 and updating the memory management table 118.
According to yet another embodiment, the step of swapping 208 also includes recording in the secondary memory 104 the modifications made by the application while accessing the physical page in the primary memory 102.
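A minimal sketch of such a compare-based swap-out is given below, assuming hypothetical accessors for the tables 114 and 118 and for the secondary memory; only an allocated page whose content differs from its copy in the secondary memory 104 is written back, and all other pages are simply discarded.

    #include <stdbool.h>
    #include <stdint.h>
    #include <string.h>

    #define PAGE_SIZE 4096u

    /* Hypothetical accessors and table lookups; the names are illustrative only. */
    extern void     read_page_from_flash(uint32_t virt_page, void *dst);
    extern void     write_page_to_flash(uint32_t virt_page, const void *src);
    extern bool     page_is_allocated(uint32_t slot);      /* lookup in table 114 */
    extern uint32_t resident_virt_page(uint32_t slot);     /* lookup in table 118 */
    extern uint8_t *ram_page(uint32_t slot);

    /* Swap out one physical page, writing to the secondary memory only when the
     * page is both allocated and actually modified.                             */
    void swap_out_page(uint32_t slot)
    {
        static uint8_t shadow[PAGE_SIZE];   /* scratch buffer for the flash copy */
        uint32_t virt = resident_virt_page(slot);

        if (!page_is_allocated(slot))
            return;                         /* non-relevant page: simply discard */

        read_page_from_flash(virt, shadow);
        if (memcmp(ram_page(slot), shadow, PAGE_SIZE) != 0)
            write_page_to_flash(virt, ram_page(slot));     /* modified: update copy */
        /* identical content: skip the write and save a program/erase cycle        */
    }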
The swap manager 110 maintains the second table 114, which records, for every physical page address currently in the RAM, whether the page is allocated or not. According to this embodiment, a page is classified as an allocated physical page if it is still allocated by the application accessing it. If a memory page is dirty, the swap manager 110 writes the memory page from the RAM (primary memory 102) to the secondary memory 104 when swapping the page out. Each virtual address chunk, {{V1, V1+X1}, {V2, V2+X2}, {V3, V3+X3} . . . {Vn, Vn+Xn}}, may contain one or more physical pages completely within it.
It is to be noted that a virtual address chunk may not start at a physical page boundary, nor need its size be a multiple of the page size. For the virtual address chunk {virtual_addr, virtual_addr+chunk_size}, the physical pages {start_addr, number_of_page} which lie completely within the chunk can be found with a procedure such as find_page_and_address, a possible sketch of which is given below.
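One possible implementation of find_page_and_address is sketched below, assuming a 4 KB page size; it rounds the chunk start up and the chunk end down to page boundaries and counts the whole pages in between. It is an illustrative sketch, not a reproduction of any particular pseudo code.

    #include <stddef.h>
    #include <stdint.h>

    #define PAGE_SIZE 4096u

    /* Round the chunk start up and the chunk end down to page boundaries;
     * everything in between is completely contained in the chunk.          */
    void find_page_and_address(uintptr_t virtual_addr, size_t chunk_size,
                               uintptr_t *start_addr, unsigned *number_of_pages)
    {
        uintptr_t first = (virtual_addr + PAGE_SIZE - 1u) & ~(uintptr_t)(PAGE_SIZE - 1u);
        uintptr_t last  = (virtual_addr + chunk_size)     & ~(uintptr_t)(PAGE_SIZE - 1u);

        *start_addr      = first;
        *number_of_pages = (last > first) ? (unsigned)((last - first) / PAGE_SIZE) : 0u;
    }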
When a memory chunk is freed, the dynamic memory manager 106 performs the following steps (the free and allocate paths are sketched in code after the allocation steps below):
a. Freeing the memory chunk into the free memory pool managed by the dynamic memory manager 106;
b. Finding the start address and number of pages which are completely within the memory chunk using find_page_and_address; and
c. Notifying the swap manager 110 of the start page address and the number of pages, and marking these pages as non-allocated.
When a memory chunk is allocated, the dynamic memory manager 106 performs the following steps:
a. Allocating the memory chunk from the free memory pool managed by the dynamic memory manager 106;
b. Finding the start address and number of pages which are completely within the memory chunk using find_page_and_address; and
c. Notifying the swap manager 110 of the start page address and the number of pages, and marking these pages as allocated.
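A combined sketch of the free and allocate paths is given below. The pool and notification interfaces (pool_free, pool_alloc, swap_manager_mark) are hypothetical names used for illustration; swap_manager_mark stands for the notification that updates the second table 114.

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>

    /* Hypothetical pool and notification interfaces; names are illustrative. */
    extern void  pool_free(void *ptr, size_t size);
    extern void *pool_alloc(size_t size);
    extern void  swap_manager_mark(uintptr_t start_addr, unsigned num_pages,
                                   bool allocated);          /* updates table 114 */
    extern void  find_page_and_address(uintptr_t virtual_addr, size_t chunk_size,
                                       uintptr_t *start_addr, unsigned *num_pages);

    /* Free path: return the chunk to the pool and tell the swap manager which
     * whole pages inside it no longer need to be written back on swap-out.    */
    void dmm_free(void *ptr, size_t size)
    {
        uintptr_t start;
        unsigned  pages;

        pool_free(ptr, size);
        find_page_and_address((uintptr_t)ptr, size, &start, &pages);
        swap_manager_mark(start, pages, false);       /* pages are non-allocated */
    }

    /* Allocate path: take a chunk from the pool and mark the whole pages inside
     * it as allocated, so their contents are preserved across swapping.        */
    void *dmm_alloc(size_t size)
    {
        uintptr_t start;
        unsigned  pages;
        void     *ptr = pool_alloc(size);

        if (ptr != NULL) {
            find_page_and_address((uintptr_t)ptr, size, &start, &pages);
            swap_manager_mark(start, pages, true);    /* pages are allocated     */
        }
        return ptr;
    }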
It is an advantage of the invention that physical pages which do not need to be saved in the secondary memory are identified, whereby the technique reduces the number of writes to the secondary memory. This helps both in improving the secondary memory lifetime and in reducing the physical page miss handling time for data.
While specific language has been used to describe the invention, any limitations arising on account of the same are not intended. As would be apparent to a person skilled in the art, various working modifications may be made to the method in order to implement the inventive concept as taught herein.
Priority application: 2531/DEL/2010 (IN, national), filed October 2010.
PCT filing: PCT/EP2011/068568 (WO), filed Oct. 24, 2011; 371(c) date May 21, 2013.