Systems and methods for improving garbage collection and wear leveling performance in data storage systems

Information

  • Patent Grant
  • Patent Number
    10,417,123
  • Date Filed
    Monday, September 16, 2013
  • Date Issued
    Tuesday, September 17, 2019
Abstract
Disclosed embodiments are directed to systems and methods for improving garbage collection and wear leveling performance in data storage systems. The embodiments can improve the efficiency of static wear leveling by picking the best candidate block for static wear leveling and/or postponing static wear leveling on certain candidate blocks. In one embodiment, one or more source blocks for a static wear leveling operation are selected based at least on whether the one or more blocks have a low P/E count and contain static data, such as data that has been garbage collected.
Description
BACKGROUND
Technical Field

This disclosure relates to data storage systems. In particular, this disclosure relates to systems and methods for improving garbage collection and wear leveling performance in data storage systems.


Description of Related Art

Data storage systems execute many operations in the course of their normal operation. For example, data storage systems execute read and write commands requested by a host system, as well as internal operations such as garbage collection and wear leveling. Some internal operations may require significant resources for execution. Accordingly, there is a need to improve execution of internal operations.





BRIEF DESCRIPTION OF THE DRAWINGS

Systems and methods which embody the various features of the invention will now be described with reference to the following drawings, in which:



FIG. 1 illustrates a combination of a host system and a data storage system that implements garbage collection and/or static wear leveling according to one embodiment of the invention.



FIG. 2 is a graph that illustrates counts of blocks of a solid-state storage system relative to the P/E counts of the blocks according to one embodiment of the invention.



FIG. 3 is a graph that illustrates conditions for performing static wear leveling according to one embodiment of the invention.



FIG. 4 is a graph that illustrates conditions for performing static wear leveling according to one embodiment of the invention.



FIG. 5 is a flow diagram illustrating a process for completing a garbage collection operation according to one embodiment of the invention.





DETAILED DESCRIPTION

While certain embodiments are described, these embodiments are presented by way of example only, and are not intended to limit the scope of the disclosure. Indeed, the novel methods and systems described herein may be embodied in a variety of other forms. Furthermore, various omissions, substitutions and changes in the form of the methods and systems described herein may be made without departing from the spirit of the disclosure.


Overview


Data storage systems can execute host commands and internal operations in the course of their normal operation. For example, garbage collection may be performed on memory blocks that contain both valid and invalid data. When a memory block is selected for garbage collection, the garbage collection operation copies valid data within the memory block to a new location in memory and then erases the entire memory block, making the entire block available for future data writes. Therefore, the amount of memory freed by the garbage collection process depends on the number of invalid pages within the memory blocks selected for garbage collection.


In addition, static wear leveling, which can be considered a sub-part or special case of garbage collection, can be used in solid-state storage systems to prolong their lifecycle. A wear leveling operation may involve moving data content in a first block of memory to a second block of memory that has reached a certain erase level or count because of a recent erase operation. The first block of memory, which has a lower P/E level or count than that of the second block, is then erased and made available for a future write operation. This has the effect of directing future wear toward the less worn first block, and thus the overall process of wear leveling helps ensure that erase operations are evenly spread across blocks of memory in a solid-state storage system. Since each erase operation increases the wear of a block by incrementally reducing the block's ability to properly retain data content, static wear leveling helps prevent certain blocks of memory from receiving an excessive number of erase operations relative to other blocks and thus experiencing data failures much earlier than other blocks.


Static wear leveling, however, can be an expensive internal memory activity: it frees a block with a low program/erase (P/E) count but does not free new space. The goal of some static wear leveling algorithms is to keep the P/E counts of all the blocks in a solid-state memory within a window, sometimes referred to as a P/E window. For example, the P/E count of the least worn block should be kept within a certain number of the P/E count of the most worn block. This is usually done by picking the one or more blocks with the minimum P/E count to go through static wear leveling.


The efficiency of static wear leveling can be measured in one or more ways. One efficiency metric can be the P/E count difference between a selected block and the destination block to which data is being relocated. A higher P/E count difference may indicate better efficiency, since it can mean that infrequently overwritten data stored in the selected block is relocated to a destination block with a relatively higher P/E count. Another efficiency metric can be the data age of the data stored in a selected block. In one embodiment, a block is selected for static wear leveling when there is an indication that data stored to the block is infrequently overwritten by a host system, meaning that the selected block stores data with a relatively higher data age. A relatively higher data age thus can provide a useful indication that if the data stored in the selected block is relocated to a destination block having a relatively higher P/E count, the relocated data is not likely to be overwritten by the host system in the near future.


Some embodiments of this disclosure are directed to systems and methods for improving garbage collection and wear leveling performance. Some embodiments improve the efficiency of static wear leveling by: (1) picking the best candidate block for static wear leveling, and/or (2) postponing static wear leveling on certain candidate blocks. For instance, a block that has a relatively higher P/E count, but is less likely to be overwritten by a host system in the short-term, can be selected as a candidate for a static wear leveling operation rather than a block having a relatively lower P/E count but which is more likely to be overwritten in the short-term. In addition, by postponing static wear leveling on candidate blocks containing data that is likely to be overwritten by a host system in the short-term, the need to invalidate and relocate data can be eliminated in some cases.


Some embodiments of this disclosure are further directed to measuring the data age of data and identifying static or dynamic data based on the data age. In this disclosure, data that is or may be frequently overwritten by a host system can be referred to as “dynamic” and/or “hot” data (e.g., data that has been recently written), and data that is or may be infrequently overwritten can be referred to as “static” and/or “cold” data (e.g., data that has been garbage collected). Whether data is dynamic or static can provide an indication of when the data may likely be overwritten by the host system, and accordingly which candidate block to select for a static wear leveling operation and whether the static wear leveling may be beneficially postponed on a candidate block.


System Overview



FIG. 1 illustrates a combination 100 of a host system 110 and a storage system 120 that implements garbage collection and/or static wear leveling according to one embodiment of the invention. As is shown, the storage system 120 (e.g., hybrid hard drive, solid state drive, etc.) includes a controller 130 and a non-volatile memory array 140, which comprises one or more blocks of storage, identified as Block “A” (142) through Block “N”. Each block comprises a plurality of flash pages (F-pages). For example, Block A (142) of FIG. 1 includes a plurality of F-pages, identified as F-Pages A (143), B, through N. In some embodiments, each “block” is a smallest grouping of memory pages or locations of the non-volatile memory array 140 that are erasable in a single operation or as a unit, and each “F-page” or “page” is a smallest grouping of memory cells that can be programmed in a single operation or as a unit. Other embodiments may use blocks and pages that are defined differently.


The non-volatile memory array 140 may comprise an array of non-volatile memory, such as flash integrated circuits, Chalcogenide RAM (C-RAM), Phase Change Memory (PC-RAM or PRAM), Programmable Metallization Cell RAM (PMC-RAM or PMCm), Ovonic Unified Memory (OUM), Resistance RAM (RRAM), NAND memory (e.g., single-level cell (SLC) memory, multi-level cell (MLC) memory, or any combination thereof), NOR memory, EEPROM, Ferroelectric Memory (FeRAM), Magnetoresistive RAM (MRAM), other discrete NVM (non-volatile memory) chips, or any combination thereof. In some embodiments, the data storage system 120 can further comprise other types of storage, such as one or more magnetic media storage modules or other types of storage modules. Moreover, although embodiments of this disclosure may be described in the context of non-volatile memory arrays, the systems and methods of this disclosure can also be useful in other storage systems like hard drives, shingled disk drives, and hybrid disk drives that may have both solid-state storage and magnetic storage components. As such, while this disclosure refers to certain internal operations that are typically associated with solid-state storage systems (e.g., “wear leveling” and “garbage collection”), analogous operations in other storage systems can also take advantage of this disclosure.


The controller 130 can be configured to receive data and/or storage access commands from a storage interface module 112 (e.g., a device driver) of the host system 110. Storage access commands communicated by the storage interface module 112 can include write data and read data commands issued by the host system 110. Read and write commands can specify a logical address (e.g., logical block addresses or LBAs) used to access the data storage system 120. The controller 130 can execute the received commands in the non-volatile memory array 140.


Data storage system 120 can store data communicated by the host system 110. In other words, the data storage system 120 can act as memory storage for the host system 110. To facilitate this function, the controller 130 can implement a logical interface. The logical interface can present the memory of the data storage system to the host system 110 as a set of logical addresses (e.g., contiguous addresses) where user data can be stored. Internally, the controller 130 can map logical addresses to various physical locations or addresses in the non-volatile memory array 140 and/or other storage modules.


The controller 130 includes a garbage collection/wear leveling module 132 configured to perform garbage collection and wear leveling. As used herein, a static wear leveling operation can be considered a sub-part of, or a special case of, an overall garbage collection operation. In some embodiments, the garbage collection/static wear leveling module 132 performs solely static wear leveling while, in other embodiments, it performs garbage collection and/or static wear leveling of at least a portion of the non-volatile memory array 140. In one embodiment, the garbage collection/wear leveling module 132 may prevent abnormal increases or spikes in write amplification while performing static wear leveling using the approaches described in this disclosure.


In one embodiment, the garbage collection/static wear leveling module 132 can select blocks of the non-volatile memory array 140 on which garbage collection and/or static wear leveling is performed. Such block picking functionality may be performed based at least in part on information related to data age and/or wear leveling. The blocks may be picked in a way that increases the amount of free space through the life of the data storage system 120 and promotes or guarantees that blocks stay within a range of P/E counts, which may maximize the data storage life of the non-volatile memory array 140.


Data Age


The garbage collection/static wear leveling module 132 and/or the controller 130 can determine or estimate the data age of data stored in the non-volatile memory array 140 based at least on when the controller 130 wrote the data to the non-volatile memory array 140 (e.g., according to instructions from the host system 110). In one embodiment, when the controller 130 receives a command to write data, the controller 130 can execute the write command in one or more blocks of the non-volatile memory array 140. Upon successful execution of the write command, the newly written data can be associated with or assigned a lowest data age value, such as a data age of 0. Subsequently, the data age of this data may increase over time until the controller 130 executes a command from the host system 110 to erase this data. Internal memory operations (e.g., garbage collection or static wear leveling) by the storage system 120 may not reset the data age associated with the data in some implementations.


In one embodiment, when the controller 130 writes a block with data from the host system 110, a timestamp is stored as a block attribute of the block or saved to an area of the memory reserved for system information. The timestamp can be stored relative to, for instance, a counter maintained by the controller 130, such as a counter that tracks the power-on time of the storage system 120 since manufacturing. The counter can have a resolution of one or more seconds or a fraction of a second. Based on this counter, the data age of the data stored in blocks of the non-volatile memory array 140 can be determined using Equation 1:

DataAgeBlock=TimeNow−TimeStampBlock  (Equation 1)

where TimeNow corresponds to a time when the data age of the data stored to a block is determined, TimeStampBlock corresponds to the time indicated by the timestamp for the block, and DataAgeBlock corresponds to the data age of the stored data.
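
As a minimal illustrative sketch (in Python, with hypothetical names; the patent does not specify an implementation), the per-block timestamp can be recorded against the controller's power-on counter and Equation 1 applied on demand:

    class Block:
        def __init__(self):
            self.timestamp = None  # power-on time when host data was last written

    def on_host_write(block, power_on_time):
        # A host write (re)sets the block's timestamp. Internal relocations,
        # such as garbage collection, deliberately do not call this, so the
        # data age is preserved across internal moves.
        block.timestamp = power_on_time

    def data_age(block, power_on_time):
        # Equation 1: DataAgeBlock = TimeNow - TimeStampBlock
        return power_on_time - block.timestamp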


Additionally, the data ages can be truncated, rounded, or normalized (e.g., to a value in a range of 0 to 1) in some implementations to facilitate easier processing or storage of the data ages. For example, the timestamp can be normalized relative to a maximum data age, according to Equation 2:










RelativeAgeBlock=DataAgeBlock/MaximumAge  (Equation 2)








where DataAgeBlock corresponds to an absolute data age of the data stored to a block, MaximumAge corresponds to a maximum data age normalizing value, and RelativeAgeBlock corresponds to the relative data age of the data of the block. The maximum data age, in one embodiment, can equal either (1) an oldest data age of the data stored in the non-volatile memory array 140 or (2) a value proportional to a storage size of the non-volatile memory array 140 (e.g., a value proportional to a write time for filling the non-volatile memory array 140 with data). The maximum data age further may be determined according to Equation 3:

MaximumAge=Min(N×DriveFillTime,DataAgemax)  (Equation 3)

where DriveFillTime corresponds to a time to fill the non-volatile memory array 140 with data, N corresponds to a multiplier controllable to scale DriveFillTime, and DataAgemax corresponds to a maximum absolute data age of data stored to the non-volatile memory array 140. In one embodiment, the value of N can be determined according to Equation 4:









N=MaxDeltaPE/2  (Equation 4)








where MaxDeltaPE corresponds to a size of a P/E window for the non-volatile memory array 140.
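
Combining Equations 2 through 4, a brief sketch (Python, hypothetical names; the drive-fill time and oldest absolute data age are assumed to be tracked elsewhere) of the normalization might look as follows:

    def maximum_age(drive_fill_time, oldest_data_age, max_delta_pe):
        n = max_delta_pe / 2  # Equation 4: N = MaxDeltaPE / 2
        # Equation 3: cap the normalizing value at N drive fills or the
        # oldest absolute data age, whichever is smaller.
        return min(n * drive_fill_time, oldest_data_age)

    def relative_age(data_age_block, max_age):
        # Equation 2: normalize the absolute data age to the range 0 to 1
        # (clamped here, an assumption of this sketch, for ages that exceed
        # the normalizing value).
        return min(data_age_block / max_age, 1.0)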


The garbage collection/wear leveling module 132 can issue a garbage collection command that involves multiple blocks of the non-volatile memory array 140 in a single operation. For instance, in one garbage collection operation, the garbage collection/wear leveling module 132 can issue a command for the controller 130 to write valid data stored in two or more blocks to a single free block. Since each of the two or more garbage collected blocks may have a different data age associated with the valid data stored in the block, the garbage collection/wear leveling module 132 may further determine an assigned data age for the valid data written to the single block. In one embodiment, a highest, median, average, or lowest data age associated with the garbage collected data can be assigned to the single block. In another embodiment, a weighted average of the data ages can be assigned using Equation 5:










DataAgeColdBlock=(Σi=1N ValidCounti×DataAgei)/N  (Equation 5)








where ValidCounti corresponds to an amount of valid data in the ith block, DataAgei corresponds to the data age of the data stored in the ith block, N corresponds to the number of garbage collected blocks, and DataAgeColdBlock corresponds to the assigned data age for the single block.
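
A short sketch (Python, hypothetical names) of Equation 5 assigning a single weighted data age to the destination block of a multi-block garbage collection:

    def merged_data_age(source_blocks):
        # source_blocks: (valid_count, data_age) pairs for the N garbage
        # collected blocks; Equation 5 weights each data age by the amount
        # of valid data in its block and divides by the number of blocks.
        n = len(source_blocks)
        return sum(valid * age for valid, age in source_blocks) / n

    # Example with fractional valid counts:
    # merged_data_age([(0.5, 100), (0.5, 300)]) returns 100.0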


The data age assigned to data can be used to classify the data age relative to data age ranges. That is, when the data age of data stored in a block falls within one of multiple data age ranges, the data stored in the block can be classified as within or part of that data age range. For example, data can be classified as within a relatively low data age range, a medium data age range, or a relatively high data age range in one implementation. The multiple data age ranges can be separated by thresholds usable to determine whether the data age of particular data is within a certain data age range. In one embodiment, the multiple data age ranges include a static data age range and a dynamic data age range, and the static and dynamic data age ranges are separated by a static-dynamic data threshold. For instance, data having a relative data age meeting a static-dynamic data threshold (e.g., equal to a relative data age of 0.8, 0.9, or 1.0) can be classified as static data while data having a relative data age not meeting the static-dynamic data threshold can be classified as dynamic data.
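
The classification itself reduces to a threshold comparison, sketched below in Python (the default threshold value is only an example taken from the text above):

    def is_static(relative_age, threshold=0.8):
        # Data whose relative age meets the static-dynamic data threshold
        # (e.g., 0.8, 0.9, or 1.0) is classified as static ("cold"); data
        # below the threshold is classified as dynamic ("hot").
        return relative_age >= threshold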


Static Wear Leveling



FIG. 2 is a graph 200 that illustrates counts of blocks of a non-volatile memory array relative to the P/E counts of the blocks according to one embodiment of the invention. The histogram line 202 on the graph 200 shows the P/E count distribution of non-volatile memory array blocks, beginning at a lowest P/E count defined as the P/E window start 204. The P/E count of the non-volatile memory array block having a highest P/E count is denoted by Open Block P/E 206. The P/E count difference between the Open Block P/E 206 and the P/E window start 204 can be defined as Open Block ΔP/E 208. A P/E window end 210 may be defined relative to the P/E window start 204, equaling the sum of the P/E count at the P/E window start 204 and the P/E window size (i.e., Max ΔP/E 212). In one embodiment, the P/E window size can be an ideal distance between the P/E counts of the least worn block(s) and the most worn block(s) in the non-volatile memory array. When one or more blocks have P/E counts outside of the P/E window and in the shaded area 214, data storage systems like the data storage system 120 may take one or more corrective actions, such as performing a garbage collection or static wear leveling operation using the one or more blocks with P/E counts outside of the P/E window.
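
A small sketch (Python, hypothetical names) of the window bookkeeping shown in FIG. 2:

    def pe_window_end(window_start, max_delta_pe):
        # P/E window end = P/E window start + window size (Max ΔP/E).
        return window_start + max_delta_pe

    def outside_pe_window(block_pe_count, window_start, max_delta_pe):
        # Blocks whose P/E counts fall beyond the window end (the shaded
        # area 214 in FIG. 2) become candidates for corrective action.
        return block_pe_count > pe_window_end(window_start, max_delta_pe)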



FIG. 3 is a graph 300 that illustrates conditions for performing static wear leveling according to one embodiment of the invention. The graph 300 charts a distribution of blocks, such as blocks 302, 304, 306, and 308, relative to the data age of data stored in the blocks (shown on a relative data age scale from 0 to 1) and the P/E count of the blocks. The blocks having a data age that meets the static/dynamic data threshold 310 (e.g., blocks 304 and 306) can be considered to store static data while the blocks having a data age that does not meet the static/dynamic data threshold 310 (e.g., block 302) can be considered to store dynamic data. As described in this disclosure, the data age of the data can be determined or estimated relative to when the data was written to a memory based on instructions from a host system.


In one embodiment, the garbage collection/static wear leveling module 132 initiates a static wear leveling operation when a P/E count for a destination block like the current destination block 308 (e.g., an open cold block) is high, such as when the P/E count of the current destination block 308 exceeds the P/E window end 312. The garbage collection/static wear leveling module 132 may then select one or more source blocks having a lower P/E count than the P/E count of the current destination block 308 and relocate the data from the selected source block(s) to the current destination block 308. For example, the blocks 302 and 304 may be selected as the source blocks, and thus the data stored in the blocks 302 and 304 can be moved to the current destination block 308.


The garbage collection/static wear leveling module 132 can additionally select the source block(s) for the static wear leveling operation based at least on whether the one or more source blocks have a low P/E count and/or contain static data. As illustrated by FIG. 3, the source block(s) may be selected from the static block pick area below the P/E threshold 314 and/or from the area above the static/dynamic data threshold 310. For instance, the garbage collection/static wear leveling module 132 may select one or more source blocks that contain static data having the oldest data age from the blocks which have a P/E count below the P/E threshold 314 (e.g., block 306 may be selected). In one implementation, the garbage collection/static wear leveling module 132 selects a block for static wear leveling according to the following algorithm (a code sketch follows the list):

    • Search the static block pick area for one or more blocks;
    • Find the block having a maximum data age;
    • IF the data age of the block having the maximum data age meets the static/dynamic data threshold
      • Pick the block having the maximum data age for wear leveling;
    • ELSE
      • Perform garbage collection using a different or default approach.
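
A minimal sketch of this selection algorithm (Python, with hypothetical names and a simplified per-block data layout; shown only to make the control flow concrete):

    from dataclasses import dataclass

    @dataclass
    class CandidateBlock:
        pe_count: int        # program/erase count of the block
        relative_age: float  # normalized data age per Equation 2

    def pick_wear_leveling_source(blocks, pe_threshold, static_dynamic_threshold):
        # Static block pick area: blocks with a P/E count below the P/E threshold.
        area = [b for b in blocks if b.pe_count < pe_threshold]
        if not area:
            return None
        # Find the block holding data with the maximum data age.
        oldest = max(area, key=lambda b: b.relative_age)
        # Pick it only if its data qualifies as static; returning None
        # signals the caller to fall back to a different or default
        # garbage collection approach.
        return oldest if oldest.relative_age >= static_dynamic_threshold else None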


Advantageously, in one embodiment, selecting one or more source blocks according to whether the one or more source blocks have a low P/E count and/or contain static data may prevent unnecessary or inefficient static wear leveling by deciding whether there is a sufficient benefit to performing static wear leveling using the source block(s). For example, if there is no source block with static data among the blocks having a low P/E count (e.g., no blocks storing static data meeting the static/dynamic data threshold 310), then the garbage collection/static wear leveling module 132 may determine that there is insufficient benefit to perform static wear leveling using those blocks. This may be because those blocks are likely to be programmed or erased in the near future due to new host data writes. The P/E counts of those blocks thus may likely increase in the near future and decrease the P/E window size, thereby accomplishing a goal of static wear leveling without performing a potentially duplicative and extraneous static wear leveling operation. Therefore, in some embodiments, the garbage collection/static wear leveling module 132 performs static wear leveling when static data is found in lower P/E count blocks, and not in some instances when dynamic data is found in lower P/E count blocks.


Because static data can be written to the non-volatile memory array 140 at any time, it may take a long time until particular data is determined to be static. This can be in spite of the possibility that the data age of certain blocks in the static block pick area may eventually grow to pass the static/dynamic data threshold. Accordingly, at certain times, the blocks in the static block pick area can appear to contain solely dynamic data, and static wear leveling may not take place because no block may qualify to be used to move static data. The P/E count of some blocks of the non-volatile memory array 140 thus may continue to increase well beyond the P/E window end.


To mitigate this condition, the P/E count of a current destination block 402 can be used as feedback to adjust the static/dynamic data threshold, as illustrated in the graph 400 of FIG. 4. For example, a comparison between the P/E count of the current destination block 402 and the P/E window end 404 can be used to lower the static/dynamic data threshold from an initial static/dynamic data threshold 406 to an adjusted static/dynamic data threshold 408. In one implementation, the data age level for the static/dynamic data threshold is determined using Equation 6:









Threshold=α−β×((OpenBlock ΔP/E−Max ΔP/E)/Max ΔP/E)  (Equation 6)








where OpenBlock ΔP/E corresponds to the P/E count difference between the P/E count of the current destination block 402 and the P/E window start (see, e.g., FIG. 2), Max ΔP/E corresponds to the P/E count difference between the P/E window end and the P/E window start (see, e.g., FIG. 2), α corresponds to an initial static/dynamic data threshold, and β corresponds to a factor usable to control the rate of adjustment of the static/dynamic data threshold relative to the P/E count difference between the P/E count of the current destination block 402 and the P/E window end. In one implementation, the values of α and β are 0.8 and 0.6, respectively, although different values may be used in other implementations. Although Equation 6 illustrates a linear decrease in the static/dynamic data threshold responsive to the P/E count of the current destination block 402, other approaches may be used, such as equations based on a logarithmic, exponential, or another function of the P/E count of the current destination block 402. Moreover, a comparison between the P/E count of the current destination block 402 and another value (e.g., the P/E window start) can be used to adjust the static/dynamic data threshold in some embodiments.
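
As an illustrative sketch only (Python, hypothetical names), Equation 6 with the example constants can be computed as follows:

    def adjusted_threshold(open_block_delta_pe, max_delta_pe, alpha=0.8, beta=0.6):
        # Equation 6: lower the static/dynamic data threshold as the current
        # destination block's P/E count climbs past the P/E window end.
        overshoot = (open_block_delta_pe - max_delta_pe) / max_delta_pe
        return alpha - beta * overshoot

For example, when the current destination block sits exactly at the P/E window end (OpenBlock ΔP/E equals Max ΔP/E), the threshold remains at α = 0.8; a destination block that overshoots the window end by half the window size lowers the threshold to 0.8 − 0.6 × 0.5 = 0.5.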


As illustrated in FIG. 4, in one embodiment, once the static/dynamic data threshold lowers to the adjusted static/dynamic data threshold 408, one or more blocks (e.g., a block 410) in the static block pick area may now be above the adjusted static/dynamic data threshold 408. Such blocks can now qualify as selectable source block(s) for a static wear leveling operation, and the degree to which the P/E count of blocks may exceed the P/E window end can accordingly be controlled or limited.



FIG. 5 is a flow diagram illustrating a process 500 for completing a garbage collection operation according to one embodiment of the invention. In some embodiments, the controller 130 and/or the garbage collection/static wear leveling module 132 are configured to perform the process 500.


At block 505, the process 500 identifies a destination memory unit. For instance, the garbage collection/static wear leveling module 132 can identify a destination block of the non-volatile memory array 140 for a garbage collection operation, such as static wear leveling. The destination block may have a relatively higher P/E count. At block 510, the process 500 identifies one or more potential source memory units from a static memory unit pick area. The static memory unit pick area can, for example, include blocks of the non-volatile memory array 140 that have a relatively lower P/E count.


At block 515, the process 500 determines whether the data age of one or more of the potential source memory units meets a threshold, such as a static/dynamic data threshold. If the data age of one or more of the potential source memory units meets the threshold, at block 520, the process 500 picks one or more of the potential source memory units meeting the threshold for the garbage collection operation. On the other hand, at block 525, if the data age of one or more of the potential source memory units does not meet the threshold, the process 500 picks one or more source memory units using a default approach. For example, the process 500 can select one or more blocks of the non-volatile memory array 140 for the garbage collection operation based on an amount of invalid data stored in the one or more blocks. At block 530, the process 500 performs the garbage collection operation using the identified destination memory unit and the one or more source memory units picked at block 520 or 525.
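
Tying the flow of FIG. 5 together, a compact sketch (Python, hypothetical names and attributes; the data relocation and erase at block 530 are elided) of blocks 510 through 525:

    from dataclasses import dataclass

    @dataclass
    class Blk:
        pe_count: int
        relative_age: float
        invalid_count: int  # amount of invalid data, used by the default pick

    def pick_source(blocks, pe_threshold, age_threshold):
        # Blocks 510-520: prefer a low-P/E block holding static data.
        area = [b for b in blocks if b.pe_count < pe_threshold]
        oldest = max(area, key=lambda b: b.relative_age, default=None)
        if oldest is not None and oldest.relative_age >= age_threshold:
            return oldest
        # Block 525: default approach -- pick the block storing the most
        # invalid data, which maximizes the space freed by the operation.
        return max(blocks, key=lambda b: b.invalid_count)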


CONCLUSION

While certain embodiments have been described, these embodiments have been presented by way of example only, and are not intended to limit the scope of the disclosure. Indeed, the novel methods and systems described herein may be embodied in a variety of other forms. Furthermore, various omissions, substitutions and changes in the form of the methods and systems described herein may be made without departing from the spirit of the disclosure. The accompanying claims and their equivalents are intended to cover such forms or modifications as would fall within the scope and spirit of the disclosure. For example, those skilled in the art will appreciate that in various embodiments, the actual steps taken in the process shown in FIG. 5 may differ from those shown in the figure. Depending on the embodiment, certain of the steps described in the example above may be removed, others may be added, and the sequence of steps may be altered and/or performed in parallel. Also, the features and attributes of the specific embodiments disclosed above may be combined in different ways to form additional embodiments, all of which fall within the scope of the present disclosure. Although the present disclosure provides certain preferred embodiments and applications, other embodiments that are apparent to those of ordinary skill in the art, including embodiments which do not provide all of the features and advantages set forth herein, are also within the scope of this disclosure. Accordingly, the scope of the present disclosure is intended to be defined only by reference to the appended claims.

Claims
  • 1. A data storage system comprising: a non-volatile solid-state memory array comprising memory units; and a controller configured to: identify a destination memory unit of the memory units, wherein a number of program-erase (P/E) operations performed on the destination memory unit exceeds a first P/E threshold, determine a data age threshold that is lower than an initial data age threshold, wherein the determining is based on reducing the initial data age threshold by a scaling factor, which is based on the number of P/E operations performed on the destination memory unit and on the first P/E threshold, select a source memory unit from a set of memory units storing data of the memory units, the source memory unit selected based at least on a data age of the data stored in the source memory unit exceeding the data age threshold separating a first data age range from a second data age range and on the number of P/E operations performed on the source memory unit not exceeding a second P/E threshold, and perform an internal memory operation using the destination memory unit and the source memory unit.
  • 2. The data storage system of claim 1, wherein the internal memory operation comprises a wear leveling operation.
  • 3. The data storage system of claim 1, wherein the second P/E threshold corresponds to the number of P/E operations performed on a memory unit that has been subjected to a lowest number of P/E operations, wherein the first P/E threshold corresponds to the sum of the second P/E threshold and a P/E window size, and wherein the scaling factor includes a numerator comprising the number of P/E operations performed on the destination memory unit subtracted by the first P/E threshold, the numerator being subtracted by the P/E window size, and a denominator comprising the P/E window size.
  • 4. The data storage system of claim 1, wherein the controller is further configured to perform the internal memory operation using the destination memory unit and a memory unit selected based at least on the amount of invalid data stored in the memory unit when the data age of the data stored in each of the set of memory units is below the data age threshold, and wherein the internal memory operation comprises a garbage collection operation.
  • 5. The data storage system of claim 1, wherein the controller is further configured to: maintain a counter indicative of a time duration that the controller is turned on; and determine the data age of the data stored in the memory units based at least on a comparison between a first counter value when the data age is determined and a second counter value when the data is written to the memory units.
  • 6. The data storage system of claim 1, wherein the controller further is configured to: assign timestamps to the data written to the memory units indicative of when the data is written to the memory units; and determine the data age of the data stored in the memory units based at least on the timestamps.
  • 7. The data storage system of claim 6, wherein the controller is further configured to: perform a garbage collection operation including writing the data stored in two or more memory units to a single memory unit; and assign a single timestamp to the data written to the single memory unit by combining the timestamps assigned to the data stored in the two or more memory units.
  • 8. The data storage system of claim 7, wherein the controller is further configured to combine the timestamps using a weighted average.
  • 9. The data storage system of claim 6, wherein the controller is further configured to normalize the timestamps relative to a maximum data age of the data stored in the memory units or a value proportional to a write time for filling the memory units with data.
  • 10. The data storage system of claim 1, wherein the controller is further configured to select as the source memory unit a memory unit that stores the data having an oldest data age of the data stored in the set of memory units.
  • 11. The data storage system of claim 1, wherein the data comprises data received from a host system.
  • 12. In a data storage system comprising a controller and a non-volatile solid-state memory array including memory units, a method comprising: identifying a destination memory unit of the memory units, wherein a number of program-erase (P/E) operations performed on the destination memory unit exceeds a first P/E threshold; determining a data age threshold that is lower than an initial data age threshold, wherein the determining is based on reducing the initial data age threshold by a scaling factor, which is based on the number of P/E operations performed on the destination memory unit and on the first P/E threshold; selecting a source memory unit from a set of memory units storing data of the memory units, the source memory unit selected based at least on a data age of the data stored in the source memory unit exceeding the data age threshold separating a first data age range from a second data age range and on the number of P/E operations performed on the source memory unit not exceeding a second P/E threshold which is below the first P/E threshold; and performing an internal memory operation using the destination memory unit and the source memory unit.
  • 13. The method of claim 12, wherein the internal memory operation comprises a wear leveling operation.
  • 14. The method of claim 12, wherein the second P/E threshold corresponds to the number of P/E operations performed on a memory unit that has been subjected to a lowest number of P/E operations, wherein the first P/E threshold corresponds to the sum of the second P/E threshold and a P/E window size, and wherein the scaling factor includes a numerator comprising the number of P/E operations performed on the destination memory unit subtracted by the first P/E threshold, the numerator being subtracted by the P/E window size, and a denominator comprising the P/E window size.
  • 15. The method of claim 12, further comprising performing the internal memory operation using the destination memory unit and a memory unit selected based at least on the amount of invalid data stored in the memory unit when the data age of the data stored in each of the set of memory units is below the data age threshold, and wherein the internal memory operation comprises a garbage collection operation.
  • 16. The method of claim 12, further comprising: maintaining a counter indicative of a time duration that the controller is turned on; and determining the data age of the data stored in the memory units based at least on a comparison between a first counter value when the data age is determined and a second counter value when the data is written to the memory units.
  • 17. The method of claim 12, further comprising: assigning timestamps to the data written to the memory units indicative of when the data is written to the memory units; and determining the data age of the data stored in the memory units based at least on the timestamps.
  • 18. The method of claim 17, further comprising: performing a garbage collection operation including writing the data stored in two or more memory units to a single memory unit; and assigning a single timestamp to the data written to the single memory unit by combining the timestamps assigned to the data stored in the two or more memory units.
  • 19. The method of claim 18, wherein said combining comprises combining the timestamps using a weighted average.
  • 20. The method of claim 17, further comprising normalizing the timestamps relative to a maximum data age of the data stored in the memory units or a value proportional to a write time for filling the memory units with data.
  • 21. The method of claim 12, wherein said selecting comprises selecting as the source memory unit a memory unit that stores the data having an oldest data age of the data stored in the set of memory units.
  • 22. The method of claim 12, wherein the data comprises data received from a host system.
CROSS REFERENCE TO RELATED APPLICATIONS

This application claims benefit under 35 U.S.C. § 119(e) to U.S. Provisional Patent Application No. 61/824,137 entitled “BLOCK SELECTION FOR GARBAGE COLLECTION OPERATIONS IN A SOLID-STATE DATA STORAGE DEVICE” filed on May 16, 2013, and U.S. Provisional Patent Application No. 61/824,001 entitled “STATIC WEAR LEVELING IN A SOLID-STATE DATA STORAGE DEVICE” filed on May 16, 2013; the disclosures of which are hereby incorporated by reference in their entirety.

Provisional Applications (2)
Number Date Country
61824137 May 2013 US
61824001 May 2013 US