Embodiments of the disclosure relate generally to memory sub-systems, and more specifically, relate to a slow charge loss (SCL) monitor for power up performance boosting.
A memory sub-system can include one or more memory devices that store data. The memory devices can be, for example, non-volatile memory devices and volatile memory devices. In general, a host system can utilize a memory sub-system to store data at the memory devices and to retrieve data from the memory devices.
The disclosure will be understood more fully from the detailed description given below and from the accompanying drawings of various embodiments of the disclosure. The drawings, however, should not be taken to limit the disclosure to the specific embodiments, but are for explanation and understanding only.
Aspects of the present disclosure are directed to a slow charge loss (SCL) monitor for power up performance boosting. A memory sub-system can be a storage device, a memory module, or a combination of a storage device and memory module. Examples of storage devices and memory modules are described below in conjunction with
A memory sub-system can include high density non-volatile memory devices where retention of data is desired when no power is supplied to the memory device. One example of non-volatile memory devices is a NOT-AND (NAND) memory device. Other examples of non-volatile memory devices are described below in conjunction with
The memory sub-system can perform host-initiated memory access operations. For example, the host system can initiate a data operation (e.g., write, read, erase, etc.) on a memory sub-system. The host system can send access requests (e.g., write command or read command) to the memory sub-system, such as to store data on a memory device at the memory sub-system and to read data from the memory device on the memory sub-system. The data to be read or written, as specified by a host request, is hereinafter referred to as “host data.” A host request can include logical address information (e.g., logical block address (LBA), namespace) for the host data, which is the location the host system associates with the host data. The logical address information (e.g., LBA, namespace) can be part of metadata for the host data. Metadata can also include error handling data (e.g., ECC codeword, parity code), data version (e.g. used to distinguish age of data written), valid bitmap (which LBAs or logical transfer units contain valid data), etc.
A memory device includes multiple memory cells, each of which can store, depending on the memory cell type, one or more bits of information. A memory cell can be programmed (written to) by applying a certain voltage to the memory cell, which results in an electric charge being held by the memory cell, thus allowing modulation of the voltage distributions produced by the memory cell. Moreover, precisely controlling the amount of the electric charge stored by the memory cell allows the cell to establish multiple threshold voltage levels corresponding to different logical levels, thus effectively allowing a single memory cell to store multiple bits of information: a memory cell operated with 2^n different threshold voltage levels is capable of storing n bits of information. Thus, a read operation can be performed by comparing the measured voltage exhibited by the memory cell to one or more reference voltage levels in order to distinguish between two logical levels for single-level cells and between multiple logical levels for multi-level cells.
Due to the phenomenon known as slow charge loss (SCL), the threshold voltage (VT) of a memory cell changes in time as the electric charge of the cell is degrading (e.g., voltage shifts). The threshold voltage shift from SCL can be referred to as “temporal voltage shift,” since the degrading electric charge causes the voltage distributions to shift along the voltage axis towards lower voltage levels. The threshold voltage changes rapidly at first (e.g., immediately after the memory cell was programmed), and then slows down in an approximately logarithmic linear fashion with respect to the time elapsed since the cell programming event. Accordingly, failure to mitigate the temporal voltage shift caused by the slow charge loss can result in an increased bit error rate in read operations.
A memory sub-system can mitigate the temporal voltage shift (TVS) by employing block family-based error avoidance (BFEA) strategies. BFEA tracks the TVS to keep each block (or pages within a block) calibrated well enough to have an acceptable bit error rate (BER). "Calibration" herein shall refer to adjusting a read level value (for example, by adjusting a read level offset or read level base value) to perform read operations within the acceptable BER. The TVS is tracked for selected programmed blocks grouped into block families, and appropriate voltage offsets, which are based on block affiliation with a certain block family, are applied to the base read levels in order to perform read operations. "Block family" herein shall refer to a set of blocks that have been programmed within a specified time window and a specified temperature window. Since the time elapsed after programming and temperature are the main factors affecting the TVS, all blocks and/or partitions within a single block family are presumed to exhibit similar distributions of threshold voltages in memory cells, and thus would require the same voltage offsets to be applied to the base read levels for memory operations. "Base read level" herein shall refer to the initial threshold voltage level exhibited by the memory cell immediately after programming. In some implementations, base read levels can be stored in the metadata of the memory device.
Block families can be created asynchronously with respect to block programming events. In an illustrative example, a new block family can be created whenever a specified period of time (e.g., a predetermined number of minutes) has elapsed since creation of the last block family, and/or the reference temperature of memory cells has changed by more than a specified threshold value. The memory sub-system controller can maintain an identifier of the active block family, which is associated with one or more blocks as they are being programmed.
The memory sub-system controller can periodically perform a calibration process (also referred to as a calibration scan) in order to evaluate a data state metric (e.g., a bit error rate, “BER”) and associate each block family with a predefined time after programming (TAP) bin. Each TAP bin can correspond to a specific voltage offset to be applied for read operations. The bins can be numbered from 0 to 7 (e.g., bin 0-bin 7), and each bin can be associated with a voltage offset to be applied to a base read level for read operations. The associations of block families with TAP bins (e.g., bins 0-7) can be stored in respective metadata tables (e.g., such as a BFEA table) maintained by the memory sub-system controller. In some embodiments, other numbers of bins are considered, such as for example, bins 0-3, 0-15, 0-31, etc. “BFEA table” herein shall refer to a table that reflects the aggregated TVS for all groups of memory cells (e.g., blocks) of the memory sub-system. The BFEA table for a memory sub-system can be stored by the memory sub-system controller as metadata, a reference table, or directly on a memory device. The BFEA table can store TAP bin pointers assigned to block families of the memory device. In some embodiments, the memory sub-system controller can maintain a BFEA table for each memory device.
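As a non-limiting illustration, the following C sketch shows one way a BFEA table entry could associate a block family with a TAP bin pointer and a per-bin read level offset. The structure layout, the bin count, and the offset values are hypothetical assumptions for the example and are not drawn from any particular embodiment.

```c
/* Illustrative sketch only: structure names, field widths, and the example
 * offsets are assumptions, not an actual controller firmware layout. */
#include <stdint.h>
#include <stdio.h>

#define NUM_TAP_BINS       8   /* bins 0-7, per the example above */
#define NUM_BLOCK_FAMILIES 4

/* Per-bin corrective offset (in DAC steps) applied to the base read level. */
static const int8_t tap_bin_offset[NUM_TAP_BINS] = {
    0, -2, -4, -6, -8, -10, -12, -14   /* hypothetical values */
};

/* One BFEA table entry: a block family and its TAP bin pointer. */
struct bfea_entry {
    uint16_t block_family_id;
    uint8_t  tap_bin;              /* 0..NUM_TAP_BINS-1 */
};

/* Effective read level = base read level + offset of the family's bin. */
static int effective_read_level(int base_level, const struct bfea_entry *e)
{
    return base_level + tap_bin_offset[e->tap_bin];
}

int main(void)
{
    struct bfea_entry bfea_table[NUM_BLOCK_FAMILIES] = {
        { .block_family_id = 0, .tap_bin = 0 },
        { .block_family_id = 1, .tap_bin = 2 },
        { .block_family_id = 2, .tap_bin = 5 },
        { .block_family_id = 3, .tap_bin = 7 },
    };
    int base_level = 100;  /* hypothetical base read level in DAC steps */

    for (int i = 0; i < NUM_BLOCK_FAMILIES; i++)
        printf("family %u -> bin %u -> read level %d\n",
               (unsigned)bfea_table[i].block_family_id,
               (unsigned)bfea_table[i].tap_bin,
               effective_read_level(base_level, &bfea_table[i]));
    return 0;
}
```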
Due to processing and/or operating conditions, VT can vary for different cells implemented on the same die. The VT of cells in a memory device can be characterized by a distribution P of the threshold voltages P(Q, VT)=dW/dVT, where dW represents the probability that any given cell has its threshold voltage within the interval [VT, VT+dVT] when charge Q is placed on the cell. A memory device can exhibit threshold voltage distributions P(Q,VT) that are narrow compared with the working range of control voltages tolerated by the cells of the device. Multiple non-overlapping P(Qk, VT) (“valleys”) can be fit into the working range, thus allowing for storage and reliable detection of multiple values of the charge Qk, k=1,2,3 . . . etc. The distributions (valleys) can be interspersed with voltage intervals (“valley margins”). A valley margin hereinafter can be used to refer to a voltage, or set of voltages that do not correspond to a VT of a cell or cell level (e.g., the “voltage gaps” between levels such as between L0 and L1, between L1 and L2, etc.). Valley margins can be used to separate various charge states Qk (e.g., levels). The logical state of the cell can be determined by detecting during a memory operation which valley margin is directly below a cell or level VT, and which valley margin is directly above a cell or level VT (e.g., by detecting which valley margins satisfy Valley-Margin1<VT<Valley-Margin2). For example, a read operation can be performed by comparing the measured VT exhibited by the cell to one or more reference voltage levels corresponding to known valley margins (e.g., centers of the valley margins) of the memory device in order to distinguish between multiple logical programming levels and determine the programming state of the cell.
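For illustration only, the following C sketch classifies a measured cell voltage into a logical level by finding the pair of valley-margin reference voltages that bracket it. The number of levels and the reference voltage values are invented for the example and do not represent a real device characterization.

```c
/* Minimal sketch, assuming fixed reference levels at the valley-margin
 * centers; the boundary values below are hypothetical placeholders. */
#include <stdio.h>

#define NUM_LEVELS 8                      /* TLC: levels L0..L7 */

/* Reference voltages at the centers of the 7 valley margins separating
 * the 8 threshold-voltage distributions (arbitrary units). */
static const double valley_center_v[NUM_LEVELS - 1] = {
    0.5, 1.5, 2.5, 3.5, 4.5, 5.5, 6.5
};

/* Return the logical level k such that the cell's VT lies between the
 * valley margin below level k and the valley margin above level k. */
static int classify_level(double vt)
{
    int level = 0;
    while (level < NUM_LEVELS - 1 && vt > valley_center_v[level])
        level++;
    return level;
}

int main(void)
{
    double samples[] = { 0.2, 1.7, 4.4, 6.9 };
    for (int i = 0; i < 4; i++)
        printf("VT=%.1f -> level L%d\n", samples[i], classify_level(samples[i]));
    return 0;
}
```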
When a memory device is powered-on, the memory sub-system can compensate for the negative effects of TVS. Between memory operation requests from the host (i.e., during the downtime), the memory sub-system controller can perform BFEA scans and calibrations to detect and remedy the effects from TVS. However, when the memory device is powered-off, the memory sub-system cannot perform background BFEA scans and calibrations, and thus cannot compensate for the negative effects of TVS in real time. Additionally, the TVS experienced by a powered-off device can be different than the TVS experienced by a powered-on device under otherwise similar conditions (e.g., similar temperature). Some memory sub-systems fail to adequately address the TVS experienced by a powered-off device, which can result in high bit error rates (BERs). Memory sub-systems employ various strategies to address TVS.
For example, some memory sub-systems can attempt to compensate for TVS by performing a granular read-level scan on the memory device upon device power on. In a granular read-level scan, the memory sub-system can apply a series of varied read voltages to a set of cells (e.g., a block) to identify read levels that correspond to the voltage thresholds (VT) of the set of cells.
In another example, some memory sub-systems can attempt to compensate for TVS by performing a quick read-level scan on the memory device upon memory device power on. In a quick read-level scan, the memory sub-system can apply a set of pre-programmed read voltages to the set of cells (e.g., the block) to identify read levels that correspond to the VT of the set of cells.
During either the granular scan or the quick scan, the memory device can be inaccessible to the host. Both the granular scan and the quick scan are performed on each grouping of cells (e.g., such as a set of cells or block). Granular scans can take greater than tens of milliseconds, and up to tens of seconds or multiple minutes, to complete, which can result in increased latency experienced by the user, especially at startup. Quick scans can be completed much more quickly, but inaccurate VT determinations can impact memory performance and reliability during operation. Thus, quick scans can result in reduced latency experienced by the user at startup, but also reduced memory sub-system performance.
Aspects of the present disclosure address the above and other deficiencies by having a memory sub-system that employs an SCL monitor in combination with BFEA strategies to determine a voltage shift that occurred to a subset of cells in the memory sub-system during a power off state, and update a BFEA table based on the determined voltage shift. The voltage shift can be a result of SCL-induced TVS during the power-off state.
Upon detecting a power-off event, the memory sub-system controller can program a known location (e.g., a subset of cells, a page, etc.) of a memory device. The power-off event can be responsive to a request from a host to power-off or enter a standby mode (e.g., a “synchronous” power-off) and/or a power interruption incident (e.g., an “asynchronous” power loss). The memory sub-system controller can program the known location with, for example, a defined set of data, or a random pattern of data. The memory sub-system controller can store a VT value corresponding to the programmed subset of cells before the power-off event occurs.
Upon detecting a power-up event, the memory sub-system can determine a voltage shift by comparing the stored VT value to a new VT value. The memory sub-system can determine the new VT value by performing a series of reads on the subset of memory cells to identify a voltage distribution valley center (e.g., the center of a gap between two voltage distributions corresponding to respective logical levels of a set of memory cells). The valley center can correspond to a read voltage level resulting in the lowest detected BER.
The highest voltage distribution valley can experience a higher voltage shift than correspondingly lower voltage distribution valleys in the subset of cells. Accordingly, in some embodiments, the memory sub-system identifies the valley center of the valley between the highest programmable level of a memory cell and the second highest programmable level of the cell (e.g., the highest voltage distribution valley, such as the valley between level 6 and level 7 of a TLC cell).
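A minimal C sketch of this power-up measurement is shown below, assuming the controller can issue reads at adjustable read levels and obtain a bit-error count for each read; read_page_bit_errors() is a made-up stand-in for that device/ECC path, and all numeric values are hypothetical.

```c
/* Sketch under assumptions: read_page_bit_errors() stands in for a device
 * read followed by an ECC/BER check and is not a real NAND or vendor API. */
#include <stdio.h>
#include <limits.h>

/* Hypothetical stub: returns the bit-error count observed when the dummy
 * page is read at read level 'level' (DAC steps). */
static unsigned read_page_bit_errors(int level)
{
    int center = 92;                       /* pretend the valley moved here */
    int d = level - center;
    return (unsigned)(d * d);              /* fewest errors at the center */
}

/* Sweep a window of read levels and return the level with the lowest BER;
 * this level is taken as the new valley center. In practice the sweep would
 * target the highest valley (e.g., between L6 and L7), which shifts most. */
static int find_valley_center(int start, int end, int step)
{
    int best_level = start;
    unsigned best_errors = UINT_MAX;
    for (int level = start; level <= end; level += step) {
        unsigned errors = read_page_bit_errors(level);
        if (errors < best_errors) {
            best_errors = errors;
            best_level = level;
        }
    }
    return best_level;
}

int main(void)
{
    int stored_vt = 100;                   /* VT recorded before power-off */
    int new_vt = find_valley_center(stored_vt - 20, stored_vt + 4, 2);
    printf("voltage shift = %d DAC steps\n", stored_vt - new_vt);
    return 0;
}
```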
Based on the determined voltage shift, the memory sub-system can determine (e.g., using a data structure) an updated corrective read offset for the subset of cells. The data structure can map the voltage shift to a corresponding corrective read offset for a specified subset of cells (e.g., a block family). The memory sub-system controller can then propagate, using the data structure, a set of updated corrective read offsets to other block families in the memory sub-system. In some embodiments, the memory sub-system controller can determine and propagate updated corrective read offsets for all block families in a memory sub-system, including for example, block families residing on separate memory devices (i.e., different memory dies).
The memory sub-system can update a BFEA table of pointers (e.g., TAP bin pointers) to reflect the updated corrective read offset for a block family due to the voltage shift from the duration of the power-off state. A TAP bin pointer for a block family maps the block family to a certain TAP bin (e.g., voltage offset bin). Thus, rather than determining an updated corrective read offset for each block family (e.g., various specified subsets of cells), the memory sub-system can use the determined voltage shift to determine the change to the voltage offset bin, and update the BFEA table to reflect the determined voltage offset bin changes.
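Purely as an illustrative sketch, the following C code applies a per-bin shift to the TAP bin pointers stored in a BFEA-style table, clamping at the highest bin. The table contents, the shift values, and the clamping rule are assumptions made for the example rather than a prescribed implementation.

```c
/* Illustrative only: the clamping rule and the per-bin shifts are assumptions
 * standing in for values a binset table would provide. */
#include <stdint.h>
#include <stdio.h>

#define NUM_TAP_BINS 8
#define NUM_FAMILIES 4

struct bfea_entry {
    uint16_t family_id;
    uint8_t  tap_bin;
};

/* Add a bin shift to a family's current TAP bin pointer, clamping at the
 * highest bin so an old family cannot point past bin 7. */
static void apply_bin_shift(struct bfea_entry *e, uint8_t shift)
{
    unsigned new_bin = e->tap_bin + shift;
    e->tap_bin = (new_bin >= NUM_TAP_BINS) ? NUM_TAP_BINS - 1 : (uint8_t)new_bin;
}

int main(void)
{
    struct bfea_entry table[NUM_FAMILIES] = {
        { 0, 0 }, { 1, 2 }, { 2, 4 }, { 3, 7 },
    };
    /* Hypothetical per-bin shifts derived from one measured voltage shift:
     * lower bins move further than higher bins. */
    const uint8_t bin_shift_for[NUM_TAP_BINS] = { 4, 3, 2, 1, 0, 0, 0, 0 };

    for (int i = 0; i < NUM_FAMILIES; i++) {
        apply_bin_shift(&table[i], bin_shift_for[table[i].tap_bin]);
        printf("family %u now points to bin %u\n",
               (unsigned)table[i].family_id, (unsigned)table[i].tap_bin);
    }
    return 0;
}
```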
Advantages of the present disclosure include faster memory access time upon power-up in comparison to granular scans, more accurate VT determinations in comparison to quick scans, a reduction in the amount of data that needs to be read at power-up time, and improved memory device reliability and consistency. For instance, by determining the voltage shift for a subset of memory cells (rather than for all the memory cells in the memory device) and propagating the corresponding corrective read offset throughout the memory device, aspects of the present disclosure result in a reduction in the amount of data to be read upon power up. Performing fewer read operations further results in a reduction in the amount of time it takes a memory device to power-up from a powered-off state. The memory sub-system controller can quickly update a group of blocks to a "first-guess" BFEA bin, which can improve read error handling (REH) efficiency.
A memory sub-system 110 can be a storage device, a memory module, or a combination of a storage device and memory module. Examples of a storage device include a solid-state drive (SSD), a flash drive, a universal serial bus (USB) flash drive, an embedded Multi-Media Controller (eMMC) drive, a Universal Flash Storage (UFS) drive, a secure digital (SD) card, and a hard disk drive (HDD). Examples of memory modules include a dual in-line memory module (DIMM), a small outline DIMM (SO-DIMM), and various types of non-volatile dual in-line memory modules (NVDIMMs).
The computing system 100 can be a computing device such as a desktop computer, laptop computer, network server, mobile device, a vehicle (e.g., airplane, drone, train, automobile, or other conveyance), Internet of Things (IoT) enabled device, embedded computer (e.g., one included in a vehicle, industrial equipment, or a networked commercial device), or such computing device that includes memory and a processing device.
The computing system 100 can include a host system 120 that is coupled to one or more memory sub-systems 110. In some embodiments, the host system 120 is coupled to multiple memory sub-systems 110 of different types.
The host system 120 can include a processor chipset and a software stack executed by the processor chipset. The processor chipset can include one or more cores, one or more caches, a memory controller (e.g., NVDIMM controller), and a storage protocol controller (e.g., PCIe controller, SATA controller). The host system 120 uses the memory sub-system 110, for example, to write data to the memory sub-system 110 and read data from the memory sub-system 110.
The host system 120 can be coupled to the memory sub-system 110 via a physical host interface. Examples of a physical host interface include, but are not limited to, a serial advanced technology attachment (SATA) interface, a peripheral component interconnect express (PCIe) interface, universal serial bus (USB) interface, Fibre Channel, Serial Attached SCSI (SAS), a double data rate (DDR) memory bus, Small Computer System Interface (SCSI), a dual in-line memory module (DIMM) interface (e.g., DIMM socket interface that supports Double Data Rate (DDR)), etc. The physical host interface can be used to transmit data between the host system 120 and the memory sub-system 110. The host system 120 can further utilize an NVM Express (NVMe) interface to access components (e.g., memory devices 130) when the memory sub-system 110 is coupled with the host system 120 by the physical host interface (e.g., PCIe bus). The physical host interface can provide an interface for passing control, address, data, and other signals between the memory sub-system 110 and the host system 120.
The memory devices 130, 140 can include any combination of the different types of non-volatile memory devices and/or volatile memory devices. The volatile memory devices (e.g., memory device 140) can be, but are not limited to, random access memory (RAM), such as dynamic random access memory (DRAM) and synchronous dynamic random access memory (SDRAM).
Some examples of non-volatile memory devices (e.g., memory device 130) include a not-and (NAND) type flash memory and write-in-place memory, such as a three-dimensional cross-point (“3D cross-point”) memory device, which is a cross-point array of non-volatile memory cells. A cross-point array of non-volatile memory cells can perform bit storage based on a change of bulk resistance, in conjunction with a stackable cross-gridded data access array. Additionally, in contrast to many flash-based memories, cross-point non-volatile memory can perform a write in-place operation, where a non-volatile memory cell can be programmed without the non-volatile memory cell being previously erased. NAND type flash memory includes, for example, two-dimensional NAND (2D NAND) and three-dimensional NAND (3D NAND).
Each of the memory devices 130 can include one or more arrays of memory cells. One type of memory cell, for example, single level cells (SLC) can store one bit per cell. Other types of memory cells, such as multi-level cells (MLCs), triple level cells (TLCs), quad-level cells (QLCs), and penta-level cells (PLCs) can store multiple bits per cell. In some embodiments, each of the memory devices 130 can include one or more arrays of memory cells such as SLCs, MLCs, TLCs, QLCs, PLCs or any combination of such. In some embodiments, a particular memory device can include an SLC portion, and an MLC portion, a TLC portion, a QLC portion, or a PLC portion of memory cells. The memory cells of the memory devices 130 can be grouped as pages that can refer to a logical unit of the memory device used to store data. With some types of memory (e.g., NAND), pages can be grouped to form blocks.
Although non-volatile memory components such as a 3D cross-point array of non-volatile memory cells and NAND type flash memory (e.g., 2D NAND, 3D NAND) are described, the memory device 130 can be based on any other type of non-volatile memory, such as read-only memory (ROM), phase change memory (PCM), self-selecting memory, other chalcogenide based memories, ferroelectric transistor random-access memory (FeTRAM), ferroelectric random access memory (FeRAM), magneto random access memory (MRAM), Spin Transfer Torque (STT)-MRAM, conductive bridging RAM (CBRAM), resistive random access memory (RRAM), oxide based RRAM (OxRAM), not-or (NOR) flash memory, or electrically erasable programmable read-only memory (EEPROM).
A memory sub-system controller 115 (or controller 115 for simplicity) can communicate with the memory devices 130 to perform operations such as reading data, writing data, or erasing data at the memory devices 130 and other such operations. The memory sub-system controller 115 can include hardware such as one or more integrated circuits and/or discrete components, a buffer memory, or a combination thereof. The hardware can include digital circuitry with dedicated (i.e., hard-coded) logic to perform the operations described herein. The memory sub-system controller 115 can be a microcontroller, special purpose logic circuitry (e.g., a field programmable gate array (FPGA), an application specific integrated circuit (ASIC), etc.), or other suitable processor.
The memory sub-system controller 115 can include a processing device, which includes one or more processors (e.g., processor 117), configured to execute instructions stored in a local memory 119. In the illustrated example, the local memory 119 of the memory sub-system controller 115 includes an embedded memory configured to store instructions for performing various processes, operations, logic flows, and routines that control operation of the memory sub-system 110, including handling communications between the memory sub-system 110 and the host system 120.
In some embodiments, the local memory 119 can include memory registers storing memory pointers, fetched data, etc. The local memory 119 can also include read-only memory (ROM) for storing micro-code. While the example memory sub-system 110 in
In general, the memory sub-system controller 115 can receive commands or operations from the host system 120 and can convert the commands or operations into instructions or appropriate commands to achieve the desired access to the memory devices 130. The memory sub-system controller 115 can be responsible for other operations such as wear leveling operations, garbage collection operations, error detection and error-correcting code (ECC) operations, encryption operations, caching operations, and address translations between a logical address (namespace) and a physical address (e.g., physical block address) that are associated with the memory devices 130. The memory sub-system controller 115 can further include host interface circuitry to communicate with the host system 120 via the physical host interface. The host interface circuitry can convert the commands received from the host system into command instructions to access the memory devices 130 as well as convert responses associated with the memory devices 130 into information for the host system 120.
The memory sub-system 110 can also include additional circuitry or components that are not illustrated. In some embodiments, the memory sub-system 110 can include a cache or buffer (e.g., DRAM) and address circuitry (e.g., a row decoder and a column decoder) that can receive an address from the memory sub-system controller 115 and decode the address to access the memory devices 130.
In some embodiments, the memory devices 130 include local media controllers 135 that operate in conjunction with memory sub-system controller 115 to execute operations on one or more memory cells of the memory devices 130. An external controller (e.g., memory sub-system controller 115) can externally manage the memory device 130 (e.g., perform media management operations on the memory device 130). In some embodiments, memory sub-system 110 is a managed memory device, which is a raw memory device 130 having control logic (e.g., local media controller 135) on the die and a controller (e.g., memory sub-system controller 115) for media management within the same memory device package. An example of a managed memory device is a managed NAND (MNAND) device.
The memory sub-system 110 includes SCL Monitor Component 113 that can enable and apply an SCL monitor for power-off/standby events. In some embodiments, the memory sub-system controller 115 includes at least a portion of the SCL Monitor Component 113. In some embodiments, the SCL Monitor Component 113 is part of the host system 120, an application, or an operating system. In other embodiments, local media controller 135 includes at least a portion of SCL Monitor Component 113 and is configured to perform the functionality described herein.
The SCL Monitor Component 113 can determine a voltage shift experienced by memory cells of a memory device (e.g., memory device 130 or memory device 140, etc.) during a power-off state of the memory sub-system 110. In some embodiments, the voltage shift can be due to SCL-induced TVS. To accomplish this, SCL Monitor Component 113 can program a subset of cells before a power-off/standby event. The location of the subset of cells can be stored in a system metadata area. Upon detecting a power-up event, SCL Monitor Component 113 can cause a series of read voltages to be applied to the subset of cells. SCL Monitor Component 113 can identify which read voltage of the series of read voltages has the lowest BER value and can assign the read voltage with the lowest BER as an updated VT value for the subset of cells. SCL Monitor Component 113 can determine the voltage shift of the programmed subset of cells by comparing an initial VT value with the updated VT value for the programmed subset of cells. Further details with regard to determining the voltage shift are described below with reference to
The SCL Monitor Component 113 can use the determined voltage shift of the programmed subset of cells to identify a voltage offset bin shift corresponding to a voltage offset bin associated with a specified subset of cells. SCL Monitor Component 113 can identify the voltage offset bin shift with a data structure (e.g., a binset table). The binset table can include multiple records, each record mapping a value of the determined voltage shift to a corresponding value of the voltage offset bin shift associated with the voltage offset bin corresponding to the specified subset of cells. For a given voltage shift, the voltage offset bin shift for lower bins (e.g., bin 0, bin 1, etc.) can be larger than the voltage offset bin shift for a higher bin (e.g., bin 4, bin 5, etc.). For example, a voltage shift can cause a block family in bin 0 to move to bin 4, and a block family in bin 4 to remain in bin 4.
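One possible shape for such a binset table is sketched below in C; the voltage-shift thresholds and per-bin shift values are placeholders invented for illustration, not calibrated data from any memory device.

```c
/* Sketch of a "binset table" lookup; thresholds and shift values are
 * hypothetical placeholders, not calibrated device data. */
#include <stdint.h>
#include <stdio.h>

#define NUM_TAP_BINS 8

/* One record: if the measured voltage shift is at least 'min_shift_dac',
 * each current bin 0..7 moves by the per-bin amount in 'bin_shift'. */
struct binset_record {
    int     min_shift_dac;
    uint8_t bin_shift[NUM_TAP_BINS];
};

/* Records ordered from largest to smallest voltage shift. For a given shift,
 * lower (younger) bins move further than higher (older) bins. */
static const struct binset_record binset_table[] = {
    { 12, { 4, 3, 2, 1, 0, 0, 0, 0 } },   /* large shift, long power-off  */
    {  6, { 2, 2, 1, 1, 0, 0, 0, 0 } },   /* small shift, short power-off */
    {  0, { 0, 0, 0, 0, 0, 0, 0, 0 } },   /* negligible shift             */
};

static uint8_t lookup_bin_shift(int voltage_shift_dac, uint8_t current_bin)
{
    for (size_t i = 0; i < sizeof binset_table / sizeof binset_table[0]; i++)
        if (voltage_shift_dac >= binset_table[i].min_shift_dac)
            return binset_table[i].bin_shift[current_bin];
    return 0;
}

int main(void)
{
    printf("shift=14, bin 0 -> +%u bins\n", (unsigned)lookup_bin_shift(14, 0));
    printf("shift=14, bin 4 -> +%u bins\n", (unsigned)lookup_bin_shift(14, 4));
    printf("shift=7,  bin 0 -> +%u bins\n", (unsigned)lookup_bin_shift(7, 0));
    return 0;
}
```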
The SCL Monitor Component 113 can store the relationship between the specified subset of cells (e.g., a block family) and associated voltage offset bin in a data structure (e.g., a BFEA table). The SCL Monitor Component 113 can update the data structure by adding the identified voltage offset bin shift for the specified subset of cells to a stored voltage offset bin associated with the specified subset of cells (e.g., the voltage offset bin associated with the specified subset of cells prior to the power-off state). The relationship between the specified subset of cells and associated voltage offset bin can be a bin pointer (e.g., a TAP bin pointer). In some embodiments, SCL Monitor Component 113 can update the data structure as a part of a calibration operation (e.g., a BFEA calibration operation) to update read level values for various specified subsets of cells. In some embodiments, if SCL Monitor Component 113 determines that the specified subset of cells and another specified subset of cells have sufficiently similar characteristics (e.g., similar VT distributions), SCL Monitor Component 113 can apply the identified voltage offset bin shift to the other specified subset of cells as well.
In some embodiments, SCL Monitor Component 113 can use the state of a subset of cells that have been programmed just before a power-off/standby event to determine a duration of the power-off state for the memory sub-system. Upon power-up, SCL Monitor Component 113 can determine a voltage shift in the way described above, by determining the lowest BER value from a set of reads, and comparing the lowest BER value voltage to the previous VT (e.g., a stored VT). SCL Monitor Component 113 can use the voltage shift as an input value to a calibrated table in order to identify the duration of the power-off state. In some embodiments, the calibrated table can be a metadata table stored in memory sub-system controller 115, or another data structure stored on a memory device such as memory device 130. Further details with regards to the operations of the SCL Monitor Component 113 are described below.
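To make the lookup concrete, here is a small C sketch of a calibrated voltage-shift-to-duration table with linear interpolation between calibration points; the specific points, and the choice of interpolation, are assumptions for illustration, since a real table would be characterized for the particular NAND design.

```c
/* Sketch only: the calibration points below are invented for illustration;
 * a real table would be characterized per NAND design. */
#include <stdio.h>

struct scl_cal_point {
    int    shift_dac;        /* measured SCL voltage shift          */
    double hours_off;        /* power-off duration that produces it */
};

/* Monotonic calibration curve: larger shift -> longer time powered off. */
static const struct scl_cal_point cal[] = {
    {  0,   0.0 }, {  4,   1.0 }, {  8,  24.0 }, { 12, 168.0 }, { 16, 720.0 },
};
#define CAL_LEN (sizeof cal / sizeof cal[0])

/* Linear interpolation between calibration points. */
static double estimate_power_off_hours(int shift_dac)
{
    if (shift_dac <= cal[0].shift_dac) return cal[0].hours_off;
    for (size_t i = 1; i < CAL_LEN; i++) {
        if (shift_dac <= cal[i].shift_dac) {
            double f = (double)(shift_dac - cal[i - 1].shift_dac) /
                       (cal[i].shift_dac - cal[i - 1].shift_dac);
            return cal[i - 1].hours_off +
                   f * (cal[i].hours_off - cal[i - 1].hours_off);
        }
    }
    return cal[CAL_LEN - 1].hours_off;
}

int main(void)
{
    printf("shift of 10 DAC steps -> ~%.0f hours off\n",
           estimate_power_off_hours(10));
    return 0;
}
```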
Blocks of the memory device are grouped into block families, such as block family “A” 241 and block family “B” 242. A block family can include one or more blocks that have been programmed within a specified time window and/or a specified temperature window. As noted herein above, since the time elapsed after programming and temperature are the main factors affecting the TVS (e.g., the voltage shift), blocks and/or partitions within a single block family (such as block family “A” 241 or block family “B” 242) are presumed to exhibit similar distributions of threshold voltages in memory cells, and thus would require the same voltage offsets for read operations.
Block families can be created asynchronously with respect to block programming events. Over time and/or with changes in temperature, the VT distribution of block family "A" 241 can move closer to the VT distribution of block family "B" 242 currently shown in TAP bin 5. For example, the memory sub-system controller (such as memory sub-system controller 115 as described with respect to
A newly created block family, such as block family “A” 241, can be associated with bin 0. Based on a periodically performed calibration process, the memory sub-system controller (such as memory sub-system controller 115 as described with respect to
In some embodiments, level 7 can be a highest programmable level of a memory cell. In some embodiments, other highest programmable levels are considered (e.g., such as a level 15 "L15"). Valley 7 can be between the highest programmable level of a cell stack and the next highest programmable level of a cell stack (e.g., the valley margin between level 6 and level 7). For example, "V7A" 427A can be between "L6A" 416A and "L7A" 417A in an initial position, and "V7B" 427B can be between "L6B" 416B and "L7B" 417B in a shifted position.
Valley center 428A can be the center of "V7A" 427A (e.g., the center of the valley margin between "L6A" 416A and "L7A" 417A), and valley center 428B can be the center of "V7B" 427B. In some embodiments, voltage shift 430 can be the difference between the peak of "L7A" 417A (e.g., the initial VT) and the peak of "L7B" 417B (e.g., the shifted VT). In some embodiments, valley shift 440 can be the difference between valley center 428A in the initial position, and valley center 428B in the shifted position.
Valley shift 440 can correspond to voltage shift 430. Voltage shift 430 can be determined by identifying the position of valley center 428B and comparing the value of valley center 428B with the value of valley center 428A. Multiple reads of "L6B" 416B and "L7B" 417B can determine valley center 428B by identifying which read of the multiple reads corresponds to the lowest number of detected bits between "L6B" 416B and "L7B" 417B. In some embodiments, valley center 428B can be the read level with the lowest BER between "L6B" 416B and "L7B" 417B.
BFEA table 510 illustrates an example data structure that can be maintained by a memory sub-system controller (such as memory sub-system controller 115 as described with respect to
Update 520 illustrates the portion of the process performed by the SCL Monitor Component (e.g., SCL Monitor Component 113 of
Updated BFEA table 530 illustrates the changes made to BFEA table 510 by the SCL Monitor Component (e.g., SCL Monitor Component 113 as described with reference to
As shown in the illustrative example of binset table 550, a larger voltage shift can cause a block family assigned to voltage offset bin 0 to be reassigned to voltage offset bin 4, while a smaller voltage shift can cause a block family assigned to voltage offset bin 0 to be reassigned to voltage offset bin 2. As described above, a longer duration of a power-off state can result in the memory sub-system experiencing a larger voltage shift upon power-up, and a shorter duration of the power-off state can result in the memory sub-system experiencing a smaller voltage shift upon power-up. For the purposes of this example embodiment, "larger" and "smaller" are used relative to each other to show that different voltage shifts can have different effects on each voltage offset bin. It should be noted that the values in binset table 550 are for illustrative purposes only, and are not intended as a restriction or limitation.
At operation 610, responsive to detecting a power-off event, processing logic programs, to a predefined logical state, a dummy subset of a plurality of cells. Processing logic can select the dummy subset of the plurality of cells based on various memory operations or conditions. In some embodiments, the dummy subset of the plurality of cells can be a dedicated subset of memory cells of a memory device. In some embodiments, the dummy subset can be a memory page. Processing logic can program a set of data to the dummy subset. In some embodiments, the set of data can be a random data set. In some embodiments, processing logic can identify a time when the dummy subset of the plurality of cells is programmed (e.g., a programming time of the dummy subset of the plurality of cells). In some embodiments, processing logic can identify a programming valley center from the predefined logical state. Processing logic can determine a programming temperature associated with programming the dummy subset of the plurality of cells. The programming temperature can be based on one or more memory device/memory sub-system physical characteristics. The programming temperature can correspond to a temperature of a memory device (e.g., memory die) and/or a temperature of a memory sub-system. In some embodiments, the programming temperature can correspond to a temperature of the dummy subset of the plurality of cells. The memory device and/or memory sub-system can include a temperature probe. In some embodiments, the dummy subset of the plurality of cells can include a dedicated temperature probe. Processing logic can determine a power-up temperature of the memory sub-system. The power-up temperature can be associated with the power-up event. In some embodiments, the power-up temperature can correspond to a time the power-up event occurs. Processing logic can determine a temperature change between the program temperature and the power-up temperature. The voltage offset bin shift can correspond to the temperature change. In some embodiments, a larger temperature change can correspond to a larger change in the voltage offset bin shift (e.g., a larger value of the voltage offset bin shift).
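The following C sketch outlines the power-off path described above, under the assumption of simple firmware services for programming a page, sampling VT and temperature, and persisting metadata; all of the function names (program_page, measure_vt, read_temperature_c, persist_metadata) are hypothetical stand-ins rather than a real controller API.

```c
/* Minimal sketch of the power-off path; the persistence calls and the random
 * pattern source are assumptions, not a real controller or NAND interface. */
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

struct scl_marker {
    uint32_t dummy_page_addr;    /* known location programmed at power-off */
    int32_t  programmed_vt;      /* VT recorded right after programming    */
    int64_t  program_time;       /* programming time (seconds since epoch) */
    int16_t  program_temp_c;     /* die/sub-system temperature at program  */
};

/* Hypothetical stubs standing in for firmware services. */
static void program_page(uint32_t addr, const uint8_t *data, size_t len) { (void)addr; (void)data; (void)len; }
static int32_t measure_vt(uint32_t addr) { (void)addr; return 100; }
static int16_t read_temperature_c(void) { return 35; }
static void persist_metadata(const struct scl_marker *m) { (void)m; }

/* On a power-off request: program the dummy page with a random pattern and
 * persist the marker so the power-up path can measure the SCL shift. */
static void on_power_off(uint32_t dummy_page_addr)
{
    uint8_t pattern[64];
    for (size_t i = 0; i < sizeof pattern; i++)
        pattern[i] = (uint8_t)rand();

    program_page(dummy_page_addr, pattern, sizeof pattern);

    struct scl_marker m = {
        .dummy_page_addr = dummy_page_addr,
        .programmed_vt   = measure_vt(dummy_page_addr),
        .program_time    = (int64_t)time(NULL),
        .program_temp_c  = read_temperature_c(),
    };
    persist_metadata(&m);
    printf("SCL marker stored: VT=%d, temp=%dC\n",
           (int)m.programmed_vt, (int)m.program_temp_c);
}

int main(void) { on_power_off(0x1000u); return 0; }
```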
In some embodiments, the power-off event can occur in response to a request sent by the host system 120, such as a shutdown or standby request. In some embodiments, the power-off event can occur in response to a request sent by the memory sub-system controller 115. The power-off event can be a synchronous event (e.g., an expected event, such as in response to a power-off request), or an asynchronous event (e.g., an unexpected event due to unforeseen power loss). Processing logic can determine whether the power-off event was an asynchronous power event. For asynchronous power events, in some embodiments, a capacitor can briefly power the memory sub-system (such as memory sub-system 110 as described with respect to
At operation 620, responsive to detecting a power-up event, processing logic determines a voltage shift associated with the dummy subset of the plurality of cells. In some embodiments, processing logic can determine a duration of a power-off state. Processing logic can use the voltage shift as an input value for a calibrated table to identify an output value of the duration of the power-off state. Processing logic can use the voltage shift as an input value for a pre-calibrated table (e.g., a static table pre-calibrated during production of the memory sub-system) to identify an output value of a voltage offset bin shift for the dummy subset of the plurality of cells. In some embodiments, processing logic can use the voltage shift as an input value for a pre-calibrated table to identify one or more output values of voltage offset bin shifts for block families on a memory device.
The voltage shift associated with the dummy subset of the plurality of cells can correspond to the voltage shift that the dummy subset experienced during a power-off state. A duration of the power-off state can correspond to a voltage offset bin shift. In some embodiments, a longer duration of the power-off state can correspond to a larger change in the voltage offset bin (i.e., a longer duration can correspond to a larger value of the voltage offset bin shift). Processing logic can determine the voltage shift of the dummy subset of the plurality of cells by applying a read voltage to the dummy subset and determining the number of bits above the read voltage and the number of bits below the read voltage. The applied read voltage can correspond to a valley voltage (e.g., valley center) of a memory cell having multiple levels (e.g., a multi-level cell “MLC,” a tri-level cell “TLC,” a quad-level cell “QLC,” etc.). Processing logic can determine the voltage shift of the dummy subset of the plurality of cells by measuring the shift of a valley center (e.g., the center of a valley between two adjacent voltage distributions corresponding to respective logical levels) of the dummy subset of the plurality of cells. In some embodiments, processing logic can determine the valley center of a highest voltage distribution valley of the dummy subset of the plurality of cells. The highest voltage distribution valley of the memory cell can be the voltage gap (e.g., valley margin) between a highest programmable level of the memory cell, and a second highest programmable level of the memory cell as described above with reference to
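As one more illustrative sketch in C, the fragment below shows the "bits above versus bits below a single read voltage" measurement mentioned here, using a synthetic array of threshold voltages in place of actual device reads; the numbers are invented and the imbalance check is a simplification of what a controller would do.

```c
/* Sketch under assumptions: a synthetic VT array stands in for reading the
 * dummy page; on hardware, these counts would come from device reads. */
#include <stdio.h>
#include <stdlib.h>

/* Count how many cells read below vs. above a single applied read voltage.
 * An imbalance toward "below" (relative to the split expected at program
 * time) indicates the distributions drifted down during the power-off state. */
static void count_split(const int *vt, int n, int read_level,
                        int *below, int *above)
{
    *below = 0;
    *above = 0;
    for (int i = 0; i < n; i++) {
        if (vt[i] < read_level) (*below)++;
        else                    (*above)++;
    }
}

int main(void)
{
    int vt[200];
    /* Synthetic dummy-page cells: half near L6 (~70), half near L7 (~92),
     * both shifted down from where they were programmed. */
    for (int i = 0; i < 100; i++) vt[i] = 70 + rand() % 5;
    for (int i = 100; i < 200; i++) vt[i] = 92 + rand() % 5;

    int below, above;
    count_split(vt, 200, 100 /* stored pre-power-off valley center */,
                &below, &above);
    printf("below=%d above=%d -> read level sits above the shifted valley\n",
           below, above);
    return 0;
}
```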
In some embodiments, the potential time savings at operation 620 can increase with each additional block family on the memory device. For example, in at least one embodiment, processing logic can read a single dummy subset of the plurality of cells to determine the voltage shift for the memory device, rather than separately reading each of 64 block families; reading and processing the dummy subset provides the voltage shift information that reading all 64 block families would otherwise provide, eliminating 63 of the 64 reads.
At operation 630, processing logic identifies, based on the voltage shift, a voltage offset bin shift corresponding to a voltage offset bin associated with a specified subset of the plurality of cells. The voltage offset bin can be a TAP bin such as “bin 0” 331 as described with reference to
Processing logic can update a data structure (such as BFEA table 510 as described with respect to
In some embodiments, processing logic can map the voltage shift to the voltage offset bin shift with a static table (e.g., binset table), such as binset table 550 as described with respect to
The machine can be a personal computer (PC), a tablet PC, a set-top box (STB), a Personal Digital Assistant (PDA), a cellular telephone, a web appliance, a server, a network router, a switch or bridge, or any machine capable of executing a set of instructions (sequential or otherwise) that specify actions to be taken by that machine. Further, while a single machine is illustrated, the term “machine” shall also be taken to include any collection of machines that individually or jointly execute a set (or multiple sets) of instructions to perform any one or more of the methodologies discussed herein.
The example computer system 700 includes a processing device 702, a main memory 704 (e.g., read-only memory (ROM), flash memory, dynamic random access memory (DRAM) such as synchronous DRAM (SDRAM) or RDRAM, etc.), a static memory 706 (e.g., flash memory, static random access memory (SRAM), etc.), and a data storage system 718, which communicate with each other via a bus 730.
Processing device 702 represents one or more general-purpose processing devices such as a microprocessor, a central processing unit, or the like. More particularly, the processing device can be a complex instruction set computing (CISC) microprocessor, reduced instruction set computing (RISC) microprocessor, very long instruction word (VLIW) microprocessor, or a processor implementing other instruction sets, or processors implementing a combination of instruction sets. Processing device 702 can also be one or more special-purpose processing devices such as an application specific integrated circuit (ASIC), a field programmable gate array (FPGA), a digital signal processor (DSP), network processor, or the like. The processing device 702 is configured to execute instructions 726 for performing the operations and steps discussed herein. The computer system 700 can further include a network interface device 708 to communicate over the network 720.
The data storage system 718 can include a machine-readable storage medium 724 (also known as a computer-readable medium) on which is stored one or more sets of instructions 726 or software embodying any one or more of the methodologies or functions described herein. The instructions 726 can also reside, completely or at least partially, within the main memory 704 and/or within the processing device 702 during execution thereof by the computer system 700, the main memory 704 and the processing device 702 also constituting machine-readable storage media. The machine-readable storage medium 724, data storage system 718, and/or main memory 704 can correspond to the memory sub-system 110 of
In one embodiment, the instructions 726 include instructions to implement SCL monitoring functionality corresponding to an SCL Monitor Component (e.g., the SCL Monitor Component 113 of
Some portions of the preceding detailed descriptions have been presented in terms of algorithms and symbolic representations of operations on data bits within a computer memory. These algorithmic descriptions and representations are the ways used by those skilled in the data processing arts to most effectively convey the substance of their work to others skilled in the art. An algorithm is here, and generally, conceived to be a self-consistent sequence of operations leading to a desired result. The operations are those requiring physical manipulations of physical quantities. Usually, though not necessarily, these quantities take the form of electrical or magnetic signals capable of being stored, combined, compared, and otherwise manipulated. It has proven convenient at times, principally for reasons of common usage, to refer to these signals as bits, values, elements, symbols, characters, terms, numbers, or the like.
It should be borne in mind, however, that all of these and similar terms are to be associated with the appropriate physical quantities and are merely convenient labels applied to these quantities. The present disclosure can refer to the action and processes of a computer system, or similar electronic computing device, that manipulates and transforms data represented as physical (electronic) quantities within the computer system's registers and memories into other data similarly represented as physical quantities within the computer system memories or registers or other such information storage systems.
The present disclosure also relates to an apparatus for performing the operations herein. This apparatus can be specially constructed for the intended purposes, or it can include a general purpose computer selectively activated or reconfigured by a computer program stored in the computer. Such a computer program can be stored in a computer readable storage medium, such as, but not limited to, any type of disk including floppy disks, optical disks, CD-ROMs, and magnetic-optical disks, read-only memories (ROMs), random access memories (RAMs), EPROMs, EEPROMs, magnetic or optical cards, or any type of media suitable for storing electronic instructions, each coupled to a computer system bus.
The algorithms and displays presented herein are not inherently related to any particular computer or other apparatus. Various general purpose systems can be used with programs in accordance with the teachings herein, or it can prove convenient to construct a more specialized apparatus to perform the method. The structure for a variety of these systems will appear as set forth in the description below. In addition, the present disclosure is not described with reference to any particular programming language. It will be appreciated that a variety of programming languages can be used to implement the teachings of the disclosure as described herein.
The present disclosure can be provided as a computer program product, or software, that can include a machine-readable medium (e.g., non-transitory computer-readable storage medium) having stored thereon instructions, which can be used to program a computer system (or other electronic devices) to perform a process according to the present disclosure. A machine-readable medium includes any mechanism for storing information in a form readable by a machine (e.g., a computer). In some embodiments, a machine-readable (e.g., computer-readable) medium includes a machine (e.g., a computer) readable storage medium such as a read only memory (“ROM”), random access memory (“RAM”), magnetic disk storage media, optical storage media, flash memory components, etc.
In the foregoing specification, embodiments of the disclosure have been described with reference to specific example embodiments thereof. It will be evident that various modifications can be made thereto without departing from the broader spirit and scope of embodiments of the disclosure as set forth in the following claims. The specification and drawings are, accordingly, to be regarded in an illustrative sense rather than a restrictive sense.
The present application claims the benefit under 35 U.S.C. § 119(e) of U.S. Provisional Patent Application No. 63/444,498 filed Feb. 9, 2023, which is incorporated herein by reference.