This Application claims priority of Taiwan Patent Application No. 112139162, filed on Oct. 13, 2023, the entirety of which is incorporated by reference herein.
The present invention relates to control technology for a data storage device.
There are various forms of non-volatile memory (NVM) used for long-term data storage, such as flash memory, magnetoresistive random access memory (magnetoresistive RAM), ferroelectric RAM, resistive RAM, spin transfer torque-RAM (STT-RAM), and so on. These types of non-volatile memory may be used as the storage medium in a data storage device.
In this technical field, how to efficiently control non-volatile memory is an important issue.
In this disclosure, instructions are cached on the device side to be scheduled for execution. The proposed data storage device has a non-volatile memory and a controller coupled to the non-volatile memory. The controller is configured to operate the non-volatile memory in response to requests from the host side. The controller has an instruction cache operative to cache instructions issued from the host side. Based on information carried by the instructions, the controller schedules and executes the instructions to operate the non-volatile memory. The information carried by each instruction includes a namespace identifier and/or a function identifier. Different combinations of this information correspond to different priority levels or processing bandwidths that the controller uses when scheduling execution of the instructions.
The above concepts are also used to implement the control method of non-volatile memory.
A detailed description is given in the following embodiments with reference to the accompanying drawings.
The present invention can be more fully understood by reading the subsequent detailed description and examples with references made to the accompanying drawings, wherein:
The following description enumerates various embodiments of the disclosure, but the disclosure is not limited thereto. The actual scope of the disclosure should be defined according to the claims. The various blocks/modules mentioned below may be implemented by a combination of hardware, software, and firmware, and may also be implemented by special-purpose circuits. The various blocks/modules are not limited to being implemented separately; they can also be combined to share certain functions.
A non-volatile memory for long-term data retention may be a flash memory, a magnetoresistive random access memory (magnetoresistive RAM), a ferroelectric RAM, a resistive RAM, a spin transfer torque-RAM (STT-RAM) and so on. The following discussion uses flash memory as an example, but is not limited thereto. The proposed technology may be applied to the other types of non-volatile memory.
Today's data storage devices often use flash memory as the storage medium for storing user data from the host. There are many types of data storage devices, including memory cards, universal serial bus (USB) flash devices, solid-state drives (SSDs), and so on. In another exemplary embodiment, a flash memory may be packaged with a controller to form a multiple-chip package called eMMC (embedded multimedia card).
A data storage device using a flash memory as a storage medium can be applied in a variety of electronic devices, including a smartphone, a wearable device, a tablet computer, a virtual reality device, etc. A processor of an electronic device may be regarded as a host that operates the data storage device equipped on the electronic device to access the flash memory within the data storage device.
A data center may be built with data storage devices using flash memories as the storage medium. For example, a server may operate an array of SSDs to form a data center. The server may be regarded as a host that operates the SSDs to access the flash memories within the SSDs.
A vehicle-mounted device may also use a flash memory for data storage. Each of the various sensors in the vehicle system may be regarded as a host end that needs to access the flash memory.
A flash memory has its own special storage characteristics. The host indicates logical addresses (for example, logical block addresses (LBAs) or global host page numbers (GHPs)) to issue read or write requests to the flash memory. The logical addresses need to be mapped to physical addresses in the physical space of the flash memory. This disclosure involves namespace technology. Each namespace corresponds to a set of logical addresses and is accessible only to particular software. Each namespace has its corresponding namespace identifier (NSID). A flash memory access instruction issued by the host may be accompanied by a namespace identifier (NSID), which is recognized by the device controller.
In some exemplary embodiments, multiple host ends may share the same data storage device; for example, in an in-vehicle system. Single root I/O virtualization (SRIOV) technology is proposed for Peripheral Component Interconnect Express (PCIe) expansion. Based on virtual machine (VM) technology, a data storage device connected to a PCIe slot may be accessed via a physical channel or via expanded virtual channels. The physical channel relates to a physical function, and the virtual channels relate to virtual functions (VFs). Each function corresponds to a function identifier, abbreviated as VFID. For example, the physical function may be represented by the function identifier VFID0, and the virtual functions are numbered VFID1, VFID2, and so on. A flash memory access instruction issued by the host may be accompanied by its corresponding function identifier (VFID) to be recognized by the device controller.
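For illustration only, the following sketch (in Python) shows one hypothetical way a device-side cached instruction entry might record the namespace identifier and function identifier carried by a host instruction. The class name, field names, and example values are assumptions made for this sketch and are not taken from the disclosure.

```python
# Hypothetical sketch of a device-side cached instruction entry; field names
# and types are illustrative assumptions, not part of the disclosure.
from dataclasses import dataclass

@dataclass
class CachedInstruction:
    nsid: int     # namespace identifier (NSID) indicated by the host
    vfid: int     # function identifier (VFID); 0 denotes the physical function
    opcode: str   # e.g. "read" or "write" on the non-volatile memory
    lba: int      # logical address, e.g. a logical block address (LBA)
    length: int   # number of logical blocks to transfer

# Example: an instruction issued through virtual function VFID1, targeting namespace NSID 2.
cmd = CachedInstruction(nsid=2, vfid=1, opcode="read", lba=0x1000, length=8)
```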
In this disclosure, the instructions are cached on the device side and are scheduled and executed based on the information carried by the instructions (such as the aforementioned namespace identifier NSID and/or function identifier VFID), so that the non-volatile memory is operated accordingly.
First, an instruction with a namespace identifier (NSID) attached thereto is discussed as an example.
In an exemplary embodiment, different namespace identifiers (NSIDs) correspond to different priority levels. The controller 104 on the device side preferentially executes the instructions with the higher-priority NSID in the instruction cache 110, and then executes the instructions with the lower-priority NSID in the instruction cache 110.
Referring to
If the functional block 208 determines that the namespace identifier NSID_A is not assigned the higher priority, it means that the namespace identifier NSID_B relates to the higher priority. The device operates the functional block 218 to determine whether the instruction being processed relates to the namespace identifier NSID_B. If so, the device executes the instruction and operates the functional block 218 again to determine whether the next instruction also relates to the namespace identifier NSID_B. Otherwise, the device operates the functional block 220 to determine whether any instruction with namespace identifier NSID_B is cached in the other space of the instruction cache 110. If so, the device operates the functional block 222 to fetch a new instruction with namespace identifier NSID_B from the instruction cache 110, and returns to the former steps to execute the fetched instruction according to its execution priority. If the instruction cache 110 does not cache any instruction with namespace identifier NSID_B, the device operates the functional block 224 to determine whether it is processing an instruction with namespace identifier NSID_A. If so, the device executes the instruction and operates the functional block 220 to check the instruction cache 110, to ensure that instructions with namespace identifier NSID_B are executed with the higher priority. If the functional block 224 determines that neither an instruction with namespace identifier NSID_B nor an instruction with namespace identifier NSID_A is being processed, the device operates the functional block 222 to fetch a new instruction from the instruction cache 110, and then returns to the former steps to schedule its execution according to its execution priority.
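For illustration only, the following simplified sketch shows a strict-priority selection loop in the spirit of the flow described above. It models the instruction cache as a simple queue of entries like the hypothetical CachedInstruction sketch given earlier; the function and parameter names are assumptions, and the sketch is not the exact flowchart of the disclosure.

```python
# Simplified strict-priority selection; assumes cache entries expose the
# identifier extracted by `key` (e.g. the hypothetical CachedInstruction.nsid).
from collections import deque

def schedule_by_priority(cache, hi, lo, key):
    """Yield cached instructions, always preferring identifier `hi` over `lo`."""
    while cache:
        # Prefer any cached instruction carrying the higher-priority identifier.
        chosen = next((c for c in cache if key(c) == hi), None)
        if chosen is None:
            # Nothing cached for the higher-priority identifier; fall back to `lo`.
            chosen = next((c for c in cache if key(c) == lo), None)
        if chosen is None:
            break                              # nothing schedulable under either identifier
        cache.remove(chosen)
        yield chosen                           # hand the instruction over for execution

# Usage, with NSID_A (here 1) granted the higher priority over NSID_B (here 2):
cache = deque([CachedInstruction(1, 0, "read", 0, 8),
               CachedInstruction(2, 0, "write", 8, 8),
               CachedInstruction(1, 0, "read", 16, 8)])
order = [c.nsid for c in schedule_by_priority(cache, hi=1, lo=2, key=lambda c: c.nsid)]
assert order == [1, 1, 2]                      # both NSID_A instructions run first
```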
Referring to
Compared with the single host end 202 in
In an exemplary embodiment, instructions with different namespace identifiers (NSIDs) correspond to different processing bandwidths.
During the first time interval T1, the controller 104 prioritizes the execution of instructions cached in the instruction cache 110 and related to a first namespace identifier. During the second time interval T2, the controller 104 prioritizes the execution of instructions cached in the instruction cache 110 and related to a second namespace identifier. In an exemplary embodiment, the controller 104 on the device side divides an execution cycle into the first time interval T1 and the second time interval T2, which have different lengths. Alternatively, two time intervals that are planned independently of the division of an execution cycle may also be used as the first time interval T1 and the second time interval T2.
Referring to
Referring to
Compared with the single host end 202 in
In some situations, there is no instruction with namespace identifier NSID_A to be executed even though the first time interval T1 has not expired. In this case, the controller 104 is allowed to execute the instructions with namespace identifier NSID_B. Similarly, if the second time interval T2 has not expired but there is no instruction with namespace identifier NSID_B to be executed, the controller 104 can execute the instructions with namespace identifier NSID_A.
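For illustration only, the following sketch shows the interval-based bandwidth sharing described above, including the fallback to the other namespace when the preferred namespace has no cached instructions. The time source, the interval lengths, and the execute hook are illustrative assumptions; cache entries are assumed to expose an nsid attribute as in the earlier sketch.

```python
# Interval-based bandwidth sharing with fallback; the `execute` callback and
# the use of a monotonic time source are assumptions made for this sketch.
import time

def run_interval(cache, preferred, other, seconds, execute):
    """Execute cached instructions for one interval, preferring one NSID."""
    deadline = time.monotonic() + seconds
    while time.monotonic() < deadline and cache:
        chosen = next((c for c in cache if c.nsid == preferred), None)
        if chosen is None:
            # The preferred namespace has nothing cached; do not idle, fall back.
            chosen = next((c for c in cache if c.nsid == other), None)
        if chosen is None:
            break
        cache.remove(chosen)
        execute(chosen)

def bandwidth_cycle(cache, t1, t2, nsid_a, nsid_b, execute):
    """One execution cycle: interval T1 prefers NSID_A, interval T2 prefers NSID_B."""
    run_interval(cache, nsid_a, nsid_b, t1, execute)
    run_interval(cache, nsid_b, nsid_a, t2, execute)
```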
In an exemplary embodiment, the device-side controller 104 monitors the number of executed instructions to adjust the processing bandwidth. The controller 104 is configured such that the namespace identifier NSID_A corresponds to an instruction amount N1 and the namespace identifier NSID_B corresponds to an instruction amount N2, where N1≠N2. The controller 104 is scheduled to execute N1 instructions with namespace identifier NSID_A and then N2 instructions with namespace identifier NSID_B, and this N1:N2 cycle is repeated. If N1 is greater than N2, the instructions with namespace identifier NSID_A gain greater processing bandwidth than the instructions with namespace identifier NSID_B.
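For illustration only, a sketch of the count-based scheme described above is shown below; the function name, N1, N2, and the execute hook are assumptions made for this sketch, and cache entries are assumed to expose an nsid attribute as in the earlier sketch.

```python
# Count-based bandwidth sharing: up to N1 NSID_A instructions, then up to N2
# NSID_B instructions; repeating this cycle with N1 > N2 favors NSID_A.
def run_count_cycle(cache, nsid_a, n1, nsid_b, n2, execute):
    for nsid, quota in ((nsid_a, n1), (nsid_b, n2)):
        executed = 0
        while executed < quota:
            chosen = next((c for c in cache if c.nsid == nsid), None)
            if chosen is None:
                break                  # nothing cached for this NSID right now
            cache.remove(chosen)
            execute(chosen)
            executed += 1
```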
In an exemplary embodiment, the device-side controller 104 controls the processing bandwidth by setting a bottom limit on the number of remaining cached instructions. The controller 104 is configured such that the namespace identifier NSID_A corresponds to a bottom limit TH1 and the namespace identifier NSID_B corresponds to a bottom limit TH2, where TH1≠TH2. The controller 104 executes the instructions with namespace identifier NSID_A until the number of remaining cached instructions with namespace identifier NSID_A drops to the bottom limit TH1, and then switches to execute the instructions with namespace identifier NSID_B. The execution of the instructions with namespace identifier NSID_B continues until the number of remaining cached instructions with namespace identifier NSID_B drops to the bottom limit TH2, and then the controller 104 switches back to execute the instructions with namespace identifier NSID_A, and so on in cycles. If TH1 is smaller than TH2, the instructions with namespace identifier NSID_A gain greater processing bandwidth than the instructions with namespace identifier NSID_B.
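For illustration only, a sketch of the bottom-limit scheme described above is shown below; the function names, TH1, TH2, and the execute hook are assumptions made for this sketch.

```python
# Bottom-limit bandwidth sharing: drain NSID_A until TH1 of its instructions
# remain cached, then drain NSID_B until TH2 remain; TH1 < TH2 favors NSID_A.
def drain_to_threshold(cache, nsid, threshold, execute):
    """Execute instructions of one NSID until its remaining cached count drops to the limit."""
    while sum(1 for c in cache if c.nsid == nsid) > threshold:
        chosen = next(c for c in cache if c.nsid == nsid)
        cache.remove(chosen)
        execute(chosen)

def threshold_cycle(cache, nsid_a, th1, nsid_b, th2, execute):
    """One cycle: NSID_A down to TH1 remaining, then NSID_B down to TH2 remaining."""
    drain_to_threshold(cache, nsid_a, th1, execute)
    drain_to_threshold(cache, nsid_b, th2, execute)
```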
The following discusses examples in which function identifiers (VFIDs) are used as the information attached to the instructions. Such embodiments relate to SRIOV applications.
In an exemplary embodiment, instructions with different VFIDs are at different priority levels. The controller 104 on the device side preferentially executes the instructions cached in the instruction cache 110 and related to the higher-priority VFID, and then executes the instructions cached in the instruction cache 110 and related to the lower-priority VFID.
Referring to
If the functional block 604 determines that the physical function (VFID0) is not granted the higher priority, it means that the virtual function (VFID1) corresponds to the higher priority. At the device side, the functional block 614 determines whether the instruction being processed is a virtual function instruction with VFID1. If so, the device executes the instruction and operates the functional block 614 again to determine whether the next instruction is also a virtual function instruction with VFID1. Otherwise, the device may operate the functional block 616 to determine whether any virtual function instruction with VFID1 is cached in the remaining space of the instruction cache 110. If so, the device operates the functional block 618 to fetch a new virtual function instruction with VFID1 from the instruction cache 110, and returns to the former steps to execute it according to its priority level. If the instruction cache 110 currently does not have any virtual function instruction with VFID1, the device operates the functional block 620 to determine whether the instruction being processed is a physical function instruction with VFID0. If so, the device executes the instruction and operates the functional block 616 to check the instruction cache 110, to ensure that the virtual function instructions with VFID1 take priority over the physical function instructions with VFID0. If the functional block 620 determines that neither a physical function instruction with VFID0 nor a virtual function instruction with VFID1 is being processed, the device operates the functional block 618 to fetch a new instruction from the instruction cache 110, and then returns to the former steps to schedule its execution according to its priority level.
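For illustration only, the strict-priority selection loop sketched earlier for namespace identifiers may be reused with the function identifier as the scheduling key. The following usage sketch assumes that the virtual function VFID1 is granted the higher priority over the physical function VFID0, and reuses the hypothetical CachedInstruction and schedule_by_priority definitions from the earlier sketches.

```python
from collections import deque

# Same selection logic, keyed on the function identifier (VFID) instead of
# the namespace identifier; VFID1 is assumed to have the higher priority.
cache = deque([CachedInstruction(1, 0, "write", 0, 8),    # physical function VFID0
               CachedInstruction(1, 1, "read", 32, 8)])   # virtual function VFID1
order = [c.vfid for c in schedule_by_priority(cache, hi=1, lo=0, key=lambda c: c.vfid)]
assert order == [1, 0]                                    # VFID1 is served before VFID0
```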
Referring to
In an exemplary embodiment, instructions with different VFIDs correspond to different processing bandwidths.
During the first time interval T1, the controller 104 prioritizes executing instructions cached in the instruction cache 110 and related to a first function identifier. During the second time interval T2, the controller 104 prioritizes executing instructions cached in the instruction cache 110 and related to a second function identifier. In an exemplary embodiment, the controller 104 on the device side divides an execution cycle into the first time interval T1 and the second time interval T2, which have different lengths. Alternatively, two time intervals that are planned independently of the division of an execution cycle may also be used as the first time interval T1 and the second time interval T2.
Compared with the examples of
In some situations, there is no instruction with function identifier VFID0 to be executed even though the first time interval T1 has not expired. In this case, the controller 104 is allowed to execute the instructions with function identifier VFID1. Similarly, if the second time interval T2 has not expired but there is no instruction with function identifier VFID1 to be executed, the controller 104 can execute the instructions with function identifier VFID0.
In an exemplary embodiment, the device-side controller 104 monitors the number of executed instructions to adjust the processing bandwidth. The controller 104 is configured such that the function identifier VFID0 corresponds to an instruction amount N1 and the function identifier VFID1 corresponds to an instruction amount N2, where N1≠N2. The controller 104 is scheduled to execute N1 instructions with function identifier VFID0 and then N2 instructions with function identifier VFID1, and this N1:N2 cycle is repeated. If N1 is greater than N2, the instructions with function identifier VFID0 gain greater processing bandwidth than the instructions with function identifier VFID1.
In an exemplary embodiment, the device-side controller 104 controls the processing bandwidth by setting a bottom limit on the number of remaining cached instructions. The controller 104 is configured such that the function identifier VFID0 corresponds to a bottom limit TH1 and the function identifier VFID1 corresponds to a bottom limit TH2, where TH1≠TH2. The controller 104 executes the instructions with function identifier VFID0 until the number of remaining cached instructions with function identifier VFID0 drops to the bottom limit TH1, and then switches to execute the instructions with function identifier VFID1. The execution of the instructions with function identifier VFID1 continues until the number of remaining cached instructions with function identifier VFID1 drops to the bottom limit TH2, and then the controller 104 switches back to execute the instructions with function identifier VFID0, and so on in cycles. If TH1 is smaller than TH2, the instructions with function identifier VFID0 gain greater processing bandwidth than the instructions with function identifier VFID1.
In other exemplary embodiments, the priority-level/processing-bandwidth plan is not limited to scheduling a physical function and a virtual function. Different priority levels/processing bandwidths may be assigned to different virtual functions. For example, a first virtual function may correspond to a priority level/processing bandwidth different from that of a second virtual function.
The above concepts may be used to implement a non-volatile memory control method, which includes: caching a plurality of instructions that are issued from the host side to operate a non-volatile memory; and scheduling and executing the instructions according to information attached to the instructions, so as to operate the non-volatile memory accordingly. The information may be a namespace identifier (NSID) and/or a function identifier (VFID). Different combinations of these identifiers may correspond to different priority levels or processing bandwidths for instruction scheduling.
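For illustration only, the following sketch shows one hypothetical way to map identifier combinations to scheduling policies. The table contents, names, and policy parameters are assumptions made for this sketch and are not limitations of the disclosure.

```python
# Hypothetical mapping from (VFID, NSID) combinations to scheduling policies;
# the entries are illustrative only.
SCHEDULING_POLICY = {
    (0, 1): {"priority": 0},        # physical function, namespace 1: highest priority
    (1, 1): {"priority": 1},        # virtual function 1, namespace 1: lower priority
    (1, 2): {"time_share": 0.25},   # virtual function 1, namespace 2: 25% of an execution cycle
}

def policy_for(cmd):
    """Look up the scheduling policy for a cached instruction, with a default."""
    return SCHEDULING_POLICY.get((cmd.vfid, cmd.nsid), {"priority": 9})
```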
While the invention has been described by way of example and in terms of the preferred embodiments, it should be understood that the invention is not limited to the disclosed embodiments. On the contrary, it is intended to cover various modifications and similar arrangements (as would be apparent to those skilled in the art). Therefore, the scope of the appended claims should be accorded the broadest interpretation so as to encompass all such modifications and similar arrangements.