Embodiments of the disclosure relate generally to storage systems, and more specifically, relate to completion management.
A memory sub-system can include one or more memory devices that store data. The memory devices can be, for example, non-volatile memory devices and volatile memory devices. In general, a host system can utilize a memory sub-system to store data at the memory devices and to retrieve data from the memory devices.
The present disclosure will be understood more fully from the detailed description given below and from the accompanying drawings of various embodiments of the disclosure.
Aspects of the present disclosure are directed to completion management associated with a storage system. An example of a storage system is a solid-state drive (SSD). An SSD can include multiple interface connections to one or more host systems. The interface connections can be referred to as ports. A host system can send protocol-based commands to the SSD via a port. Protocol-based commands received by the SSD can be translated into data commands (e.g., read, write, erase, program, etc.). In general, a host system can utilize a storage system that includes one or more memory components (also hereinafter referred to as “memory devices”). The host system can provide data to be stored at the storage system and can request data to be retrieved from the storage system.
A memory device can be a non-volatile memory device. One example of a non-volatile memory device is a three-dimensional cross-point memory device that includes a cross-point array of non-volatile memory cells. Another example of a non-volatile memory device is a negative-and (NAND) memory device (also known as flash technology). Other examples of non-volatile memory devices are described below in conjunction with
Each of the memory devices can include one or more arrays of memory cells. Depending on the cell type, a cell can store one or more bits of binary information and can have various logic states that correlate to the number of bits being stored. The logic states can be represented by binary values, such as “0” and “1”, or combinations of such values. There are various types of cells, such as single level cells (SLCs), multi-level cells (MLCs), triple level cells (TLCs), and quad-level cells (QLCs). For example, an SLC can store one bit of information and has two logic states.
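As a concrete illustration of that relationship, a cell storing n bits of information has 2^n logic states. The following minimal C sketch (the names are illustrative, not taken from the disclosure) makes the arithmetic explicit:

```c
#include <stdio.h>

/* Illustrative mapping from cell type to the number of bits stored
 * per cell; the enum values are the bit counts themselves. */
enum cell_type { SLC = 1, MLC = 2, TLC = 3, QLC = 4 };

/* A cell storing n bits of information has 2^n logic states. */
static unsigned logic_states(enum cell_type type)
{
    return 1u << (unsigned)type;
}

int main(void)
{
    printf("SLC: %u logic states\n", logic_states(SLC)); /* 2  */
    printf("MLC: %u logic states\n", logic_states(MLC)); /* 4  */
    printf("TLC: %u logic states\n", logic_states(TLC)); /* 8  */
    printf("QLC: %u logic states\n", logic_states(QLC)); /* 16 */
    return 0;
}
```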
Some NAND memory devices employ a floating-gate architecture in which memory accesses are controlled based on a relative voltage change between the bit line and the word lines. Other examples of NAND memory devices can employ a replacement-gate architecture that can include the use of word line layouts that can allow for charges corresponding to data values to be trapped within memory cells based on properties of the materials used to construct the word lines.
Transactions or commands, such as read commands or write commands, can be communicated to a storage system from a component or element coupled thereto. When the storage system has processed a transaction or command to completion, the storage system can generate an indication of completion of the transaction or command. Hereinafter, an indication of completion of a transaction or command is referred to as a “completion.” The storage system can communicate a data packet including a completion, such as a completion transaction layer packet (completion TLP), to a component external to the storage system. Hereinafter, such a data packet is referred to as a “completion data packet.” A completion can be placed in a submission queue prior to communicating the completion to the component or element that issued the transaction or command to which the completion corresponds. As used herein, a “submission queue” refers to a memory component of a storage system configured to store a completion and/or a completion data packet prior to communicating the completion and/or completion data packet to the component or element that issued the transaction or command to which the completion corresponds. A non-limiting example of a submission queue can be a register. The register can be a first in, first out (FIFO) register. A storage system can include one or more submission queues. For example, a multi-function storage system can include a submission queue associated with at least one function of the multi-function storage system. A submission queue can be shared by multiple functions of a storage system. Non-limiting examples of functions include physical functions and virtual functions (VFs). A storage system can send and/or receive data via a physical function and/or a VF as described herein.
Although a storage system can include multiple submission queues, a component (e.g., a physical component, such as a host system, and/or a virtual component, such as a virtual machine) to which a completion is to be communicated can include a single completion queue. As used herein, a “completion queue” refers to a memory component of a component coupled to a storage system configured to store a completion and/or a completion data packet received from the storage system. When communicating a completion and/or a completion data packet from a submission queue of a storage system to a completion queue of a component coupled to the storage system, the completion queue can be full (e.g., have insufficient storage capacity) such that the completion queue cannot receive the completion.
Some previous approaches to handling a failed attempt to communicate a completion and/or a completion data packet due to a completion queue being full can include a storage system including firmware configured to maintain an indication of a transaction or command to which the completion corresponds. The storage system can also include an allocation of memory to store the completions until another attempt to communicate the completion and/or the completion data packet is made. The storage system can also include firmware configured to evaluate the status of the completion queue and attempt to communicate the completion again according to a scheduling algorithm. The scheduling algorithm can maintain an order of the completions. For example, a storage system can receive a first transaction from a host system coupled thereto. Subsequently, the storage system can receive a second transaction. Communication of a first completion associated with the first transaction from the storage system to the host system can be unsuccessful. The firmware would have to ensure that the first completion is successfully communicated to the host system prior to attempting to communicate a second completion associated with the second transaction because the host system would expect to receive the first completion prior to receiving the second completion. Implementing firmware on a storage system to manage communications of completions as described above can be complex. Implementation of such firmware on a multi-function storage system can be even more complex due to the management of communication of completions associated with multiple functions.
Aspects of the present disclosure address the above and other deficiencies by including circuitry configured to manage communication of completions and/or completion data packets from the storage system. The circuitry can be configured to store a completion and/or a completion data packet in a local memory of the storage system in response to unsuccessful communication of the completion and/or the completion data packet. Attempting to communicate a completion or a completion data packet can include determining whether a completion queue has sufficient storage for the completion and/or completion data packet. For instance, the circuitry can be configured to store a completion and/or a completion data packet in a local memory of the storage system in response to determining that the completion queue has insufficient storage for the completion and/or completion data packet. As described herein, whether a completion queue has sufficient storage for a completion and/or a completion data packet can be determined using head and tail pointers of the completion queue. In some embodiments, the circuitry can include hardware logic to determine if the completion queue has sufficient storage available for the completion and/or the completion data packet.
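For illustration, the decision described above can be modeled in C as follows. This is a simplified sketch, not the disclosed circuitry: it assumes the completion queue behaves as a circular buffer tracked by head and tail pointers, and every identifier (cq_state, try_post_completion, retry_store, and so on) is hypothetical.

```c
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical view of a host completion queue as tracked by the
 * storage system: head and tail pointers over a circular buffer. */
struct cq_state {
    uint16_t head;     /* last slot the host has consumed up to   */
    uint16_t tail;     /* next slot the device would write        */
    uint16_t capacity; /* number of slots in the completion queue */
};

/* The queue has space unless advancing the tail would collide with
 * the head; one slot is sacrificed to distinguish full from empty. */
static bool cq_has_space(const struct cq_state *cq)
{
    return (uint16_t)((cq->tail + 1u) % cq->capacity) != cq->head;
}

/* The decision made by the completion-management circuitry: post the
 * completion if the queue has space, otherwise park it in local
 * memory (the retry FIFO) for a later attempt. */
bool try_post_completion(struct cq_state *cq, uint32_t completion,
                         bool (*post)(uint32_t),
                         void (*retry_store)(uint32_t))
{
    if (cq_has_space(cq) && post(completion)) {
        cq->tail = (uint16_t)((cq->tail + 1u) % cq->capacity);
        return true;             /* delivered to the completion queue */
    }
    retry_store(completion);     /* parked for a retry attempt */
    return false;
}
```

Reserving one unused slot to tell a full queue from an empty one is a common circular-buffer convention; a count-based variant is sketched later in this section.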
The local memory can be a register, such as a FIFO register. Such a FIFO register can be referred to as a “retry FIFO” in reference to the FIFO register being used to retry communication of a completion and/or a completion data packet following an unsuccessful communication of the completion and/or the completion data packet. The circuitry can be configured to communicate the completion and/or the completion data packet from the local memory to the completion queue. The circuitry can be configured to, in response to an unsuccessful communication of the completion and/or the completion data packet from the local memory to the completion queue, store the completion back in the local memory. The circuitry can be notified that the completion queue has storage available for the completion and/or completion data packet via a host notification to a completion queue tail doorbell register. Doorbell registers are discussed further herein. If the completion queue has insufficient storage and the completion and/or completion data packet is stored in the local memory, then the retry mechanism is engaged when the host performs a doorbell ring. This can be indicative of another completion and/or completion data packet being communicated (e.g., removed) from the completion queue.
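A simplified software model of that doorbell-driven retry path is sketched below. The identifiers are hypothetical, the FIFO push side is omitted, and a real implementation would be hardware logic rather than C, but the control flow is the same: a doorbell ring triggers an in-order drain of the retry FIFO that stops as soon as delivery fails again.

```c
#include <stdbool.h>
#include <stdint.h>

#define RETRY_DEPTH 64u

/* Hypothetical retry FIFO holding completions whose earlier delivery
 * attempts failed because the host completion queue was full. Head
 * and tail are free-running counters; head == tail means empty. */
struct retry_fifo {
    uint32_t entry[RETRY_DEPTH];
    uint32_t head, tail;
};

/* Invoked when the host rings the completion-queue doorbell, which
 * signals that entries were consumed and space may be available.
 * Drains the retry FIFO in arrival order, stopping as soon as a
 * delivery fails so that completion order is preserved. */
void on_cq_doorbell(struct retry_fifo *f, bool (*deliver)(uint32_t))
{
    while (f->head != f->tail) {
        uint32_t completion = f->entry[f->head % RETRY_DEPTH];
        if (!deliver(completion))
            break;               /* queue filled up again; retry later */
        f->head++;               /* completion has left the FIFO */
    }
}
```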
In some embodiments, the circuitry can include hardware logic to determine an order at which to communicate completions and/or completion data packets from the local memory. For example, the hardware logic can be configured to determine an indication of a transaction to which a completion corresponds to ensure that completions are communicated from the storage system to the completion queue in an order of the transactions to which the completions correspond.
Embodiments of the present disclosure can reduce complexity of firmware of a storage system, particularly of a multi-function storage system. Embodiments of the present disclosure can enable management of completion queues of a host system coupled to a storage system by the storage system rather than by the host system. Unlike firmware, hardware of a storage system can utilize a doorbell signal as a trigger. As used herein, a “doorbell signal” refers to a signal issued when a completion has been transferred to a submission queue. In some embodiments, a doorbell signal can be used as a trigger to attempt communication of a completion and/or determine whether a completion queue is full. Embodiments of the present disclosure can prevent firmware from stalling when a completion queue is full. For example, firmware can process other transactions or commands while the completion queue is full.
The storage system 116 can include memory devices 112-1 to 112-N (referred to collectively as the memory devices 112). The memory devices 112 can include any combination of the different types of non-volatile memory devices and/or volatile memory devices. Some examples of volatile memory devices include, but are not limited to, random access memory (RAM), such as dynamic random access memory (DRAM) and synchronous dynamic random access memory (SDRAM). Some examples of non-volatile memory devices include, but are not limited to, negative-and (NAND) type flash memory and write-in-place memory, such as a three-dimensional cross-point (“3D cross-point”) memory device. A cross-point array of non-volatile memory can perform bit storage based on a change of bulk resistance, in conjunction with a stackable cross-gridded data access array. Additionally, in contrast to many flash-based memories, cross-point non-volatile memory can perform a write in-place operation, where a non-volatile memory cell can be programmed without the non-volatile memory cell being previously erased.
Each of the memory devices 112 can include one or more arrays of memory cells. One type of memory cell, for example a single level cell (SLC), can store one bit per cell. Other types of memory cells, such as multi-level cells (MLCs), triple level cells (TLCs), and quad-level cells (QLCs), can store multiple bits per cell. In some embodiments, each of the memory devices 112 can include one or more arrays of memory cells such as SLCs, MLCs, TLCs, QLCs, or any combination of such. In some embodiments, a particular memory device can include an SLC portion and an MLC portion, a TLC portion, or a QLC portion of memory cells. The memory cells of the memory devices 112 can be grouped as pages, which refer to a logical unit of the memory device used to store data. With some types of memory (e.g., NAND), pages can be grouped to form blocks.
Although non-volatile memory components such as 3D cross-point are described, the memory device 112 can be based on any other type of non-volatile memory or storage device, such as negative-and (NAND), read-only memory (ROM), phase change memory (PCM), self-selecting memory, other chalcogenide based memories, ferroelectric random access memory (FeRAM), magneto random access memory (MRAM), negative-or (NOR) flash memory, and electrically erasable programmable read-only memory (EEPROM).
The host systems 104 can be coupled to the storage system 116 via a physical host interface. Examples of a physical host interface include, but are not limited to, a serial advanced technology attachment (SATA) interface, a peripheral component interconnect express (PCIe) interface, universal serial bus (USB) interface, Fibre Channel, Serial Attached SCSI (SAS), Small Computer System Interface (SCSI), a dual in-line memory module (DIMM) interface (e.g., DIMM socket interface that supports Double Data Rate (DDR)), etc. The physical host interface can be used to transmit data between the host systems 104 and the storage system 116. The host systems 104 can further utilize an NVM Express (NVMe) interface to access the memory devices 112 when the storage system 116 is coupled with the host systems 104 by the PCIe interface. The physical host interface can provide an interface for passing control, address, data, and other signals between the storage system 116 and the host systems 104.
The host systems 104 can send one or more commands (e.g., read, write, erase, program, etc.) to the storage system 116 via one or more ports (e.g., the port 106-1, . . . , the port 106-N) of the storage system 116. The port 106-1, . . . , the port 106-N can be referred to collectively as the ports 106. As used herein, a “port” can be a physical port or a virtual port. A physical port can be a port configured to send and/or receive data via a physical function. A virtual port can be a port configured to send and/or receive data via a virtual function (VF). The ports 106 of the storage system 116 can communicatively couple the storage system 116 to the host systems 104 via a communication link (e.g., a cable, bus, etc.) to establish a respective command path to a respective one of the host systems 104.
The storage system 116 can be capable of pipelined command execution in which multiple commands are executed, in parallel, on the storage system 116. The storage system 116 can have multiple command paths to the memory devices 112. For example, the storage system 116 can be a multi-channel and/or multi-function storage device that includes multiple physical PCIe paths to the memory devices 112. In another example, the storage system 116 can use single root input/output virtualization (SR-IOV) with multiple VFs that act as multiple logical paths to the memory devices 112.
SR-IOV is a specification that enables the efficient sharing of PCIe devices among virtual machines. A single physical PCIe device can be shared in a virtual environment using the SR-IOV specification. For example, the storage system 116 can be a PCIe device that is SR-IOV-enabled and can appear as multiple, separate physical devices, each with its own PCIe configuration space. With SR-IOV, a single IO resource, which is referred to as a physical function, can be shared by many virtual machines (VMs). An SR-IOV virtual function is a PCI function that is associated with an SR-IOV physical function. A VF is a lightweight PCIe function that shares one or more physical resources with the physical function and with other VFs that are associated with that physical function. Each SR-IOV device can have a physical function, and each physical function can have one or more VFs associated with it. The VFs are created by the physical function.
In some embodiments, the storage system 116 includes multiple physical paths and/or multiple VFs that provide access to a same storage medium (e.g., one or more of the memory devices 112) of the storage system 116. In some embodiments, at least one of the ports 106 can be PCIe ports.
In some embodiments, the computing system 100 supports SR-IOV via one or more of the ports 106. In some embodiments, the ports 106 are physical ports configured to transfer data via a physical function (e.g., a PCIe physical function). In some embodiments, the storage system 116 can include virtual ports 105-1, . . . , 105-N (referred to collectively as the virtual ports 105). The virtual ports 105 can be configured to transfer data via a VF (e.g., a PCIe VF). For instance, a single physical PCIe function (e.g., the port 106-N) can be virtualized to support multiple virtual components (e.g., virtual ports such as the virtual ports 105) that can send and/or receive data via respective VFs (e.g., over one or more physical channels which can each be associated with one or more VFs). In some embodiments, the storage system 116 is a resource of a VM.
The storage system 116 can be configured to support multiple connections to, for example, allow for multiple hosts (e.g., the host systems 104) to be connected to the storage system 116. As an example, in some embodiments, the computing system 100 can be deployed in environments in which one or more of the host systems 104 are located in geophysically disparate locations, such as in a software defined data center deployment. For example, the host system 104-1 can be in one physical location (e.g., in a first geophysical region), the host system 104-N can be in a second physical location (e.g., in a second geophysical region), and/or the storage system 116 can be in a third physical location (e.g., in a third geophysical region).
The storage system 116 can include a system controller 102 to communicate with the memory devices 112 to perform operations such as reading data, writing data, or erasing data at the memory devices 112 and other such operations. The system controller 102 can include hardware such as one or more integrated circuits and/or discrete components, a buffer memory, or a combination thereof. The hardware can include digital circuitry with dedicated (i.e., hard-coded) logic to perform the operations described herein. The system controller 102 can be a microcontroller, special purpose logic circuitry (e.g., a field programmable gate array (FPGA), an application specific integrated circuit (ASIC), etc.), or other suitable processor. In general, the system controller 102 can receive commands or operations from any one of the host systems 104 and can convert the commands or operations into instructions or appropriate commands to achieve the desired access to the memory devices 112. The system controller 102 can be responsible for other operations such as wear leveling operations, garbage collection operations, error detection and error-correcting code (ECC) operations, encryption operations, caching operations, and address translations between a logical block address and a physical block address that are associated with the memory devices 112.
The storage system 116 can include a completion management component 113. The completion management component 113 can be configured to attempt to communicate a first completion associated with a first transaction processed by the storage system 116 to a component external to the storage system 116. The first completion can be included in a first completion data packet. The completion management component 113 can be configured to, responsive to failure to communicate the first completion, store the first completion in a local memory of the storage system 116 and subsequently attempt a communication of the first completion from the local memory. The completion management component 113 can be further configured to, responsive to failure to communicate the first completion from the local memory, return the first completion to the local memory and subsequently attempt again to communicate the first completion from the local memory to the component external to the storage system.
The completion management component 113 can be configured to attempt to communicate a second completion associated with a second transaction processed by the storage system 116 to the component external to the storage system or another component external to the storage system. The second completion can be included in a second completion data packet. The completion management component 113 can be configured to, responsive to failure to communicate the second completion, store the second completion in the local memory and subsequently attempt to communicate the second completion from the local memory to the component external to the storage system or the other component external to the storage system. The completion management component 113 can be configured to maintain an order of processing of the first transaction and the second transaction and attempt communication of the first completion and the second completion from the local memory to the component external to the storage system or the other component external to the storage system according to the order. The first transaction can be processed by the storage system 116 prior to the second transaction. The completion management component 113 can be configured to attempt communication of the first completion from the local memory to the component external to the storage system prior to attempting communication of the second completion from the local memory to the component external to the storage system or the other component external to the storage system.
In some embodiments, the completion management component 113 can be configured to determine whether an initial communication of a completion, associated with a transaction processed by the storage system 116, from the storage system 116 to one or more of the host systems 104 is successful. Communication of the completion can be achieved via communication of a completion data packet including the completion. At least one of the host systems 104 can include a completion queue. The completion management component 113 can be configured to store the completion in a register of the storage system 116 and attempt to communicate the completion from the register to one or more of the host systems 104 in response to determining that the initial communication of the completion was unsuccessful. The register can be a FIFO register. In some embodiments, the completion management component 113 can be configured to determine whether the initial communication of the completion is successful without involvement of firmware of the storage system 116. The completion management component 113 can be configured to attempt to communicate the completion from the register in a round robin manner.
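The round robin behavior can be pictured as a simple arbiter over per-function retry storage. In the illustrative C sketch below, fn_pending and fn_retry are hypothetical stand-ins for the hardware's per-function retry state; the sketch delivers at most one parked completion per arbitration round and rotates the starting point so no function monopolizes the retry bandwidth.

```c
#include <stdbool.h>
#include <stdint.h>

#define NUM_FUNCS 4u

/* Hypothetical per-function retry state: fn_pending and fn_retry are
 * stand-ins for whatever per-function retry storage the hardware has. */
struct rr_state {
    uint32_t next;                   /* function to consider first    */
    bool (*fn_pending)(uint32_t fn); /* has a parked completion?      */
    bool (*fn_retry)(uint32_t fn);   /* attempt redelivery; true = ok */
};

/* One arbitration round: scan the functions starting with the one
 * after the function served last time and deliver at most one parked
 * completion, so no single function can monopolize retry bandwidth. */
void round_robin_retry(struct rr_state *s)
{
    for (uint32_t i = 0; i < NUM_FUNCS; i++) {
        uint32_t fn = (s->next + i) % NUM_FUNCS;
        if (s->fn_pending(fn) && s->fn_retry(fn)) {
            s->next = (fn + 1u) % NUM_FUNCS; /* rotate starting point */
            return;
        }
    }
}
```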
In some embodiments, the storage system 116 can include one or more submission queues and hardware logic circuitry. The completion management component 113 can include the submission queues and/or the hardware logic circuitry. The system controller 102 can include the hardware logic circuitry. The hardware logic circuitry can be configured to retrieve a completion from one of the submission queues and retrieve another completion from another one of the submission queues. The hardware logic circuitry can be configured to perform a determination whether a completion queue to which a retrieved completion is to be communicated is full and, responsive to the determination being that the completion queue is full, store the completion in a register of the storage system.
In some embodiments, the hardware logic circuitry can be configured to determine whether the completion queue is full by maintaining a head pointer and a tail pointer of the completion queue maintained in memory of at least one of the host systems 104, updating the tail pointer in response to a completion being transferred to the completion queue, and determining that the completion queue is full in response to a value of the tail pointer being at least the storage capacity of the completion queue. A storage capacity of the register can correspond to a storage capacity of the completion queue such that if the value of the tail pointer is greater than or equal to the storage capacity of the completion queue, then the completion queue is full and cannot store another completion. Because the hardware logic circuitry manages communication of completions, after firmware of the storage system 116 communicates a completion to the hardware logic circuitry, the firmware has no further involvement with (e.g., performs no other operations in association with) communication of the completion. Furthermore, because the storage capacity of the register can correspond to the storage capacity of the completion queue, the storage system 116 can determine whether the completion queue is full without interacting with or interrogating the completion queue or a component or element on which the completion queue is implemented (e.g., without interacting with or interrogating a corresponding one of the host systems 104).
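A minimal C model of this bookkeeping, assuming the device mirrors each completion queue's head pointer, tail pointer, and resident-entry count in local registers (compare the SCQ Tail Array, SCQ Head Array, and SCQ Count Array discussed further below), could look as follows; all names are hypothetical.

```c
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical device-side mirror of one host completion queue, kept
 * entirely in local registers so fullness can be decided without
 * interrogating the host. */
struct cq_mirror {
    uint16_t head;     /* advanced when the host rings the doorbell   */
    uint16_t tail;     /* advanced when the device posts a completion */
    uint16_t count;    /* completions currently resident in the queue */
    uint16_t capacity; /* storage capacity of the completion queue    */
};

/* The device posts a completion: advance the tail and the count. */
void mirror_on_post(struct cq_mirror *m)
{
    m->tail = (uint16_t)((m->tail + 1u) % m->capacity);
    m->count++;
}

/* The host's doorbell write reports a new head value; the distance
 * from the old head tells the device how many entries were consumed. */
void mirror_on_doorbell(struct cq_mirror *m, uint16_t new_head)
{
    uint16_t consumed =
        (uint16_t)((new_head + m->capacity - m->head) % m->capacity);
    m->head = new_head;
    m->count = (uint16_t)(m->count - consumed);
}

/* Full when the resident count has reached the queue's capacity. */
bool mirror_full(const struct cq_mirror *m)
{
    return m->count >= m->capacity;
}
```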
The hardware logic circuitry can be configured to, subsequent to storing the first completion in the register, perform a second determination whether the completion queue is full. The hardware logic circuitry can be configured to transfer the first completion from the register to the completion queue in response to the second determination being that the completion queue is not full. The hardware logic circuitry can be configured to store the first completion in the register in response to the second determination being that the completion queue is full. The hardware logic circuitry can be configured to retrieve the first completion from the first one of the submission queues in response to signaling indicative of the first completion being transferred to the first one of the submission queues.
The hardware logic circuitry can be configured to retrieve a second completion from a second one of the submission queues and perform a determination whether the completion queue is full. The hardware logic circuitry can be configured to store the second completion in the register in response to determining that the completion queue is full and transfer the second completion from the register to the completion queue in response to determining that the completion queue is not full.
In some embodiments, a first completion can be retrieved from a submission queue of the storage system 116 in response to a first doorbell signal. Whether a completion queue of one of the host systems 104 can receive the first completion can be determined. The first completion can be transferred to a FIFO register of the storage system 116 in response to determining that the completion queue cannot receive the first completion. Subsequent to transferring the first completion to the FIFO register and determining whether the completion queue can receive the first completion, the first completion can be transferred from the FIFO register to the completion queue in response to determining that the completion queue can receive the first completion. Subsequent to the first doorbell signal and responsive to a second doorbell signal, a second completion can be retrieved from the submission queue. Whether the completion queue can receive the second completion can be determined and, responsive to determining that the completion queue cannot receive the second completion, the second completion can be transferred to the FIFO register. The first completion can be transferred to a first allocation of the FIFO register and the second completion can be transferred to a second allocation of the FIFO register in response to determining that the completion queue cannot receive the first completion and the second completion. Subsequent to transferring the first completion to the first allocation and the second completion to the second allocation, whether the completion queue can receive the first completion can be determined. The first completion can be transferred from the FIFO register to the completion queue in response to determining that the completion queue can receive the first completion. Subsequently, whether the completion queue can receive the second completion can be determined. The second completion can be transferred from the FIFO register to the completion queue in response to determining that the completion queue can receive the second completion. The second completion can be returned to the FIFO register in response to determining that the completion queue cannot receive the second completion. Subsequent to returning the second completion to the FIFO register, whether the completion queue can receive the second completion can be determined. The second completion can be transferred from the FIFO register to the completion queue in response to determining that the completion queue can receive the second completion. The second completion can be returned to the FIFO register in response to determining that the completion queue cannot receive the second completion.
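To make the ordering guarantee in this walkthrough concrete, here is a self-contained C simulation (illustrative names throughout): both completions are parked because the completion queue is full, and successive doorbells redeliver them strictly in arrival order.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdio.h>

/* Toy completion queue: 'cq_space' is how many more entries fit. */
static unsigned cq_space;

static bool deliver(int completion)
{
    if (cq_space == 0)
        return false;                 /* completion queue is full */
    cq_space--;
    printf("delivered completion %d\n", completion);
    return true;
}

int main(void)
{
    int fifo[4];                      /* retry FIFO, oldest first */
    unsigned n = 0, i = 0;

    cq_space = 0;                     /* queue starts full */

    /* Initial attempts fail; both completions are parked in order. */
    if (!deliver(1)) fifo[n++] = 1;
    if (!deliver(2)) fifo[n++] = 2;
    assert(n == 2);

    /* Doorbell: the host consumed one entry, freeing one slot. */
    cq_space = 1;
    while (i < n && deliver(fifo[i])) /* drain in arrival order */
        i++;
    assert(i == 1);                   /* only completion 1 fit */

    /* A second doorbell frees another slot; completion 2 follows. */
    cq_space = 1;
    while (i < n && deliver(fifo[i]))
        i++;
    assert(i == 2);                   /* order preserved: 1 then 2 */
    return 0;
}
```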
The system controller 202 can include an interface controller 208, which is communicatively coupled to one or more of port 206-1, . . . , port 206-N. The port 206-1, . . . , port 206-N can be referred to collectively as the ports 206 and can be analogous to the ports 106 illustrated by
The interface controller 208 can be coupled to the ports 206 via respective command paths (e.g., command path 203-1, . . . , command path 203-N). The command path 203-1, . . . , the command path 203-N can be referred to collectively as the command paths 203. The command paths 203 can include physical paths (e.g., a wire or wires) that can be configured to pass physical functions and/or VFs between a respective one of the ports 206 and the interface controller 208. The command paths 203 can be configured to pass physical functions and/or VFs between a respective one of the ports 206 and the interface controller 208 in accordance with the NVMe standard. For example, in an SR-IOV deployment, the interface controller 208 can serve as multiple controllers (e.g., multiple NVMe controllers) for respective physical functions and/or each VF, such that the interface controller 208 provides multiple controller operations.
The interface controller 208 can be coupled to a data structure 210. As used herein, a “data structure” refers to a specialized format for organizing and/or storing data, which can be organized in rows and columns; however, embodiments of the present disclosure are not so limited. Examples of data structures include arrays, files, records, tables, trees, etc.
The system controller 202 can include direct memory access (DMA) components that can allow the system controller 202 to access main memory of a computing system (e.g., DRAM, SRAM, etc. of the host systems 104 of the computing system 100 illustrated by
The system controller 202 can include a completion management component 213. The completion management component 213 can be analogous to the completion management component 113 illustrated by
The firmware interface 350 can include submission queues 360-1 and 360-2 (referred to collectively as the submission queues 360). The submission queue 360-1 can be associated with a first port of the firmware interface 350 (e.g., the port 106-1). The first port can be associated with a first function of the storage system. The submission queue 360-2 can be associated with a second port of the firmware interface 350 (e.g., the port 106-N). The second port can be associated with a second function of the storage system. Each of the submission queues 360 can be coupled to a respective multiplexer. For example, a multiplexer 359-1 can be coupled to the submission queue 360-1 and a multiplexer 359-2 can be coupled to the submission queue 360-2.
The firmware interface 350 can include lookup circuitry 358 coupled to the retry FIFO 356 and configured to determine whether a completion queue of a host (e.g., one of the host systems 104) to which a completion is to be communicated is full. As described herein, a head pointer and a tail pointer can be used to determine whether the completion queue is full. The firmware interface 350 can include a register 362 (SCQ Tail Array) configured to store data values indicative of a tail pointer associated with the completion queue and a register 364 (SCQ Head Array) configured to store data values indicative of a head pointer associated with the completion queue. The firmware interface 350 can include a register 366 (SCQ Count Array) configured to store data values indicative of a count of completions stored in the completion queue. The firmware interface 350 can include circuitry 368 coupled to the registers 362, 364, and 366 and configured to update the tail pointer, the head pointer, and the count associated with the completion queue. The circuitry 368 can be configured to update the tail pointer, the head pointer, and the count associated with the completion queue in response to a doorbell signal 367 from the completion queue (Host CQ Doorbell Write). The circuitry 368 can be configured to perform a hardware write to the registers 362, 364, and 366 to update the tail pointer, the head pointer, and the count associated with the completion queue, respectively. The circuitry 368 can be configured to issue firmware interrupts. The lookup circuitry 358 can be configured to perform hardware reads of the registers 362, 364, and 366 to determine whether the completion queue is full. The lookup circuitry 358 can be configured to provide a signal 357 indicative of whether the completion queue is full (CplQ Full) to the retry FIFO 356. The firmware interface 350 can include a register 369 (SCQ Config Array) configured to store data indicative of a configuration of the retry FIFO 356 (e.g., an order in which completions were received by the retry FIFO 356, an order in which to attempt communication of completions from the retry FIFO 356). The firmware interface 350 can include a register 355 configured to store data values indicative of a head pointer associated with the submission queues 360.
In response to the signal 357 being indicative of the completion queue being full, the completion can remain in the retry FIFO 356. However, the completion can be moved to a different allocation of the retry FIFO 356. In response to the signal 357 being indicative of the completion queue not being full, the retry FIFO 356 can be configured to transfer completions to TLP generation circuitry 370. The TLP generation circuitry 370 can be configured to transfer a completion TLP to the submission queues 360.
The TLP generation circuitry 370 can be configured, via a communication path 363, to cause a signal 361 to be provided to the multiplexer 359-1 and/or the multiplexer 359-2 to enable a completion TLP to be transferred from the TLP generation circuitry 370 to the submission queue 360-1 and/or the submission queue 360-2.
The retry FIFO 356 can include logic to cycle through the completions pending in the retry FIFO 356 when the doorbell signal 367 is received to prevent head-of-line blocking that can occur if only the head entry in the retry FIFO 356 is processed. In at least one embodiment, the submission queues 360 can be configured to transfer a completion back to the retry FIFO 356 in response to a corresponding one of the submission queues 360 being full (e.g., signal 365 (Port FIFOs full)).
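That cycling behavior can be modeled as a scan over every pending retry entry rather than a strict head-only drain, so a completion parked for a still-full queue does not block completions bound for queues that now have space. The C sketch below is a simplified illustration; the entry layout and identifiers are hypothetical, and entries are assumed to be stored in arrival order so per-queue ordering is preserved.

```c
#include <stdbool.h>
#include <stdint.h>

#define DEPTH 16u

/* Hypothetical retry entry: the host completion queue the parked
 * completion targets, plus the completion payload itself. Entries
 * are assumed to sit in arrival order. */
struct retry_entry {
    uint16_t cq_id;       /* target completion queue */
    uint32_t completion;  /* parked completion       */
    bool     valid;       /* slot in use?            */
};

/* On a doorbell for queue 'cq_id', scan every pending entry rather
 * than only the head of the FIFO: entries bound for other (possibly
 * still-full) queues are skipped instead of blocking entries that
 * can now be delivered. Stopping at the first failed delivery for
 * 'cq_id' preserves per-queue ordering. */
void retry_scan(struct retry_entry fifo[DEPTH], uint16_t cq_id,
                bool (*deliver)(uint16_t cq, uint32_t completion))
{
    for (uint32_t i = 0; i < DEPTH; i++) {
        struct retry_entry *e = &fifo[i];
        if (!e->valid || e->cq_id != cq_id)
            continue;            /* not affected by this doorbell */
        if (!deliver(e->cq_id, e->completion))
            break;               /* queue is full again */
        e->valid = false;        /* delivered; free the slot */
    }
}
```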
At operation 481, the method 480 can include attempting communication of a first completion from a storage system (such as the storage system 116 illustrated by
In some embodiments, the method 480 can include, in response to unsuccessful communication of the second completion from the storage system to the host system, storing the second completion in the register. The method 480 can include attempting communication of the second completion from the register to the host system while the first completion is stored in the register. The first completion can be associated with a first transaction processed by the storage system, and the second completion can be associated with a second transaction processed by the storage system subsequent to the first transaction. The method 480 can include attempting communication of the second completion from the register to the host system subsequent to attempting communication of the first completion from the register to the host system.
The machine can be a personal computer (PC), a tablet PC, a set-top box (STB), a Personal Digital Assistant (PDA), a cellular telephone, a web appliance, a server, a network router, a switch or bridge, or any machine capable of executing a set of instructions (sequential or otherwise) that specify actions to be taken by that machine. Further, while a single machine is illustrated, the term “machine” shall also be taken to include any collection of machines that individually or jointly execute a set (or multiple sets) of instructions to perform any one or more of the methodologies discussed herein.
The example computer system 500 includes a processing device 502, a main memory 504 (e.g., read-only memory (ROM), flash memory, dynamic random access memory (DRAM) such as synchronous DRAM (SDRAM) or Rambus DRAM (RDRAM), etc.), a static memory 506 (e.g., flash memory, static random access memory (SRAM), etc.), and a data storage system 518, which communicate with each other via a bus 530.
The processing device 502 represents one or more general-purpose processing devices such as a microprocessor, a central processing unit, or the like. More particularly, the processing device can be a complex instruction set computing (CISC) microprocessor, reduced instruction set computing (RISC) microprocessor, very long instruction word (VLIW) microprocessor, or a processor implementing other instruction sets, or processors implementing a combination of instruction sets. The processing device 502 can also be one or more special-purpose processing devices such as an application specific integrated circuit (ASIC), a field programmable gate array (FPGA), a digital signal processor (DSP), network processor, or the like. The processing device 502 is configured to execute instructions 526 for performing the operations and steps discussed herein. The computer system 500 can further include a network interface device 508 to communicate over the network 520.
The data storage system 518 can include a machine-readable storage medium 524 (also known as a computer-readable medium) on which is stored one or more sets of instructions 526 or software embodying any one or more of the methodologies or functions described herein. The instructions 526 can also reside, completely or at least partially, within the main memory 504 and/or within the processing device 502 during execution thereof by the computer system 500, the main memory 504 and the processing device 502 also constituting machine-readable storage media. The machine-readable storage medium 524, data storage system 518, and/or main memory 504 can correspond to the storage system 116 of
In one embodiment, the instructions 526 include instructions to implement functionality corresponding to a memory interface (e.g., the completion management component 113 of
Some portions of the preceding detailed descriptions have been presented in terms of algorithms and symbolic representations of operations on data bits within a computer memory. These algorithmic descriptions and representations are the ways used by those skilled in the data processing arts to most effectively convey the substance of their work to others skilled in the art. An algorithm is here, and generally, conceived to be a self-consistent sequence of operations leading to a desired result. The operations are those requiring physical manipulations of physical quantities. Usually, though not necessarily, these quantities take the form of electrical or magnetic signals capable of being stored, combined, compared, and otherwise manipulated. It has proven convenient at times, principally for reasons of common usage, to refer to these signals as bits, values, elements, symbols, characters, terms, numbers, or the like.
It should be borne in mind, however, that all of these and similar terms are to be associated with the appropriate physical quantities and are merely convenient labels applied to these quantities. The present disclosure can refer to the action and processes of a computer system, or similar electronic computing device, that manipulates and transforms data represented as physical (electronic) quantities within the computer system's registers and memories into other data similarly represented as physical quantities within the computer system memories or registers or other such information storage systems.
The present disclosure also relates to an apparatus for performing the operations herein. This apparatus can be specially constructed for the intended purposes, or it can include a general purpose computer selectively activated or reconfigured by a computer program stored in the computer. Such a computer program can be stored in a computer readable storage medium, such as, but not limited to, any type of disk including floppy disks, optical disks, CD-ROMs, and magneto-optical disks, read-only memories (ROMs), random access memories (RAMs), EPROMs, EEPROMs, magnetic or optical cards, or any type of media suitable for storing electronic instructions, each coupled to a computer system bus.
The algorithms and displays presented herein are not inherently related to any particular computer or other apparatus. Various general purpose systems can be used with programs in accordance with the teachings herein, or it can prove convenient to construct a more specialized apparatus to perform the method. The structure for a variety of these systems will appear as set forth in the description below. In addition, the present disclosure is not described with reference to any particular programming language. It will be appreciated that a variety of programming languages can be used to implement the teachings of the disclosure as described herein.
The present disclosure can be provided as a computer program product, or software, that can include a machine-readable medium having stored thereon instructions, which can be used to program a computer system (or other electronic devices) to perform a process according to the present disclosure. A machine-readable medium includes any mechanism for storing information in a form readable by a machine (e.g., a computer). In some embodiments, a machine-readable (e.g., computer-readable) medium includes a machine (e.g., a computer) readable storage medium such as a read only memory (“ROM”), random access memory (“RAM”), magnetic disk storage media, optical storage media, flash memory devices, etc.
In the foregoing specification, embodiments of the disclosure have been described with reference to specific example embodiments thereof. It will be evident that various modifications can be made thereto without departing from the broader spirit and scope of embodiments of the disclosure as set forth in the following claims. The specification and drawings are, accordingly, to be regarded in an illustrative sense rather than a restrictive sense.