System and method for improved storage access in multi-core system

Information

  • Patent Grant
  • Patent Number
    10,979,503
  • Date Filed
    Monday, April 1, 2019
  • Date Issued
    Tuesday, April 13, 2021
Abstract
A system and method for improving multi-core processor access to storages, the method including: assigning a unique memory space within a memory to each of a plurality of processor cores; initiating a shared queue pair (QP), comprising a shared send queue and a shared receive queue, between the plurality of processor cores and at least a storage, wherein the shared QP is accessible by the plurality of processor cores; sending an instruction on the shared send queue from a first core of the plurality of processor cores to the storage, the instruction comprising an interrupt destination on a memory space assigned to the first core; and receiving an interrupt at the interrupt destination from the storage in response to the instruction, wherein the interrupt is generated for the first core.
Description
TECHNICAL FIELD

The present disclosure relates generally to storage access and particularly to multi-core client devices accessing local and remote storage devices.


BACKGROUND

Typically, in client devices having multi-core processors, each core generates requests and receives responses, or interrupts. These requests and interrupts can be local or remote. Multiple send-receive queue pairs and completion queues are generated between each core and each of potentially many storage devices to which the client connects. While this overhead may not present a problem for local storage, network resources are usually more limited for remote access, so reducing resource use is advantageous in many cases.


It would therefore be advantageous to provide a solution to allow multi-core client devices to access local and remote storage devices while utilizing fewer resources.


SUMMARY

A summary of several example embodiments of the disclosure follows. This summary is provided for the convenience of the reader to provide a basic understanding of such embodiments and does not wholly define the breadth of the disclosure. This summary is not an extensive overview of all contemplated embodiments, and is intended to neither identify key or critical elements of all embodiments nor to delineate the scope of any or all aspects. Its sole purpose is to present some concepts of one or more embodiments in a simplified form as a prelude to the more detailed description that is presented later. For convenience, the term “certain embodiments” may be used herein to refer to a single embodiment or multiple embodiments of the disclosure.


Certain embodiments disclosed herein include a method for improving multi-core processor access to storages, the method including: assigning a unique memory space within a memory to each of a plurality of processor cores; initiating a shared queue pair (QP), comprising a shared send queue and a shared receive queue, between the plurality of processor cores and at least a storage, wherein the shared QP is accessible by the plurality of processor cores; sending an instruction on the shared send queue from a first core of the plurality of processor cores to the storage, the instruction comprising an interrupt destination on a memory space assigned to the first core; and receiving an interrupt at the interrupt destination from the storage in response to the instruction, wherein the interrupt is generated for the first core.


Certain embodiments disclosed herein also include a non-transitory computer readable medium having stored thereon instructions for causing a processing circuitry to perform a process, the process including: assigning a unique memory space within a memory to each of a plurality of processor cores; initiating a shared queue pair (QP), comprising a shared send queue and a shared receive queue, between the plurality of processor cores and at least a storage, wherein the shared QP is accessible by the plurality of processor cores; sending an instruction on the shared send queue from a first core of the plurality of processor cores to the storage, the instruction comprising an interrupt destination on a memory space assigned to the first core; and receiving an interrupt at the interrupt destination from the storage in response to the instruction, wherein the interrupt is generated for the first core.


Certain embodiments disclosed herein also include a system for improving multi-core processor access to storages, including: a processing circuitry; and a memory, the memory containing instructions that, when executed by the processing circuitry, configure the system to: assign a unique memory space within a memory to each of a plurality of processor cores; initiate a shared queue pair (QP), comprising a shared send queue and a shared receive queue, between the plurality of processor cores and at least a storage, wherein the shared QP is accessible by the plurality of processor cores; send an instruction on the shared send queue from a first core of the plurality of processor cores to the storage, the instruction comprising an interrupt destination on a memory space assigned to the first core; and receive an interrupt at the interrupt destination from the storage in response to the instruction, wherein the interrupt is generated for the first core.





BRIEF DESCRIPTION OF THE DRAWINGS

The subject matter disclosed herein is particularly pointed out and distinctly claimed in the claims at the conclusion of the specification. The foregoing and other objects, features, and advantages of the disclosed embodiments will be apparent from the following detailed description taken in conjunction with the accompanying drawings.



FIG. 1 is a schematic illustration of a multi-core client device accessing local and remote storage devices according to an embodiment.



FIG. 2 is a schematic illustration of a plurality of processor cores utilizing a shared queue pair for accessing a storage according to an embodiment.



FIG. 3 is a flowchart of a computerized method for improved multi-core access to storage devices according to an embodiment.





DETAILED DESCRIPTION

It is important to note that the embodiments disclosed herein are only examples of the many advantageous uses of the innovative teachings herein. In general, statements made in the specification of the present application do not necessarily limit any of the various claimed embodiments. Moreover, some statements may apply to some inventive features but not to others. In general, unless otherwise indicated, singular elements may be in plural and vice versa with no loss of generality. In the drawings, like numerals refer to like parts throughout the several views.


A system with a multi-core processor that accesses a network accessible storage device is disclosed. Each core is associated with a unique memory space. A core sends an instruction on a shared outgoing queue between the cores and the storage device, where the instruction includes an interrupt destination in the memory space assigned to that core. The shared queue is accessible by two or more of the cores. A write is received at the interrupt destination from the storage device in response to executing the instruction. The write at the interrupt destination causes an interrupt to be generated for the core that sent the instruction to the storage device. By sharing a queue between a plurality of cores, computational overhead and network bandwidth consumption can be reduced.
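The idea can be pictured with a short sketch. The following C fragment is illustrative only and not taken from the patent; the type and field names (shared_sq_entry_t, irq_dest, and so on) are hypothetical.

    #include <stdint.h>

    /* One entry on the shared send queue. Because every command carries the
     * posting core's own interrupt destination, a single queue can serve all
     * cores while still steering each completion interrupt to the right core. */
    typedef struct {
        uint8_t  opcode;     /* e.g., read or write                          */
        uint64_t lba;        /* block address on the storage device          */
        void    *buf;        /* payload for a write, destination for a read  */
        uint32_t len;        /* transfer length in bytes                     */
        uint64_t irq_dest;   /* address inside the posting core's unique
                              * memory space; the storage writes here (e.g.,
                              * via RDMA) to raise that core's interrupt     */
    } shared_sq_entry_t;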



FIG. 1 is a schematic illustration of a multi-core client device 100 accessing local and remote storage devices according to an embodiment. The client 100 includes at least one processing circuitry (or processor) 110, for example, a central processing unit (CPU). In an embodiment, the processor 110 includes a plurality of cores 110-1 through 110-4. The processing circuitry 110 may be, or be a component of, a larger processing unit implemented with one or more processors. The one or more processors may be implemented with any combination of general-purpose microprocessors, microcontrollers, digital signal processors (DSPs), field programmable gate arrays (FPGAs), programmable logic devices (PLDs), controllers, state machines, gated logic, discrete hardware components, dedicated hardware finite state machines, or any other suitable entities that can perform calculations or other manipulations of information.


The processing circuitry 110 is coupled via a bus 105 to a memory 120. The memory 120 includes a plurality of memory portions 120-1 through 120-4, each corresponding to a core of the plurality of cores (e.g., memory portion 120-1 corresponds to core 110-1, and the like). In certain embodiments, the memory 120 may include a memory portion that contains instructions that, when executed by the processing circuitry 110, perform the method described in more detail herein. The memory 120 may be further used as a working scratch pad for the processing circuitry 110, a temporary storage, and others, as the case may be. The memory 120 may be a volatile memory such as, but not limited to, random access memory (RAM), or non-volatile memory (NVM) such as, but not limited to, flash memory.
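As a toy illustration of the non-overlapping per-core memory portions, the sketch below carves a flat buffer into equal slices, one per core. The names and sizes are assumptions for illustration, not taken from the patent.

    #include <assert.h>
    #include <stddef.h>
    #include <stdint.h>

    #define NCORES     4        /* cores 110-1 through 110-4 in FIG. 1 */
    #define SLICE_SIZE 4096     /* arbitrary per-core portion size     */

    static uint8_t mem_pool[NCORES * SLICE_SIZE];   /* stands in for memory 120 */

    /* Return the base of the unique, non-overlapping memory portion
     * (120-1 through 120-4) assigned to `core`. */
    static uint8_t *core_mem_space(unsigned core)
    {
        assert(core < NCORES);
        return mem_pool + (size_t)core * SLICE_SIZE;
    }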


The processing circuitry 110 may be further coupled to a plurality of local storage devices 130-1 through 130-J, where ‘J’ is an integer equal to or greater than 1. In some embodiments, the client device 100 may not include a local storage device. A local storage device 130 may be, for example, a solid state disk (SSD), a magnetic hard disk drive (HDD), and the like.


The processing circuitry 110 may be further coupled with a network interface controller (NIC) 140. The NIC 140 is configured to provide access to remote storage devices through a network 150. In an embodiment, the network 150 is configured to provide connectivity of various sorts, as may be necessary, including but not limited to wired or wireless connectivity, such as a local area network (LAN), a wide area network (WAN), a metro area network (MAN), the worldwide web (WWW), the Internet, cellular connectivity, and any combination thereof. In certain embodiments, the NIC 140 includes a NIC processor 140-1 and a NIC memory 140-2. A NIC having an onboard processor to offload work from the CPU cores of the client device is discussed in more detail, for example, in U.S. Non-Provisional patent application Ser. No. 14/934,830 and U.S. Provisional Patent Application No. 62/629,825, both assigned to the common assignee and hereby incorporated by reference.


The network 150 may provide connectivity with one or more remote storage devices, such as remote storage 160. A remote storage 160 may be accessible through a remote storage server (not shown), which can be configured to host one or more remote storage devices thereon.


The processing circuitry 110 and/or the memory 120 may also include machine-readable media for storing software. Software shall be construed broadly to mean any type of instructions, whether referred to as software, firmware, middleware, microcode, hardware description language, or otherwise. Instructions may include code (e.g., in source code format, binary code format, executable code format, or any other suitable format of code). The instructions, when executed by the one or more processors, cause the processing system to perform the various functions described in further detail herein.


Typically, each core accesses a storage through a queue pair (QP) and a completion queue (CQ) that are uniquely established between that core (e.g., core 110-1) and the storage device (e.g., local storage device 130-1). However, it may be beneficial to reduce computational overhead and, in the case of a networked storage device, network bandwidth consumption, by consolidating multiple QPs and CQs into a shared QP and a shared CQ.



FIG. 2 is a schematic illustration of a plurality of processor cores 240-1 to 240-N, where N is an integer equal to or greater than 1, utilizing a shared queue pair and completion queue for accessing a storage according to an embodiment. The QP includes a send queue 210 and a receive queue 220. The send queue 210 is for transmitting instructions from the cores to the storage device 230, and the receive queue 220 is for receiving responses from the storage device 230 to the cores based on the transmitted instructions. A completion queue 250 may be generated for receiving indications that a request has been completed and that responses have been received.
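One plausible in-memory layout for the shared QP and CQ of FIG. 2 is sketched below as plain ring buffers. This is a schematic with invented names and sizes, not the patent's actual data structures.

    #include <stdint.h>

    #define QUEUE_DEPTH 256

    /* A fixed-depth ring of opaque 64-byte entries. */
    typedef struct {
        uint8_t  entries[QUEUE_DEPTH][64];
        uint32_t head;          /* consumer index */
        uint32_t tail;          /* producer index */
    } ring_t;

    /* The shared queue pair of FIG. 2, plus its completion queue. */
    typedef struct {
        ring_t send_q;          /* 210: cores -> storage device 230 */
        ring_t recv_q;          /* 220: storage device 230 -> cores */
        ring_t cq;              /* 250: completion indications      */
    } shared_qp_t;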


In one embodiment, the QP is shared by a plurality of cores 240-1 through 240-N. In some embodiments, a shared QP may be established for each storage device. In certain embodiments, a first group of cores may share a first QP and a second group of cores may share a second QP, where each QP is directed at a different storage.


In the shown embodiment, a first core 240-1 initiates an instruction for the storage device 230. The instruction may include a payload, such as a write instruction for having one or more data blocks written to the storage device 230. The instruction may further include an interrupt destination. The interrupt destination is in a memory space assigned to the first core 240-1; typically, the memory spaces of different cores should not overlap. The interrupt destination indicates to the storage device 230 that its response should be returned to a specific location in the memory assigned to the first core 240-1, where it will generate an interrupt for that core. Because each response is written to a per-core interrupt destination, the cores are able to distinguish which interrupts from the storage device are designated for which core. This eliminates the need for each core to have a dedicated QP and CQ initiated between itself and the storage device 230.
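Because several cores post to the same send queue, some producer-side coordination is implied. The sketch below serializes posts with a mutex; the locking scheme is our assumption (the patent does not prescribe one), and the types abbreviate the hypothetical shapes from the earlier sketches.

    #include <pthread.h>
    #include <stdint.h>

    #define QUEUE_DEPTH 256

    typedef struct { uint64_t lba; uint64_t irq_dest; } shared_sq_entry_t;

    typedef struct {
        shared_sq_entry_t entries[QUEUE_DEPTH];
        uint32_t        tail;
        pthread_mutex_t lock;   /* serializes the multiple producing cores */
    } shared_send_q_t;

    /* Post one command on the shared send queue. The caller passes the
     * interrupt destination derived from its own memory space, so the
     * completion later finds its way back to the right core. */
    static void post_cmd(shared_send_q_t *q, uint64_t lba, uint64_t core_irq_dest)
    {
        pthread_mutex_lock(&q->lock);
        shared_sq_entry_t *e = &q->entries[q->tail++ % QUEUE_DEPTH];
        e->lba      = lba;
        e->irq_dest = core_irq_dest;   /* per-core address, per the text above */
        pthread_mutex_unlock(&q->lock);
    }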


In certain embodiments, the storage device 230 may be a local storage device; in other embodiments, it may be a remote storage device; and in yet others, a combination of local and remote storage devices may be utilized. For example, when using a remote storage device, either the core or the NIC may determine a memory space associated with the core and generate an instruction for the send queue 210 specifying that an interrupt should be written to the associated memory space. In certain embodiments, the CQ is optional, and the interrupt is a sufficient indication of completion.



FIG. 3 is an example flowchart 300 of a computerized method for improved multi-core access to storage devices, implemented in accordance with an embodiment.


At S310, a plurality of processor cores in a multi-core client device are each assigned a memory space in the client device. In some embodiments, the memory space may be assigned dynamically, and the assignment may be limited to a time window. For example, a memory space may be mapped to a remote direct memory access (RDMA) key corresponding to a message signaled interrupt (e.g., MSI-X) address for the interrupt of a specific core. MSI-X is a protocol for in-band signaling of interrupts, implemented according to the PCI protocols.
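A possible shape for the S310 mapping is a small per-core table tying an RDMA key to an MSI-X style address/data pair. The field names and constants are illustrative assumptions (0xFEE00000 is the conventional x86 MSI address window, used here only as an example).

    #include <stdint.h>

    #define NCORES 4

    /* Hypothetical per-core interrupt mapping for S310: an RDMA key that
     * grants remote write access to the core's memory space, plus the
     * MSI-X address/data pair a write must use to raise that core's vector. */
    typedef struct {
        uint32_t rkey;          /* RDMA key covering the core's memory space */
        uint64_t msix_addr;     /* message address (x86 MSI window)          */
        uint32_t msix_data;     /* message data: the core's vector           */
    } core_irq_map_t;

    static core_irq_map_t irq_map[NCORES];

    static void assign_irq_maps(void)
    {
        for (unsigned c = 0; c < NCORES; c++) {
            irq_map[c].rkey      = 0x1000u + c;               /* placeholder key */
            irq_map[c].msix_addr = 0xFEE00000u | (c << 12);   /* per-core dest   */
            irq_map[c].msix_data = 0x40u + c;                 /* per-core vector */
        }
    }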


At S320, a shared queue pair (QP) and completion queue (CQ) are initiated between a first group of the plurality of cores and a remote storage device. In some embodiments, the remote storage can be a virtual volume, which includes multiple physical remote storage devices; the virtual volume may also combine one or more local storage devices with one or more remote storage devices. In such embodiments, a shared QP may be initiated between the plurality of cores and each storage device, or between any combination of the plurality of cores and any combination of storage devices. The shared QP allows the first group of cores to share a bandwidth resource between themselves and the storage device. By utilizing fewer QPs than one per core, the total latency is reduced and computational overhead may also decrease as, for instance, QP and CQ caching typically improves. A QP includes a send queue and a receive queue, as discussed in more detail above.
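S320 can be read as building one shared QP (and CQ) per pairing of a core group with a storage device. The sketch below wires up only that bookkeeping; the group sizes and names are invented for illustration.

    #define NCORES  8
    #define NGROUPS 2               /* e.g., two groups of four cores each */

    typedef struct { unsigned head, tail; } ring_t;     /* queue stub */
    typedef struct { ring_t send_q, recv_q, cq; } shared_qp_t;

    static shared_qp_t group_qp[NGROUPS];   /* one shared QP and CQ per group */

    /* Map a core to the group whose shared QP it posts on. */
    static unsigned group_of(unsigned core) { return core / (NCORES / NGROUPS); }

    static shared_qp_t *qp_for_core(unsigned core)
    {
        return &group_qp[group_of(core)];
    }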


At S330, a first core generates an instruction for the storage device in the send queue. The instruction includes a data block address and an interrupt destination, where the interrupt destination is in a memory space associated with the first core. In this way, interrupts for a plurality of cores may be sent on a single queue pair: the mapping of each core to a unique memory space, into which the relevant memory address is written to generate an interrupt for a specific core, is what allows the interrupts to be differentiated. Thus, each core receives the relevant interrupt notification. The instruction may be, for example, a ‘write’ instruction, which includes one or more blocks of data and the address to which the data should be written. As another example, the instruction may be a ‘read’ instruction, including an address from which to read a data block and, in some embodiments, a memory address into which the data block should be read.
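Putting S330 together, a hypothetical helper might fill a send-queue entry from the opcode, block address, and the posting core's interrupt destination. Again, all names here are ours, not the patent's; core_irq_dest would come from the per-core mapping established at S310.

    #include <stdint.h>

    typedef enum { OP_READ, OP_WRITE } op_t;

    typedef struct {
        op_t     op;
        uint64_t lba;           /* data block address on the storage      */
        void    *buf;           /* data to write, or buffer to read into  */
        uint32_t len;           /* transfer length in bytes               */
        uint64_t irq_dest;      /* inside the posting core's memory space */
    } sq_entry_t;

    /* Build an S330-style instruction for the posting core. */
    static sq_entry_t make_cmd(op_t op, uint64_t lba, void *buf, uint32_t len,
                               uint64_t core_irq_dest)
    {
        sq_entry_t e = { op, lba, buf, len, core_irq_dest };
        return e;
    }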


At S340, an interrupt is generated and sent to the client device using an RDMA write. At S350, a check is performed to determine if additional instructions should be executed. If so, execution continues at S330, otherwise execution terminates.
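The completion path of S340 can be sketched as a single remote write to the interrupt destination carried by the command; on real hardware this would be an RDMA write targeting an MSI-X address, while a volatile store stands in for it here. Both functions are illustrative assumptions, not the patent's implementation.

    #include <stdint.h>

    /* Storage/NIC side: completing a command is one write of a nonzero
     * token to the interrupt destination that the command carried. */
    static void complete_cmd(volatile uint64_t *irq_dest, uint64_t token)
    {
        *irq_dest = token;      /* raises "the interrupt" for exactly one core */
    }

    /* Core side: each core watches only its own destination, so no core
     * consumes another core's completion even though the QP is shared. */
    static uint64_t wait_for_interrupt(volatile uint64_t *my_irq_dest)
    {
        while (*my_irq_dest == 0)
            ;                   /* spin; a real MSI-X vector would sleep/wake */
        return *my_irq_dest;
    }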


The various embodiments disclosed herein can be implemented as hardware, firmware, software, or any combination thereof. Moreover, the software is preferably implemented as an application program tangibly embodied on a program storage unit or computer readable medium consisting of parts, or of certain devices and/or a combination of devices. The application program may be uploaded to, and executed by, a machine comprising any suitable architecture. Preferably, the machine is implemented on a computer platform having hardware such as one or more central processing units (“CPUs”), a memory, and input/output interfaces. The computer platform may also include an operating system and microinstruction code. The various processes and functions described herein may be either part of the microinstruction code or part of the application program, or any combination thereof, which may be executed by a CPU, whether or not such a computer or processor is explicitly shown. In addition, various other peripheral units may be connected to the computer platform such as an additional data storage unit and a printing unit. Furthermore, a non-transitory computer readable medium is any computer readable medium except for a transitory propagating signal.


As used herein, the phrase “at least one of” followed by a listing of items means that any of the listed items can be utilized individually, or any combination of two or more of the listed items can be utilized. For example, if a system is described as including “at least one of A, B, and C,” the system can include A alone; B alone; C alone; A and B in combination; B and C in combination; A and C in combination; or A, B, and C in combination.


All examples and conditional language recited herein are intended for pedagogical purposes to aid the reader in understanding the principles of the disclosed embodiment and the concepts contributed by the inventor to furthering the art, and are to be construed as being without limitation to such specifically recited examples and conditions. Moreover, all statements herein reciting principles, aspects, and embodiments of the disclosed embodiments, as well as specific examples thereof, are intended to encompass both structural and functional equivalents thereof. Additionally, it is intended that such equivalents include both currently known equivalents as well as equivalents developed in the future, i.e., any elements developed that perform the same function, regardless of structure.

Claims
  • 1. A method for improving multi-core processor access to storages, the method comprising: assigning a unique memory space within a memory to each of a plurality of processor cores; initiating a shared queue pair (QP) between the plurality of processor cores and at least a storage, the shared QP comprising a shared send queue and a shared receive queue, wherein the shared QP is accessible by the plurality of processor cores; sending an instruction on the shared send queue from a first core of the plurality of processor cores to the storage, the instruction comprising an interrupt destination on a memory space assigned to the first core; and receiving an interrupt at the interrupt destination from the storage in response to the instruction, wherein the interrupt is generated for the first core.
  • 2. The method of claim 1, wherein the storage is any of a local storage and a remote storage.
  • 3. The method of claim 2, wherein the storage is a network storage accessible over a remote direct memory access (RDMA) network.
  • 4. The method of claim 3, wherein the shared QP is established between a network interface controller (NIC) connected to the network storage and the plurality of cores, the shared QP comprising the shared send queue and the shared receive queue.
  • 5. The method of claim 1, wherein the shared QP is a first QP, wherein the first QP is established between a first group of processing cores from among the plurality of processor cores and a first group of storages, wherein a second QP is established for a second group of processing cores from among the plurality of cores and a second group of storages.
  • 6. The method of claim 1, further comprising: establishing a shared completion queue (CQ) between the plurality of cores and the storage, wherein the shared CQ is configured to send and receive indications that a request has been completed and that responses to the request have been received.
  • 7. The method of claim 1, wherein the instruction is any of a write instruction and a read instruction.
  • 8. The method of claim 1, wherein the interrupt destination further includes a data block address within the memory space assigned to the first core.
  • 9. The method of claim 1, wherein each unique memory space is assigned to one of the plurality of processor cores dynamically.
  • 10. A non-transitory computer readable medium having stored thereon instructions for causing a processing circuitry to perform a process, the process comprising: assigning a unique memory space within a memory to each of a plurality of processor cores; initiating a shared queue pair (QP) between the plurality of processor cores and at least a storage, the shared QP comprising a shared send queue and a shared receive queue, wherein the shared QP is accessible by the plurality of processor cores; sending an instruction on the shared send queue from a first core of the plurality of processor cores to the storage, the instruction comprising an interrupt destination on a memory space assigned to the first core; and receiving an interrupt at the interrupt destination from the storage in response to the instruction, wherein the interrupt is generated for the first core.
  • 11. A system for improving multi-core processor access to storages, comprising: a processing circuitry; and a memory, the memory containing instructions that, when executed by the processing circuitry, configure the system to: assign a unique memory space within a memory to each of a plurality of processor cores; initiate a shared queue pair (QP) between the plurality of processor cores and at least a storage, the shared QP comprising a shared send queue and a shared receive queue, wherein the shared QP is accessible by the plurality of processor cores; send an instruction on the shared send queue from a first core of the plurality of processor cores to the storage, the instruction comprising an interrupt destination on a memory space assigned to the first core; and receive an interrupt at the interrupt destination from the storage in response to the instruction, wherein the interrupt is generated for the first core.
  • 12. The system of claim 11, wherein the storage is any of a local storage and a remote storage.
  • 13. The system of claim 12, wherein the storage is a network storage accessible over a remote direct memory access (RDMA) network.
  • 14. The system of claim 13, wherein the shared QP is established between a network interface controller (NIC) connected to the network storage and the plurality of cores, the shared QP comprising the shared send queue and the shared receive queue.
  • 15. The system of claim 11, wherein the shared QP is a first QP, wherein the first QP is established between a first group of processing cores from among the plurality of processor cores and a first group of storages, wherein a second QP is established for a second group of processing cores from among the plurality of cores and a second group of storages.
  • 16. The system of claim 11, wherein the system is further configured to: establish a shared completion queue (CQ) between the plurality of cores and the storage, wherein the shared CQ is configured to send and receive indications that a request has been completed and that responses to the request have been received.
  • 17. The system of claim 11, wherein the instruction is any of a write instruction and a read instruction.
  • 18. The system of claim 11, wherein the interrupt destination further includes a data block address within the memory space assigned to the first core.
  • 19. The system of claim 11, wherein each unique memory space is assigned to one of the plurality of processor cores dynamically.
CROSS-REFERENCE TO RELATED APPLICATIONS

This application claims the benefit of U.S. Provisional Application No. 62/658,068, filed on Apr. 16, 2018. This application is a continuation-in-part (CIP) of: (a) U.S. patent application Ser. No. 16/282,629, filed on Feb. 22, 2019, which is a continuation of U.S. patent application Ser. No. 14/934,830, filed on Nov. 6, 2015, now U.S. Pat. No. 10,237,347, which claims the benefit of U.S. Provisional Application No. 62/172,265, filed on Jun. 8, 2015; and (b) U.S. patent application Ser. No. 16/270,239, filed on Feb. 7, 2019, which claims the benefit of U.S. Provisional Application No. 62/629,825, filed on Feb. 13, 2018. The Ser. No. 16/270,239 application is a CIP of: (i) U.S. patent application Ser. No. 15/975,379, filed on May 9, 2018, which is a continuation of U.S. patent application Ser. No. 14/726,919, filed on Jun. 1, 2015, now U.S. Pat. No. 9,971,519, which claims the benefit of U.S. Provisional Application No. 62/126,920, filed on Mar. 2, 2015, U.S. Provisional Application No. 62/119,412, filed on Feb. 23, 2015, U.S. Provisional Application No. 62/096,908, filed on Dec. 26, 2014, U.S. Provisional Application No. 62/085,568, filed on Nov. 30, 2014, and U.S. Provisional Application No. 62/030,700, filed on Jul. 30, 2014; (ii) U.S. patent application Ser. No. 15/684,439, filed on Aug. 23, 2017, which claims the benefit of U.S. Provisional Application No. 62/381,011, filed on Aug. 29, 2016; and (iii) the aforementioned U.S. patent application Ser. No. 14/934,830. All of the applications referenced above are herein incorporated by reference.

Related Publications (1)
Number Date Country
20190230161 A1 Jul 2019 US
Provisional Applications (9)
Number Date Country
62658068 Apr 2018 US
62629825 Feb 2018 US
62381011 Aug 2016 US
62172265 Jun 2015 US
62126920 Mar 2015 US
62119412 Feb 2015 US
62096908 Dec 2014 US
62085568 Nov 2014 US
62030700 Jul 2014 US
Continuations (2)
Number Date Country
Parent 14934830 Nov 2015 US
Child 16282629 US
Parent 14726919 Jun 2015 US
Child 15975379 US
Continuation in Parts (5)
Number Date Country
Parent 16282629 Feb 2019 US
Child 16371869 US
Parent 16270239 Feb 2019 US
Child 16282629 US
Parent 15975379 May 2018 US
Child 16270239 US
Parent 15684439 Aug 2017 US
Child 15975379 US
Parent 14934830 US
Child 16270239 US