A distributed storage system may include a plurality of storage devices (e.g., storage arrays) to provide data storage to a plurality of nodes. The plurality of storage devices and the plurality of nodes may be situated in the same physical location, or in one or more physically remote locations. The plurality of nodes may be coupled to the storage devices by a high-speed interconnect, such as a switch fabric.
This Summary is provided to introduce a selection of concepts in a simplified form that are further described below in the Detailed Description. This Summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used to limit the scope of the claimed subject matter.
One aspect may provide a method for receiving one or more input/output (I/O) requests by a storage system having at least one storage cluster. The storage system performs each I/O request with dynamic flow control by determining a latency associated with the one or more received I/O requests during at least one monitoring interval and tracking I/O requests to the storage cluster. If a received I/O request exceeds a choker threshold value of the storage cluster, the I/O request is queued. Otherwise, the received I/O request is performed for the storage cluster.
Another aspect may provide a system including a processor and a memory storing computer program code that when executed on the processor causes the processor to execute an input/output (I/O) request received by a storage system having at least one storage cluster. The system performs each I/O request with dynamic flow control by determining a latency associated with the one or more received I/O requests during at least one monitoring interval and tracking I/O requests to the storage cluster. If a received I/O request exceeds a choker threshold value of the storage cluster, the I/O request is queued. Otherwise, the received I/O request is performed for the storage cluster.
Another aspect may provide a computer program product including a non-transitory computer readable storage medium having computer program code encoded thereon that when executed on a processor of a computer causes the computer to execute an input/output (I/O) request received by a storage system having at least one storage cluster. The computer program product includes computer program code for receiving one or more input/output (I/O) requests. The computer program product includes computer program code for performing each I/O request with dynamic flow control by determining a latency associated with the one or more received I/O requests during at least one monitoring interval and tracking I/O requests to the storage cluster. If a received I/O request exceeds a choker threshold value of the storage cluster, the I/O request is queued. Otherwise, the received I/O request is performed for the storage cluster.
Objects, aspects, features, and advantages of embodiments disclosed herein will become more fully apparent from the following detailed description, the appended claims, and the accompanying drawings in which like reference numerals identify similar or identical elements. Reference numerals that are introduced in the specification in association with a drawing figure may be repeated in one or more subsequent figures without additional description in the specification in order to provide context for other features. For clarity, not every element may be labeled in every figure. The drawings are not necessarily to scale, emphasis instead being placed upon illustrating embodiments, principles, and concepts. The drawings are not meant to limit the scope of the claims included herewith.
Storage system 100 may include at least one source site 102 and at least one target site 112, which may be co-located or geographically separated. Source site 102 may include one or more processors 105, storage application 106, and storage 108. In some embodiments, storage 108 may include one or more storage volumes 1351-135S that operate as active or production volumes. Source site 102 and target site 112 may be in communication with one or more hosts 113 via communication links 111 and 115, respectively.
Hosts 113 may perform input/output (I/O) operations on source-side storage 108 (e.g., read data from and write data to storage 108). In some embodiments, the I/O operations may be intercepted by and controlled by storage application 106. As changes are made to data stored on storage 108 via the I/O operations from hosts 113, or over time as storage system 100 operates, storage application 106 may perform operations to replicate data from source site 102 to target site 112 over communication link 110. In some embodiments, communication link 110 may be a long distance communication link of a storage area network (SAN), such as an Ethernet or Internet (e.g., TCP/IP) link that may employ, for example, the iSCSI protocol. In some embodiments, one or both of source site 102 and/or target site 112 may include one or more internal (e.g., short distance) communication links (shown as communication links 109 and 119), such as an InfiniBand (IB) link or Fibre Channel (FC) link. Communication link 109 may be employed to transfer data between storage volumes 1351-135S of storage 108 and one or both of storage application 106 and processor(s) 105. Communication link 119 may be employed to transfer data between storage volumes 1391-139Z of storage 137 and one or both of replica manager 116 and processor(s) 133.
In illustrative embodiments, target site 112 may include replica manager 116 that manages a plurality of replicas 1181-118N according to a policy 114 (e.g., a replication and/or retention policy). Replicas 118 may be stored in one or more volumes 1391-139Z of storage 137 of target site 112. A replica (or snapshot) may be created from data within storage 108 and transferred to one or more target sites 112 during a data replication cycle that may be performed based on data replication policies (e.g., policy 114) that may define various settings for data recovery operations. A data replication cycle may be asynchronous data replication performed at time-based intervals during operation of storage system 100, or may alternatively be synchronous data replication performed when data is changed on source site 102.
In illustrative embodiments, storage system 100 may include one or more consistency groups. A consistency group 147 may include one or more volumes 135 of source site 102, each associated with a corresponding volume 139 of target site 112. Consistency group 147 may treat source volumes 135 and target volumes 139 as a single logical entity for data replication and migration. Each volume 139 may store one or more associated replicas 118 that reflect the data in the consistency group 147 at a point in time (e.g., when the replica 118 was created). For example, replicas (e.g., snapshots) 118 may be generated for each source volume 135 of consistency group 147 at the same time, and stored on associated ones of target volumes 139. As shown in
Referring to
Referring back to
In some embodiments, payload data 314 may be segmented into one or more payload data segments to be written to storage 108 (e.g., by one or more write operations 153) or read from storage 108 (e.g., by one or more read operations 159). For example, if payload data 314 is 256 KB, payload data 314 may be segmented into sixteen 16 KB payload data segments to be written to storage 108. When I/O request 151 is a write request, processor(s) 105 and/or storage application 106 may then perform one or more corresponding write operations (e.g., write operation 153) to write payload data associated with the one or more data packets (e.g., one or more payload data segments) of I/O request 151 to storage 108. When I/O request 151 is a read request, processor(s) 105 and/or storage application 106 may then read data from storage 108 in one or more packets (e.g., one or more read operations 159) to process I/O request 151 from storage 108.
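As a concrete illustration of the segmentation arithmetic described above, the following minimal Python sketch (with hypothetical helper names, not the storage application's actual code) splits a payload into fixed-size segments:

```python
# Minimal sketch of payload segmentation (hypothetical helper; illustrative only).
# A 256 KB payload split into 16 KB segments yields sixteen segments.

SEGMENT_SIZE = 16 * 1024  # 16 KB per payload data segment


def segment_payload(payload: bytes, segment_size: int = SEGMENT_SIZE) -> list[bytes]:
    """Split payload data into segments of at most segment_size bytes."""
    return [payload[i:i + segment_size]
            for i in range(0, len(payload), segment_size)]


segments = segment_payload(bytes(256 * 1024))  # 256 KB payload
assert len(segments) == 16                     # sixteen 16 KB segments
```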
In illustrative embodiments, source site 102 may send a replica (e.g., replica 155) to target site 112. Similarly to write request 151, replica 155 may include one or more data packets such as shown in
Referring to
Described embodiments provide flow control to clusters 304. Illustrative embodiments may include one or more dynamic bandwidth limiters 3221-322Q, each associated with a given one of clusters 3041-304Q. Each one of dynamic bandwidth limiters 3221-322Q (generally referred to as dynamic bandwidth limiters 322) may include an associated I/O request queue, shown as queues 3241-324Q (generally referred to as queues 324). Although shown as employing a separate dynamic bandwidth limiter 322 for each one of clusters 304, other embodiments may employ other numbers of dynamic bandwidth limiters, for example, a single dynamic bandwidth limiter for multiple ones of clusters 304, as indicated by dashed line 320. In some embodiments, dynamic bandwidth limiters 3221-322Q may, for example, be implemented in storage application 106 or replica manager 116 (
As will be described, a given dynamic bandwidth limiter 322 may receive I/O requests for an associated cluster of storage array 302. Dynamic bandwidth limiter 322 may provide flow control for associated cluster(s) by preventing cluster overloading. For example, when a cluster is overloaded (e.g., processing too many concurrent I/O requests, or processing I/O request(s) that are too large), cluster instability, crashes, and degraded operation of storage system 100 may result. Flow control may be especially beneficial for clusters having strong time constraints, for example where latency of an I/O request must be less than a given latency threshold. However, system load level, and thus latency, strongly depends on a pattern of I/O requests. For example, a pattern of I/O requests having large read requests to a given cluster (e.g., requests to read large amounts of data from the cluster) mixed with small write requests to the same cluster (e.g., requests to write small amounts of data to the cluster) may overload the cluster, while other patterns of I/O requests that have the same overall data bandwidth may not overload the cluster. Flow control can become even more complex in distributed storage systems (e.g., as shown in
Described embodiments may employ dynamic bandwidth limiter 322 to provide autonomous and distributed flow control based upon average and/or peak latency. In some embodiments, the average and/or peak latency may be determined based upon an end-to-end feedback loop. In described embodiments, the end-to-end latency may be defined as an elapsed time for I/O requests to be completed by storage system 100. Some embodiments do not employ coordination or central management for flow control.
Described embodiments increase (and in a preferred embodiment may maximize) data throughput while maintaining latency of I/O requests below a configurable threshold. In some embodiments, a dynamic bandwidth limiter 322 is associated with a given cluster 304. Each dynamic bandwidth limiter 322 prevents concurrent I/O requests for a cluster from exceeding a maximum value. For example, dynamic bandwidth limiter 322 may prevent more than a “choker size” amount of data from being processed concurrently for a cluster. The choker size may be, for example, a threshold number of pages, sectors, or blocks, a threshold number of bytes, or a threshold number of I/O requests that may be processed concurrently for a given cluster.
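As one possible realization of the choker-size check, the following minimal Python sketch (hypothetical names; the choker size is expressed here as a number of pages) admits a request only while the in-flight total stays within the choker size, and otherwise queues it, as discussed further below:

```python
# Minimal sketch of a choker-size admission check (hypothetical names; sizes
# counted in pages). Requests that would push the in-flight total above the
# choker size are queued instead of being started.

from collections import deque


class DynamicBandwidthLimiter:
    def __init__(self, choker_size_pages: int):
        self.choker_size = choker_size_pages   # maximum pages allowed in flight
        self.in_flight = 0                     # pages currently being processed
        self.queue = deque()                   # excessive requests wait here

    def submit(self, request_pages: int) -> bool:
        """Return True if the request may start now, False if it was queued."""
        if self.in_flight + request_pages <= self.choker_size:
            self.in_flight += request_pages
            return True
        self.queue.append(request_pages)
        return False

    def complete(self, request_pages: int) -> None:
        """Called when an in-flight request finishes; frees its pages."""
        self.in_flight -= request_pages
```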
In some embodiments, dynamic bandwidth limiter 322 may periodically update the choker size during operation of storage system 100. In some embodiments, the choker size parameter may be dynamically configurable during operation of storage system 100, for example based upon operating conditions of storage system 100 (or of one or more clusters of storage system 100). In some embodiments, each cluster may have an individually settable choker size value. In other embodiments, a choker size value may be associated with more than one cluster.
For example, dynamic bandwidth limiter 322 may update its associated choker size once per monitoring interval. In illustrative embodiments, a monitoring interval may be on the order of several seconds, for example, 5, 10, or 15 seconds. During each monitoring interval, an average end-to-end I/O latency and/or a peak end-to-end I/O latency may be determined. As described herein, the end-to-end latency may be defined as an elapsed time for I/O requests to be completed by storage system 100. In some embodiments, the average and/or peak end-to-end latency may be determined for each cluster individually, for a group of two or more clusters, or for storage system 100 as a whole. At the end of each monitoring interval, dynamic bandwidth limiter 322 may update the choker size.
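One way to obtain the per-interval figures is sketched below (Python, hypothetical names), assuming each completed I/O reports its end-to-end elapsed time to the limiter:

```python
# Minimal sketch of per-interval latency monitoring (hypothetical names).
# Each completed I/O reports its end-to-end latency; at the end of the
# monitoring interval, the average and peak values drive the choker update.

import time


class LatencyMonitor:
    def __init__(self, interval_seconds: float = 10.0):
        self.interval = interval_seconds
        self.interval_start = time.monotonic()
        self.latencies_us = []                 # end-to-end latencies, in microseconds

    def record(self, latency_us: float) -> None:
        self.latencies_us.append(latency_us)

    def interval_complete(self) -> bool:
        return time.monotonic() - self.interval_start >= self.interval

    def summarize_and_reset(self) -> tuple[float, float]:
        """Return (average, peak) latency for the interval and start a new one."""
        avg = sum(self.latencies_us) / len(self.latencies_us) if self.latencies_us else 0.0
        peak = max(self.latencies_us, default=0.0)
        self.latencies_us.clear()
        self.interval_start = time.monotonic()
        return avg, peak
```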
For example, if the average I/O latency during a given monitoring interval is below a configurable latency threshold, dynamic bandwidth limiter 322 may increase the choker size for an associated cluster. In other words, since the cluster had low latency, the choker size may be increased because additional I/O requests may be processed without exceeding the latency threshold. Similarly, if the average I/O latency during a given monitoring interval is above the configurable latency threshold, dynamic bandwidth limiter 322 may decrease the choker size for an associated cluster. In other words, since the cluster had high latency, the choker size may be decreased, such that fewer I/O requests may be processed and the I/O latency of the cluster may be reduced. Some embodiments may employ a single latency threshold, while other embodiments may employ multiple latency thresholds, for example a first threshold to determine whether to decrease the choker size (e.g., a high latency threshold), and a second threshold to determine whether to increase the choker size (e.g., a low latency threshold). For example, in an illustrative embodiment, the high latency threshold may be 1000 microseconds, and the low latency threshold may be 700 microseconds.
In some embodiments, the choker size may be adjusted within a limited range (e.g., between a minimum choker size and a maximum choker size), which may be based upon parameters of storage system 100. Some embodiments may adjust the choker size by predetermined adjustment steps. Some embodiments may provide hysteresis in the adjustment of the choker value by employing a first adjustment step for increasing the choker size, and a second adjustment step for decreasing the choker size. In an illustrative embodiment, the adjustment step for increasing the choker size may be smaller than the adjustment step for decreasing the choker size, for example to slowly increase the number of I/O requests that a cluster may concurrently process but quickly reduce the number of I/O requests that a cluster may concurrently process when the cluster is overloaded. For example, in an illustrative embodiment, the choker size may initially be set to 240 pages (e.g., 240 8 KB pages), and may have a range from a minimum choker size of 120 pages to a maximum choker size of 28672 pages. In an illustrative embodiment, the adjustment step size may be 20 pages.
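Combining the latency thresholds with the clamped, asymmetric adjustment steps, a minimal Python sketch of the per-interval choker-size update might look as follows (the constants are the illustrative values given above; the larger decrease step is an assumption consistent with the hysteresis described):

```python
# Minimal sketch of the per-interval choker-size update (hypothetical names).
# Constants follow the illustrative values in the text: low/high latency
# thresholds of 700/1000 microseconds, choker size clamped to 120..28672
# pages, a 20-page increase step, and a larger (assumed) decrease step.

LOW_LATENCY_US = 700         # below this, the cluster can accept more work
HIGH_LATENCY_US = 1000       # above this, the cluster is overloaded
MIN_CHOKER_PAGES = 120
MAX_CHOKER_PAGES = 28672
INCREASE_STEP_PAGES = 20     # grow slowly
DECREASE_STEP_PAGES = 60     # shrink quickly (assumed larger than the increase step)


def update_choker_size(choker_size: int, interval_latency_us: float) -> int:
    """Return the new choker size given the interval's average or peak latency."""
    if interval_latency_us < LOW_LATENCY_US:
        choker_size += INCREASE_STEP_PAGES
    elif interval_latency_us > HIGH_LATENCY_US:
        choker_size -= DECREASE_STEP_PAGES
    # Between the two thresholds, the choker size is left unchanged.
    return max(MIN_CHOKER_PAGES, min(MAX_CHOKER_PAGES, choker_size))


# Example: a 1200-microsecond interval shrinks the choker from 240 to 180 pages.
assert update_choker_size(240, 1200) == 180
```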
In some embodiments, the sizes of the adjustment steps may be tuned based upon operating conditions of storage system 100. For example, dynamic bandwidth limiter 322 may adjust the adjustment steps to provide coarser (e.g., larger steps) adjustment of the choker size or finer (e.g., smaller steps) adjustment of the choker size, for example based upon how frequently the choker size has been updated.
Some embodiments may limit an amount the choker size may be adjusted in a given number of monitoring intervals, or may limit a number of times a given choker size may be adjusted in a given number of monitoring intervals.
Data (or I/O requests) in excess of the choker size for a given cluster may be either rejected or queued by the associated dynamic bandwidth limiter 322 until use of the cluster is below the choker size. To reduce sensitivity to traffic bursts, described embodiments of dynamic bandwidth limiter 322 may employ delayed rejection of excessive I/O requests. For example, dynamic bandwidth limiter 322 may queue a subsequent I/O request (e.g., in queue 324), and only reject a queued I/O request after a retention interval expires. For example, if a traffic burst causes numerous I/O requests in excess of the choker size, queued I/O requests may be processed as the traffic burst subsides, before the retention interval expires. However, in a system that is not experiencing bursty traffic, but is instead experiencing persistent overloading, queued I/O requests may be rejected as the retention interval expires for each queued I/O request. In some embodiments, the retention interval may be set dynamically based upon operating conditions of storage system 100. In some embodiments, the retention interval may be set by a user of storage system 100. In an illustrative embodiment, the retention interval may be set in the range of several seconds. For example, the retention interval may be 3 seconds.
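The delayed-rejection behavior can be sketched as follows (Python, hypothetical names; a 3-second retention interval as in the example above). Queued requests carry a timestamp and are rejected only once that timestamp ages past the retention interval:

```python
# Minimal sketch of delayed rejection with a retention interval (hypothetical
# names). Excessive requests are queued with a timestamp; a queued request is
# rejected only after it has waited longer than the retention interval without
# the cluster's load dropping enough to admit it.

import time
from collections import deque

RETENTION_INTERVAL_S = 3.0  # illustrative value from the text


class RetentionQueue:
    def __init__(self, retention_s: float = RETENTION_INTERVAL_S):
        self.retention = retention_s
        self.queue = deque()  # entries are (enqueue_time, request)

    def enqueue(self, request) -> None:
        self.queue.append((time.monotonic(), request))

    def expire_stale(self) -> list:
        """Reject (and return) every queued request older than the retention interval."""
        rejected = []
        now = time.monotonic()
        while self.queue and now - self.queue[0][0] >= self.retention:
            _, request = self.queue.popleft()
            rejected.append(request)
        return rejected

    def admit_next(self):
        """Pop the oldest queued request once capacity frees up, or None if empty."""
        return self.queue.popleft()[1] if self.queue else None
```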
If, at block 506, the I/O request is excessive, then at block 508, dynamic bandwidth limiter 322 may queue the I/O request in queue 324. Block 508 is described in greater detail in regard to
At block 516, if the monitoring interval is complete, then at block 518, dynamic bandwidth limiter 322 may update an associated choker threshold. Block 518 is described in greater detail in regard to
If, at block 608, any of the queued I/O requests exceed the retention interval, then at block 610, the I/O request(s) exceeding the retention interval may be rejected. For example, each queued I/O request may be stored in queue 324 with a time stamp, and if a difference between a current time and the time stamp value reaches or exceeds the retention interval, then at block 610, dynamic bandwidth limiter 322 may reject the I/O request. Process 508′ continues to block 612. If, at block 608, none of the queued I/O requests have reached or exceeded the retention interval, then at block 612, process 508′ completes.
At block 706, if the determined I/O latency (e.g., the peak latency and/or average latency) for the monitoring interval is at or below the choker threshold, then at block 712, dynamic bandwidth limiter 322 may increase the choker threshold to allow the associated cluster 304 to accept additional concurrent I/O requests, and/or larger concurrent I/O requests. Process 518′ continues to block 714. If, at block 706, the determined I/O latency (e.g., the peak latency and/or average latency) for the monitoring interval is not below the choker threshold, then at block 708, dynamic bandwidth limiter 322 may determine whether the determined I/O latency (e.g., the peak latency and/or average latency) for the monitoring interval is above the choker threshold.
If, at block 708, the determined I/O latency (e.g., the peak latency and/or average latency) for the monitoring interval is above the choker threshold, then at block 710, dynamic bandwidth limiter 322 may decrease the choker threshold to reduce the number of concurrent I/O requests, and/or the size of concurrent I/O requests, that the associated cluster 304 may accept. Process 518′ continues to block 714. If, at block 708, the determined I/O latency (e.g., the peak latency and/or average latency) for the monitoring interval is not above the choker threshold, then process 518′ continues to block 714. At block 714, process 518′ may complete.
As described, in some embodiments, dynamic bandwidth limiter 322 may employ a single latency threshold, while other embodiments may employ multiple latency thresholds, for example a first threshold to determine whether to decrease the choker size (e.g., a high latency threshold), and a second threshold to determine whether to increase the choker size (e.g., a low latency threshold).
As described, at blocks 710 and 712, dynamic bandwidth limiter 322 may decrease or increase, respectively, the choker threshold by predetermined adjustment steps. Some embodiments may provide hysteresis in the adjustment of the choker value by employing a first adjustment step for increasing the choker size, and a second adjustment step for decreasing the choker size. In an illustrative embodiment, the adjustment step for increasing the choker size may be smaller than the adjustment step for decreasing the choker size, for example to slowly increase the number of I/O requests that a cluster may concurrently process but quickly reduce the number of I/O requests that a cluster may concurrently process when the cluster is overloaded.
Referring to
Processes 400, 408′, 508′, and 518′ (
The processes described herein are not limited to the specific embodiments described. For example, processes 400, 408′, 508′, and 518′ are not limited to the specific processing order shown in
Processor 802 may be implemented by one or more programmable processors executing one or more computer programs to perform the functions of the system. As used herein, the term “processor” describes an electronic circuit that performs a function, an operation, or a sequence of operations. The function, operation, or sequence of operations may be hard coded into the electronic circuit or soft coded by way of instructions held in a memory device. A “processor” may perform the function, operation, or sequence of operations using digital values or using analog signals. In some embodiments, the “processor” can be embodied in an application specific integrated circuit (ASIC). In some embodiments, the “processor” may be embodied in a microprocessor with associated program memory. In some embodiments, the “processor” may be embodied in a discrete electronic circuit. The “processor” may be analog, digital or mixed-signal. In some embodiments, the “processor” may be one or more physical processors or one or more “virtual” (e.g., remotely located or “cloud”) processors.
While illustrative embodiments have been described with respect to processes of circuits, described embodiments may be implemented as a single integrated circuit, a multi-chip module, a single card, or a multi-card circuit pack. Further, as would be apparent to one skilled in the art, various functions of circuit elements may also be implemented as processing blocks in a software program. Such software may be employed in, for example, a digital signal processor, micro-controller, or general purpose computer. Thus, described embodiments may be implemented in hardware, a combination of hardware and software, software, or software in execution by one or more processors.
Some embodiments may be implemented in the form of methods and apparatuses for practicing those methods. Described embodiments may also be implemented in the form of program code, for example, stored in a storage medium, loaded into and/or executed by a machine, or transmitted over some transmission medium or carrier, such as over electrical wiring or cabling, through fiber optics, or via electromagnetic radiation. A non-transitory machine-readable medium may include but is not limited to tangible media, such as magnetic recording media including hard drives, floppy diskettes, and magnetic tape media, optical recording media including compact discs (CDs) and digital versatile discs (DVDs), solid state memory such as flash memory, hybrid magnetic and solid state memory, non-volatile memory, volatile memory, and so forth, but does not include a transitory signal per se. When embodied in a non-transitory machine-readable medium, and the program code is loaded into and executed by a machine, such as a computer, the machine becomes an apparatus for practicing the method.
When implemented on a processing device, the program code segments combine with the processor to provide a unique device that operates analogously to specific logic circuits. Such processing devices may include, for example, a general purpose microprocessor, a digital signal processor (DSP), a reduced instruction set computer (RISC), a complex instruction set computer (CISC), an application specific integrated circuit (ASIC), a field programmable gate array (FPGA), a programmable logic array (PLA), a microcontroller, an embedded controller, a multi-core processor, and/or others, including combinations of the above. Described embodiments may also be implemented in the form of a bitstream or other sequence of signal values electrically or optically transmitted through a medium, stored magnetic-field variations in a magnetic recording medium, etc., generated using a method and/or an apparatus as recited in the claims.
Various elements, which are described in the context of a single embodiment, may also be provided separately or in any suitable subcombination. It will be further understood that various changes in the details, materials, and arrangements of the parts that have been described and illustrated herein may be made by those skilled in the art without departing from the scope of the following claims.