Quality of service policy sets

Information

  • Patent Grant
  • 11886363
  • Patent Number
    11,886,363
  • Date Filed
    Monday, May 9, 2022
  • Date Issued
    Tuesday, January 30, 2024
Abstract
Disclosed are systems, computer-readable mediums, and methods for managing client performance in a storage system. In one example, the storage system receives a request from a client to write data to the storage system. The storage system estimates, based on a system metric associated with the storage system reflecting usage of the storage system, a requested write QoS parameter for storing the data by the storage system during a first time period. The storage system further determines a target write QoS parameter for the client based on the estimated requested write QoS parameter and an allocated write QoS parameter for the client. Then, the storage system independently regulates read performance and write performance of the client using a controller to adjust the write performance toward the determined target write QoS parameter within the first time period based on feedback regarding the estimated requested write QoS parameter.
Description
BACKGROUND

The following description is provided to assist the understanding of the reader. None of the information provided is admitted to be prior art.


In data storage architectures, a client's data may be stored in a volume. The client can access the client data from the volume via one or more volume servers coupled to the volume. The volume servers can map the locations of the data specified by the client, such as file name, drive name, etc., into unique identifiers that are specific to the location of the client's data on the volume. Using the volume server as an interface to the volume allows the freedom to distribute the data evenly over the one or more volumes. The even distribution of data can be beneficial in terms of volume and system performance.


Read and write requests of the client are typically transformed into read and/or write input/output operations (IOPS). For example, a file read request by a client can be transformed into one or more read IOPS of some size. Similarly, a file write request by the client can be transformed into one or more write IOPS.





BRIEF DESCRIPTION OF THE DRAWINGS

The foregoing and other features of the present disclosure will become more fully apparent from the following description and appended claims, taken in conjunction with the accompanying drawings.



FIG. 1A depicts a simplified system for performance management in a storage system in accordance with an illustrative implementation.



FIG. 1B depicts a more detailed example of a system in accordance with an illustrative implementation.



FIG. 2 depicts a user interface for setting quality of service parameters in accordance with an illustrative implementation.



FIG. 3 depicts a simplified flowchart of a method of performing performance management in accordance with an illustrative implementation.



FIG. 4 depicts a more detailed example of adjusting performance using a performance manager in accordance with an illustrative implementation.



FIG. 5 depicts a performance curve comparing the size of input/output operations with system load in accordance with an illustrative implementation.



FIG. 6 depicts a simplified flowchart of a method of performing performance management that matches an overloaded system metric with a client metric in accordance with an illustrative implementation.



FIG. 7A illustrates an example allocation of input-output operations to a client over a period of time t in accordance with an illustrative implementation.



FIG. 7B shows an example flow diagram of a process for independent control of read IOPS and write IOPS associated with a client in accordance with an illustrative implementation.



FIG. 7C shows example target IOPS associated with a client over a time period in accordance with an illustrative implementation.



FIG. 8 shows an example QoS Interface GUI 800 which may be configured or designed to enable service providers, users, and/or other entities to dynamically define and/or create different performance classes of use and/or to define performance/QoS related customizations in the storage system in accordance with an illustrative implementation.



FIG. 9 shows a portion of a storage system in accordance with an illustrative implementation.





DETAILED DESCRIPTION OF EXAMPLE EMBODIMENTS
Overview

In one embodiment, a method for managing input-output operations within a system including at least one client and a storage system includes receiving, at a processor, a number of allocated total input-output operations (IOPS), a number of allocated read IOPS and a number of allocated write IOPS for at least one client accessing a storage system during a first time period. Each of the number of allocated read IOPS and the number of allocated write IOPS is not greater than the number of allocated total IOPS. The method further includes receiving, at the processor, a requested number of write IOPS associated with the at least one client's request to write to the storage system. The method also includes determining, at the processor, a target write IOPS based on the number of allocated total IOPS, the number of allocated write IOPS and the requested number of write IOPS. The method further includes executing, by the processor, the determined target write IOPS within the first time period.


In one or more embodiments, a system includes a storage system and a processor coupled to the storage system. The storage system is configured to store client data. The processor is configured to receive a number of allocated total input-output operations (IOPS), a number of allocated read IOPS and a number of allocated write IOPS for at least one client accessing a storage system during a first time period. Each of the number of allocated read IOPS and the number of allocated write IOPS is not greater than the number of allocated total IOPS. The processor is further configured to receive a requested number of write IOPS associated with the at least one client's request to write to the storage system. The processor is additionally configured to determine a target write IOPS based on the number of allocated total IOPS, the number of allocated write IOPS, and the requested number of write IOPS. The processor is also configured to execute the determined target write IOPS within the first time period.


The foregoing summary is illustrative only and is not intended to be in any way limiting. In addition to the illustrative aspects, implementations, and features described above, further aspects, implementations, and features will become apparent by reference to the following drawings and the detailed description.


Specific Example Embodiments

One or more different inventions may be described in the present application. Further, for one or more of the invention(s) described herein, numerous embodiments may be described in this patent application, and are presented for illustrative purposes only. The described embodiments are not intended to be limiting in any sense. One or more of the invention(s) may be widely applicable to numerous embodiments, as is readily apparent from the disclosure. These embodiments are described in sufficient detail to enable those skilled in the art to practice one or more of the invention(s), and it is to be understood that other embodiments may be utilized and that structural, logical, software, electrical and other changes may be made without departing from the scope of the one or more of the invention(s). Accordingly, those skilled in the art will recognize that the one or more of the invention(s) may be practiced with various modifications and alterations. Particular features of one or more of the invention(s) may be described with reference to one or more particular embodiments or figures that form a part of the present disclosure, and in which are shown, by way of illustration, specific embodiments of one or more of the invention(s). It should be understood, however, that such features are not limited to usage in the one or more particular embodiments or figures with reference to which they are described. The present disclosure is neither a literal description of all embodiments of one or more of the invention(s) nor a listing of features of one or more of the invention(s) that must be present in all embodiments.


Headings of sections provided in this patent application and the title of this patent application are for convenience only, and are not to be taken as limiting the disclosure in any way. Devices that are in communication with each other need not be in continuous communication with each other, unless expressly specified otherwise. In addition, devices that are in communication with each other may communicate directly or indirectly through one or more intermediaries. A description of an embodiment with several components in communication with each other does not imply that all such components are required. To the contrary, a variety of optional components are described to illustrate the wide variety of possible embodiments of one or more of the invention(s).


Further, although process steps, method steps, algorithms or the like may be described in a sequential order, such processes, methods and algorithms may be configured to work in alternate orders. In other words, any sequence or order of steps that may be described in this patent application does not, in and of itself, indicate a requirement that the steps be performed in that order. The steps of described processes may be performed in any order practical. Further, some steps may be performed simultaneously despite being described or implied as occurring non-simultaneously (e.g., because one step is described after the other step). Moreover, the illustration of a process by its depiction in a drawing does not imply that the illustrated process is exclusive of other variations and modifications thereto, does not imply that the illustrated process or any of its steps are necessary to one or more of the invention(s), and does not imply that the illustrated process is preferred.


When a single device or article is described, it will be readily apparent that more than one device/article (whether or not they cooperate) may be used in place of a single device/article. Similarly, where more than one device or article is described (whether or not they cooperate), it will be readily apparent that a single device/article may be used in place of the more than one device or article. The functionality and/or the features of a device may be alternatively embodied by one or more other devices that are not explicitly described as having such functionality/features. Thus, other embodiments of one or more of the invention(s) need not include the device itself.


Techniques and mechanisms described or referenced herein will sometimes be described in singular form for clarity. However, it should be noted that particular embodiments include multiple iterations of a technique or multiple instantiations of a mechanism unless noted otherwise.


DETAILED DESCRIPTION

Described herein are techniques for a performance management storage system. In the following description, for purposes of explanation, numerous examples and specific details are set forth in order to provide a thorough understanding of various implementations. Particular implementations as defined by the claims may include some or all of the features in these examples alone or in combination with other features described below, and may further include modifications and equivalents of the features and concepts described herein.


Storage System



FIG. 1A depicts a simplified system for performance management in a storage system 100 in accordance with an illustrative implementation. System 100 includes a client layer 102, a metadata layer 104, a block server layer 106, and storage 116.


Before discussing how particular implementations manage performance of clients 108, the structure of a possible system is described. Client layer 102 includes one or more clients 108a-108n. Clients 108 include client processes that may exist on one or more physical machines. When the term “client” is used in the disclosure, the action being performed may be performed by a client process. A client process is responsible for storing, retrieving, and deleting data in system 100. A client process may address pieces of data depending on the nature of the storage system and the format of the data stored. For example, the client process may reference data using a client address. The client address may take different forms. For example, in a storage system that uses file storage, client 108 may reference a particular volume or partition, and a file name. With object storage, the client address may be a unique object name. For block storage, the client address may be a volume or partition, and a block address. Clients 108 communicate with metadata layer 104 using different protocols, such as small computer system interface (SCSI), Internet small computer system interface (iSCSI), fibre channel (FC), common Internet file system (CIFS), network file system (NFS), hypertext transfer protocol (HTTP), web-based distributed authoring and versioning (WebDAV), or a custom protocol.


Metadata layer 104 includes one or more metadata servers 110a-110n. Performance managers 114 may be located on metadata servers 110a-110n. Block server layer 106 includes one or more block servers 112a-112n. Block servers 112a-112n are coupled to storage 116, which stores volume data for clients 108. Each client 108 may be associated with a volume. In one implementation, only one client 108 accesses data in a volume; however, multiple clients 108 may access data in a single volume.


Storage 116 can include multiple solid state drives (SSDs). In one implementation, storage 116 can be a cluster of individual drives coupled together via a network. When the term “cluster” is used, it will be recognized that cluster may represent a storage system that includes multiple disks that may not be networked together. In one implementation, storage 116 uses solid state memory to store persistent data. SSDs use microchips that store data in non-volatile memory chips and contain no moving parts. One consequence of this is that SSDs allow random access to data in different drives in an optimized manner as compared to drives with spinning disks. Read or write requests to non-sequential portions of SSDs can be performed in a comparable amount of time as compared to sequential read or write requests. In contrast, if spinning disks were used, random read/writes would not be efficient since inserting a read/write head at various random locations to read data results in slower data access than if the data is read from sequential locations. Accordingly, using electromechanical disk storage can require that a client's volume of data be concentrated in a small relatively sequential portion of the cluster to avoid slower data access to non-sequential data. Using SSDs removes this limitation.


In various implementations, non-sequentially storing data in storage 116 is based upon breaking data up into one or more storage units, e.g., data blocks. A data block, therefore, is the raw data for a volume and may be the smallest addressable unit of data. The metadata layer 104 or the client layer 102 can break data into data blocks. The data blocks can then be stored on multiple block servers 112. Data blocks can be of a fixed size, can be initially a fixed size but compressed, or can be of a variable size. Data blocks can also be segmented based on the contextual content of the block. For example, data of a particular type may have a larger data block size compared to other types of data. Maintaining segmentation of the blocks on a write (and corresponding re-assembly on a read) may occur in client layer 102 and/or metadata layer 104. Also, compression may occur in client layer 102, metadata layer 104, and/or block server layer 106.


In addition to storing data non-sequentially, data blocks can be stored to achieve substantially even distribution across the storage system. In various examples, even distribution can be based upon a unique block identifier. A block identifier can be an identifier that is determined based on the content of the data block, such as by a hash of the content. The block identifier is unique to that block of data. For example, blocks with the same content have the same block identifier, but blocks with different content have different block identifiers. To achieve even distribution, the values of possible unique identifiers can have a uniform distribution. Accordingly, storing data blocks based upon the unique identifier, or a portion of the unique identifier, results in the data being stored substantially evenly across drives in the cluster.
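
As a minimal sketch of the idea described above (the hash function, identifier width, and drive-selection rule are assumptions for illustration, not necessarily the disclosed implementation), a content-derived block identifier can drive placement: hashing the block content yields a near-uniformly distributed identifier, and mapping that identifier onto the drive count spreads blocks evenly across the cluster.

```python
import hashlib

def block_id(data: bytes) -> bytes:
    # Content-based identifier: identical content yields the identical ID,
    # and a cryptographic hash gives a near-uniform distribution of IDs.
    return hashlib.sha256(data).digest()

def pick_drive(bid: bytes, num_drives: int) -> int:
    # Using a prefix of the uniformly distributed ID to choose a drive
    # spreads blocks (and therefore every volume's data) evenly.
    return int.from_bytes(bid[:8], "big") % num_drives

blocks = [b"alpha" * 1000, b"bravo" * 1000, b"alpha" * 1000]
ids = [block_id(b) for b in blocks]
assert ids[0] == ids[2]          # duplicate content maps to the same identifier
print([pick_drive(i, 10) for i in ids])
```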


Because client data, e.g., a volume associated with the client, is spread evenly across all of the drives in the cluster, every drive in the cluster is involved in the read and write paths of each volume. This configuration balances the data and load across all of the drives. This arrangement also removes hot spots within the cluster, which can occur when a client's data is stored sequentially on any volume.


In addition, having data spread evenly across drives in the cluster allows a consistent total aggregate performance of a cluster to be defined and achieved. This aggregation can be achieved since data for each client is spread evenly through the drives. Accordingly, a client's I/O will involve all the drives in the cluster. Since all clients have their data spread substantially evenly through all the drives in the storage system, the performance of the system can be described in aggregate as a single number, e.g., the sum of performance of all the drives in the storage system.


Block servers 112 and slice servers 124 maintain a mapping between a block identifier and the location of the data block in a storage medium of block server 112. A volume includes these unique and uniformly random identifiers, and so a volume's data is also evenly distributed throughout the cluster.


Metadata layer 104 stores metadata that maps between client layer 102 and block server layer 106. For example, metadata servers 110 map between the client addressing used by clients 108 (e.g., file names, object names, block numbers, etc.) and block layer addressing (e.g., block identifiers) used in block server layer 106. Clients 108 may perform access based on client addresses. However, as described above, block servers 112 store data based upon identifiers and do not store data based on client addresses. Accordingly, a client can access data using a client address which is eventually translated into the corresponding unique identifiers that reference the client's data in storage 116.


Although the parts of system 100 are shown as being logically separate, entities may be combined in different fashions. For example, the functions of any of the layers may be combined into a single process or single machine (e.g., a computing device) and multiple functions or all functions may exist on one machine or across multiple machines. Also, when operating across multiple machines, the machines may communicate using a network interface, such as a local area network (LAN) or a wide area network (WAN). In one implementation, one or more metadata servers 110 may be combined with one or more block servers 112 in a single machine. Entities in system 100 may be virtualized entities. For example, multiple virtual block servers 112 may be included on a machine. Entities may also be included in a cluster, where computing resources of the cluster are virtualized such that the computing resources appear as a single entity.



FIG. 1B depicts a more detailed example of system 100 according to one implementation. Metadata layer 104 may include a redirector server 120 and multiple volume servers 122. Each volume server 122 may be associated with a plurality of slice servers 124.


In this example, client 108a wants to connect to a volume (e.g., client address). Client 108a communicates with redirector server 120, identifies itself by an initiator name, and also indicates a volume by target name that client 108a wants to connect to. Different volume servers 122 may be responsible for different volumes. In this case, redirector server 120 is used to redirect the client to a specific volume server 122. To client 108, redirector server 120 may represent a single point of contact. The first request from client 108a then is redirected to a specific volume server 122. For example, redirector server 120 may use a database of volumes to determine which volume server 122 is a primary volume server for the requested target name. The request from client 108a is then directed to the specific volume server 122 causing client 108a to connect directly to the specific volume server 122. Communications between client 108a and the specific volume server 122 may then proceed without redirector server 120.


Volume server 122 performs functions as described with respect to metadata server 110. Additionally, each volume server 122 includes a performance manager 114. For each volume hosted by volume server 122, a list of block identifiers is stored with one block identifier for each logical block on the volume. Each volume may be replicated between one or more volume servers 122 and the metadata for each volume may be synchronized between each of the volume servers 122 hosting that volume. If volume server 122 fails, redirector server 120 may direct client 108 to an alternate volume server 122.


In one implementation, the metadata being stored on volume server 122 may be too large for one volume server 122. Thus, multiple slice servers 124 may be associated with each volume server 122. The metadata may be divided into slices and a slice of metadata may be stored on each slice server 124. When a request for a volume is received at volume server 122, volume server 122 determines which slice server 124 contains metadata for that volume. Volume server 122 then routes the request to the appropriate slice server 124. Accordingly, slice server 124 adds an additional layer of abstraction to volume server 122.
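
One way to picture the slice-server layer is the sketch below. The hash-based assignment is an assumption for illustration only; the disclosure states that metadata is divided into slices and routed by the volume server, but does not fix how a slice is chosen.

```python
import hashlib

def slice_for_volume(volume_id: str, num_slice_servers: int) -> int:
    # Hash the volume identifier so metadata slices are spread across the
    # slice servers and every request for a given volume lands on the same one.
    digest = hashlib.sha1(volume_id.encode()).digest()
    return int.from_bytes(digest[:4], "big") % num_slice_servers

# The volume server would forward the request to this slice server.
print(slice_for_volume("volume-42", num_slice_servers=8))
```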


The above structure allows storing of data evenly across the cluster of disks. For example, by storing data based on block identifiers, data can be evenly stored across drives of a cluster. As described above, data evenly stored across the cluster allows for performance metrics to manage load in system 100. If the system 100 is under a load, clients can be throttled or locked out of a volume. When a client is locked out of a volume, metadata server 110 or volume server 122 may close the command window or reduce or zero the amount of read or write data that is being processed at a time for client 108. The metadata server 110 or the volume server 122 can queue access requests for client 108, such that IO requests from the client 108 can be processed after the client's access to the volume resumes after the lock out period.


Performance Metrics and Load of the Storage System


The storage system 100 can also include a performance manager 114 that can monitor clients' use of the storage system's resources. In addition, performance manager 114 can regulate the client's use of the storage system 100. The client's use of the storage system can be adjusted based upon performance metrics, the client's quality of service parameters, and the load of the storage system. Performance metrics are various measurable attributes of the storage system. One or more performance metrics can be used to calculate a load of the system, which, as described in greater detail below, can be used to throttle clients of the system.


Performance metrics can be grouped in different categories of metrics. System metrics is one such category. System metrics are metrics that reflect the use of the system or components of the system by all clients. System metrics can include metrics associated with the entire storage system or with components within the storage system. For example, system metrics can be calculated at the system level, cluster level, node level, service level, or drive level. Space utilization is one example of a system metric. The cluster space utilization reflects how much space is available for a particular cluster, while the drive space utilization metric reflects how much space is available for a particular drive. Space utilization metrics can also be determined at the system level, service level, and the node level. Other examples of system metrics include measured or aggregated metrics such as read latency, write latency, input/output operations per second (IOPS), read IOPS, write IOPS, I/O size, write cache capacity, dedupe-ability, compressibility, total bandwidth, read bandwidth, write bandwidth, read/write ratio, workload type, data content, data type, etc.


IOPS can be real input/output operations per second that are measured for a cluster or drive. Bandwidth may be the amount of data that is being transferred between clients 108 and the volume of data. Read latency can be the time taken for the system 100 to read data from a volume and return the data to a client. Write latency can be the time taken for the system to write data and return a success indicator to the client. Workload type can indicate if IO access is sequential or random. The data type can identify the type of data being accessed/written, e.g., text, video, images, audio, etc. The write cache capacity refers to a write cache of a node, a block server, or a volume server. The write cache is relatively fast memory that is used to store data before it is written to storage 116. As noted above, each of these metrics can be independently calculated for the system, a cluster, a node, etc. In addition, these values can also be calculated at a client level.


Client metrics are another category of metrics that can be calculated. Unlike system metrics, client metrics are calculated taking into account the client's use of the system. As described in greater detail below, a client metric may include use by other clients that are using common features of the system. Client metrics, however, will not include use of non-common features of the system by other clients. In one implementation, client metrics can include the same metrics as the system metrics, but rather than being component or system wide, are specific to a volume of the client. For example, metrics such as read latency or write IOPS can be monitored for a particular volume of a client.


Metrics, both system and client, can be calculated over a period of time, e.g., 250 ms, 500 ms, 1 s, etc. Accordingly, different values such as a min, max, standard deviation, average, etc., can be calculated for each metric. One or more of the metrics can be used to calculate a value that represents a load of the storage system. Loads can be calculated for the storage system as a whole, for individual components, for individual services, and/or individual clients. Load values, e.g., system load values and/or client load values, can then be used by the quality of service system to determine if and how clients should be throttled.
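
A small sketch of the windowed aggregation described above (assumed sample values; the exact combination of summaries into a load value is left open by the disclosure and is not shown here):

```python
from statistics import mean, pstdev

def aggregate(samples):
    # Reduce raw metric samples from one measurement window (e.g., 500 ms)
    # into the summary values mentioned above.
    return {
        "min": min(samples),
        "max": max(samples),
        "avg": mean(samples),
        "stddev": pstdev(samples),
    }

# IOPS samples collected for one client (or the whole cluster) in one window.
window = [880, 910, 1020, 970]
print(aggregate(window))
# One or more such summaries (per client, per node, per cluster) would then be
# combined into a normalized load value used for throttling decisions.
```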


As described in greater detail below, performance for individual clients can be adjusted based upon the monitored metrics. For example, based on a number of factors, such as system metrics, client metrics, and client quality of service parameters, a number of IOPS that can be performed by a client 108 over a period of time may be managed. In one implementation, performance manager 114 regulates the number of IOPS that are performed by locking client 108 out of a volume for different amounts of time to manage how many IOPS can be performed by client 108. For example, when client 108 is heavily restricted, client 108 may be locked out of accessing a volume for 450 milliseconds every 500 milliseconds, and when client 108 is not heavily restricted, client 108 is locked out of the volume for 50 milliseconds every 500 milliseconds. The lockout effectively manages the number of IOPS that client 108 can perform every 500 milliseconds. Although examples using IOPS are described, other metrics may also be used, as will be described in more detail below.
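
The arithmetic behind the lockout scheme can be sketched as follows. This is a simplified model that assumes IOPS scale linearly with the fraction of each 500 millisecond period during which the client is allowed to issue requests.

```python
PERIOD_MS = 500

def effective_iops(unthrottled_iops: float, lockout_ms: float) -> float:
    # The client can only issue IO during the unlocked fraction of each period,
    # so longer lockouts translate directly into fewer IOPS per period.
    open_fraction = (PERIOD_MS - lockout_ms) / PERIOD_MS
    return unthrottled_iops * open_fraction

print(effective_iops(1000, lockout_ms=450))  # heavily restricted -> ~100 IOPS
print(effective_iops(1000, lockout_ms=50))   # lightly restricted -> ~900 IOPS
```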


The use of metrics to manage load in system 100 is possible because a client's effect on global cluster performance is predictable due to the evenness of distribution of data, and therefore, data load. For example, by locking out client 108 from accessing the cluster, the load in the cluster may be effectively managed. Because load is evenly distributed, reducing access to the client's volume reduces that client's load evenly across the cluster. However, conventional storage architectures where hot spots may occur result in unpredictable cluster performance. Thus, reducing access by a client may not alleviate the hot spots because the client may not be accessing the problem areas of the cluster. Because in the described embodiment client loads are evenly distributed through the system, a global performance pool can be calculated and individual client contributions to how the system is being used can also be calculated.


Client Quality of Service Parameters


In addition to system metrics and client metrics, client quality of service (QoS) parameters can be used to affect how a client uses the storage system. Unlike metrics, client QoS parameters are not measured values, but rather variables that can be set to define the desired QoS bounds for a client. Client QoS parameters can be set by an administrator or a client. In one implementation, client QoS parameters include minimum, maximum, and max burst values. Using IOPS as an example, a minimum IOPS value is a proportional amount of performance of a cluster for a client. Thus, the minimum IOPS is not a guarantee that the volume will always perform at this minimum IOPS value. When a volume is in an overload situation, the minimum IOPS value is the minimum number of IOPS that the system attempts to provide the client. However, based upon cluster performance, an individual client's IOPS may be lower or higher than the minimum value during an overload situation. In one implementation, the system 100 can be provisioned such that the sum of the minimum IOPS across all clients is such that the system 100 can sustain the minimum IOPS value for all clients at a given time. In this situation, each client should be able to perform at or above its minimum IOPS value. The system 100, however, can also be provisioned such that the sum of the minimum IOPS across all clients is such that the system 100 cannot sustain the minimum IOPS for all clients. In this case, if the system becomes overloaded through the use of all clients, the client's realized IOPS can be less than the client's minimum IOPS value. In failure situations, the system may also throttle users such that their realized IOPS are less than their minimum IOPS value. A maximum IOPS parameter is the maximum sustained IOPS value over an extended period of time. The max burst IOPS parameter is the maximum IOPS value that a client can “burst” above the maximum IOPS parameter for a short period of time based upon credits. In one implementation, credits for a client are accrued when the client is operating under their respective maximum IOPS parameter. Accordingly, a client will only be able to use the system in accordance with their respective maximum IOPS and maximum burst IOPS parameters. For example, a single client will not be able to use the system's full resources, even if they are available, but rather, is bounded by their respective maximum IOPS and maximum burst IOPS parameters.
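
A minimal sketch of the burst-credit idea follows. The class name, the accrual rule, and the credit cap are assumptions made for illustration; the disclosure only states that credits accrue while the client runs under its maximum IOPS and can be spent to burst above it.

```python
class BurstAccount:
    """Tracks burst credits for one client (illustrative accounting only)."""

    def __init__(self, max_iops: int, burst_iops: int, max_credits: int):
        self.max_iops = max_iops
        self.burst_iops = burst_iops
        self.max_credits = max_credits
        self.credits = 0

    def allowed_iops(self, requested_iops: int) -> int:
        if requested_iops < self.max_iops:
            # Operating under the maximum: bank the unused headroom as credits.
            self.credits = min(self.max_credits,
                               self.credits + (self.max_iops - requested_iops))
            return requested_iops
        # Above the maximum: spend credits, never exceeding the burst ceiling.
        extra = min(requested_iops - self.max_iops,
                    self.credits,
                    self.burst_iops - self.max_iops)
        self.credits -= extra
        return self.max_iops + extra

acct = BurstAccount(max_iops=1000, burst_iops=1500, max_credits=4000)
print([acct.allowed_iops(r) for r in (600, 600, 1500, 1500)])
```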


As noted above, client QoS parameters can be changed at any time by the client or an administrator. FIG. 2 depicts a user interface 200 for setting client QoS in accordance with one illustrative implementation. The user interface 200 can include inputs that are used to change various QoS parameters. For example, slide bars 202 and/or text boxes 204 can be used to adjust QoS parameters. As noted above in one implementation, client QoS parameters include a minimum IOPS, a maximum IOPS, and a maximum burst IOPS. Each of these parameters can be adjusted with inputs, e.g., slide bars and/or text boxes. In addition, the IOPS for different size IO operations can be shown. In the user interface 200, the QoS parameters associated with 4 k sized IO operations are changed. When any performance parameter is changed, the corresponding IOPS for different sized IO operations are automatically adjusted. For example, when the burst parameter is changed, IOPS values 206 are automatically adjusted. Once the QoS parameters have been set, activating a save changes button 208 updates the client's QoS parameters. As described below, the target performance manager 402 can use the updated QoS parameters, such that the updated QoS parameters take effect immediately. The updated QoS parameters take effect without requiring any user data to be moved in the system.


Performance Management



FIG. 3 depicts a simplified flowchart 300 of a method of performing performance management according to one implementation. Additional, fewer, or different operations of the method 300 may be performed, depending on the particular embodiment. The method 300 can be implemented on a computing device. In one implementation, the method 300 is encoded on a computer-readable medium that contains instructions that, when executed by a computing device, cause the computing device to perform operations of the method 300.


At 302, performance manager 114 determines a client load based on one or more performance metrics. For example, performance manager 114 may calculate a client's load based on different performance metrics, such as IOPS, bandwidth, and latency. The metrics may be historical metrics and/or current performance metrics. Historical performance may measure previous performance for an amount of time, such as the last week of performance metrics. Current performance may be real-time performance metrics. Using these performance metrics, e.g., system metrics and/or client metrics, a load value is calculated.


At 303, performance manager 114 gathers information about health of the cluster. The health of the cluster may be information that can quantify performance of the cluster, such as a load value. The cluster health information may be gathered from different parts of system 100, and may include health in many different aspects of system 100, such as system metrics and/or client metrics. In addition, cluster health information can be calculated as a load value from the client and/or system metrics. The health information may not be cluster-wide, but may include information that is local to the volume server 122 that is performing the performance management. The cluster health may be affected; for example, if there is a cluster data rebuild occurring, total performance of the cluster may drop. Also, when data discarding, adding or removing of nodes, adding or removing of volumes, power failures, used space, or other events affecting performance are occurring, performance manager 114 gathers this information from the cluster.


At 304, performance manager 114 determines a target performance value. For example, based on the load values and client quality of service parameters, a target performance value is determined. The target performance value may be based on different criteria, such as load values, client metrics, system metrics, and quality of service parameters. The target performance value is the value at which performance manager 114 would like client 108 to operate. For example, the target performance may be 110 IOPS.


At 306, performance manager 114 adjusts the performance of client 108. For example, the future client performance may be adjusted toward the target performance value. If IOPS are being measured as the performance metric, the number of IOPS a client 108 performs over a period of time may be adjusted to the target performance value. For example, latency can be introduced or removed to allow the number of IOPS that a client can perform to fluctuate. In one example, if the number of IOPS in the previous client performance is 80 and the target performance value is 110 IOPS, then the performance of the client is adjusted to allow client 108 to perform more IOPS such that the client's performance moves toward performing 110 IOPS.


Traditional provisioning systems attempt to achieve a quality of service by placing a client's data on a system that should provide the client with the requested quality of service. A client requesting a change to their quality of service, therefore, can require that the client's data be moved from one system to another system. For example, a client that wants to greatly increase its quality of service may need to be moved to a more robust system to ensure the increased quality of service. Unlike the traditional provisioning systems, the performance manager can dynamically adjust quality of service for specific clients without moving the client's data to another cluster. Accordingly, quality of service for a client can be adjusted instantly, and a client can change QoS parameters without requiring manual intervention for those QoS parameters to take effect. This feature allows the client to schedule changes to their QoS parameters. For example, if a client performs backups on the first Sunday of every month from 2:00 am-4:00 am, they could have their QoS parameters automatically change just prior to the start of the backup and change back after the backup finishes. This aspect allows a client the flexibility to schedule changes to their QoS parameters based upon the client's need. As another example, the client can be presented with a turbo button. When selected, the turbo button increases the client's QoS parameters by some factor, e.g., 3, 4, 5, etc., or to some large amount. Clients could use this feature if their data needs were suddenly increased, such as when a client's website is experiencing a high number of visitors. The client could then unselect the turbo button to return to their original QoS parameters. Clients could be charged for how long they use the turbo button features. In another implementation, the turbo button remains in effect for a predetermined time before the client's original QoS parameters are reset.


In addition to the above examples, clients and/or administrators can set client QoS parameters based upon various conditions. In addition, as noted above client QoS parameters are not limited to IOPS. In different implementations, client QoS parameters can be bandwidth, latency, etc. According to different embodiments, the storage system may be configured or designed to allow service providers, clients, administrators and/or users, to selectively and dynamically configure and/or define different types of QoS and provisioning rules which, for example, may be based on various different combinations of QoS parameters and/or provisioning/QoS target types, as desired by a given user or client.


According to different embodiments, examples of client QoS parameters may include, but are not limited to, one or more of the following (or combinations thereof):

    • IOPS;
    • Bandwidth;
    • Write Latency;
    • Read Latency;
    • Write buffer queue depth;
    • I/O Size (e.g., amount of bytes accessed per second);
    • I/O Type (e.g., Read I/Os, Write I/Os, etc.);
    • Data Properties such as, for example, Workload Type (e.g., Sequential, Random); Dedupe-ability; Compressibility; Data Content; Data Type (e.g., text, video, images, audio, etc.); etc.


According to different embodiments, examples of various provisioning/QoS target types may include, but are not limited to, one or more of the following (or combinations thereof):

    • Service or group of Services;
    • Client or group of Clients;
    • Connection (e.g. Client connection);
    • Volume, or group of volumes;
    • Node or group of nodes;
    • Account/Client;
    • User;
    • iSCSI Session;
    • Time segment;
    • Read IOPS amount;
    • Write IOPS amount;
    • Application Type;
    • Application Priority;
    • Region of Volume (e.g., Subset of LBAs);
    • Volume Session(s);
    • I/O size;
    • Data Property type;
    • etc.



FIG. 8 shows an example QoS Interface GUI 800 which may be configured or designed to enable service providers, users, and/or other entities to dynamically define and/or create different performance classes of use and/or to define performance/QoS related customizations in the storage system. In at least one embodiment, the QoS Interface GUI 800 may be configured or designed to allow service providers, users, and/or other entities to dynamically switch between the different performance classes of use, allowing such clients to dynamically change their performance settings on the fly (e.g., in real-time).


For example, according to various embodiments, a service provider may dynamically define and/or create different performance classes of use in the storage system, may allow clients to dynamically switch between the different performance classes of use, allowing such clients to dynamically modify or change their performance settings on the fly (e.g., in real-time). In at least one embodiment, the storage system is configured or designed to immediately implement the specified changes for the specified provisioning/QoS Targets, and without requiring the client's storage volume to be taken off-line to implement the performance/QoS modifications. In at least one embodiment, the different performance classes of use may each have associated therewith a respective set of QoS and/or provisioning rules (e.g., 810) which, for example, may be based on various different combinations of QoS parameters and/or provisioning/QoS target types.


In the context of the present example, the respective set of QoS and/or provisioning rules 810 include an “IF” portion 820 and a “THEN” portion 840. The “IF” portion 820 includes a first conditional expression 88 and a second conditional expression 823 connected by a Boolean operator B 825. The first conditional expression 88 includes a boundary condition A 824, an operator A 826, and a threshold value A 828. The plus symbol 823 may be selected to add additional conditional expressions. When the one or more conditional expressions of the “IF” portion 820 evaluate to true, the “THEN” portion is performed. The “THEN” portion 840 includes a first performance setting 841 and a second performance setting 843 connected by a Boolean operator D 845. The first performance setting 841 includes a QoS parameter type C 842 that is to be set for provisioning/QoS target C 844 based on operator C 846 to a threshold value C 848. The plus symbol 847 may be selected to add additional performance settings.
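
The rule structure of FIG. 8 can be pictured with the small sketch below. The class names, the operator table, and the evaluation logic are assumptions for illustration only, not the GUI's actual data model.

```python
import operator
from dataclasses import dataclass
from typing import Callable

OPS = {"<": operator.lt, ">": operator.gt, ">=": operator.ge, "==": operator.eq}

@dataclass
class Condition:               # e.g. boundary condition 824 / operator 826 / threshold 828
    metric: str
    op: str
    threshold: float

@dataclass
class Setting:                 # e.g. QoS parameter 842 applied to target 844
    target: str
    parameter: str
    value: float

@dataclass
class QosRule:
    conditions: list
    combine: Callable          # Boolean operator 825, e.g. all (AND) or any (OR)
    settings: list             # the "THEN" portion 840

def evaluate(rule: QosRule, metrics: dict) -> list:
    # When the "IF" portion evaluates to true, return the settings to apply.
    hits = [OPS[c.op](metrics[c.metric], c.threshold) for c in rule.conditions]
    return rule.settings if rule.combine(hits) else []

rule = QosRule(
    conditions=[Condition("LOAD(Write)", ">", 0.8), Condition("Time", ">=", 22)],
    combine=all,
    settings=[Setting("Volume ID 7", "MAX IOPS", 15000)],
)
print(evaluate(rule, {"LOAD(Write)": 0.9, "Time": 23}))
```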


The above process for performing performance management may be performed continuously over periods of time. For example, a period of 500 milliseconds is used to evaluate whether performance should be adjusted. As will be described in more detail below, client 108 may be locked out of performing IOPS for a certain amount of time each period to reduce or increase the number of IOPS being performed.


Examples of different types of conditions, criteria and/or other information which may be used to configure the QoS Interface GUI of FIG. 8 may include, but are not limited to, one or more of the following (or combinations thereof):


Example Boundary Conditions (e.g., 824)

    • LOAD(Service);
    • LOAD(Read);
    • LOAD(Write);
    • LOAD(Write_Buffer);
    • LOAD(Client-Read);
    • LOAD(Client-Write);
    • LOAD(Client);
    • LOAD(Cluster);
    • LOAD(System);
    • Write Latency;
    • Read Latency;
    • Write buffer queue depth;
    • Volume ID;
    • Group ID;
    • Account ID;
    • Client ID;
    • User ID;
    • iSCSI Session ID;
    • Time;
    • Date;
    • Read IOPS;
    • Write IOPS;
    • Application Type;
    • Application Priority;
    • Region of Volume;
    • LBA ID;
    • Volume Session ID;
    • Connection ID;
    • I/O size;
    • I/O Type;
    • Workload Type;
    • Dedupe-ability;
    • Compressibility;
    • Data Content;
    • Data Type;
    • Data Properties;
    • Detectable Condition and/or Event;
    • etc.

Example QoS Parameters (e.g., 842)

    • MAX IOPS;
    • MIN IOPS;
    • BURST IOPS;
    • MAX Bandwidth;
    • MIN Bandwidth;
    • BURST Bandwidth;
    • MAX Latency;
    • MIN Latency;
    • BURST Latency;
    • MAX I/O Size;
    • MIN I/O Size;
    • BURST I/O Size;
    • MAX Read I/O;
    • MIN Read I/O;
    • BURST Read I/O;
    • MAX Write I/O;
    • MIN Write I/O;
    • BURST Write I/O;
    • I/O Type;
    • Workload Type;
    • Dedupe-ability;
    • Compressibility;
    • Data Content;
    • Data Type;
    • Billing Amount.

Example Provisioning/QoS Targets (e.g., 844)

    • Cluster ID;
    • Service ID;
    • Client ID;
    • Connection ID;
    • Node ID;
    • Volume ID;
    • Group ID;
    • Account ID;
    • User ID;
    • iSCSI Session ID;
    • Time;
    • Date;
    • Read IOPS;
    • Write IOPS;
    • Application Type;
    • Application Priority;
    • Region of Volume;
    • LBA ID;
    • Volume Session ID;
    • I/O size;
    • I/O Type;
    • Workload Type;
    • Dedupe-ability;
    • Compressibility;
    • Data Content;
    • Data Type;
    • Data Properties;
    • etc.

Example Operators (e.g., 826, 846)

    • Equal To;
    • Less Than;
    • Greater Than;
    • Less Than or Equal To;
    • Greater Than or Equal To;
    • Within Range of;
    • Not Equal To;
    • Contains;
    • Does Not Contain;
    • Matches;
    • Regular Expression(s).

Example Threshold Values (e.g., 828, 848)

    • Alpha-numeric value(s);
    • Numeric value(s);
    • Numeric Range(s);
    • Numeric value per Time Interval value (e.g., 5000 IOPS/sec);
    • Sequential Type;
    • Random Type;
    • Text Type;
    • Video Type;
    • Audio Type;
    • Image Type;
    • Performance Class of Use Value.

Example Boolean Operators (e.g., 825, 845)

    • AND;
    • OR;
    • XOR;
    • NOT;
    • EXCEPT;
    • NAND;
    • NOR;
    • XNOR.

The following example scenarios help to illustrate the various features and functionalities enabled by the QoS Interface GUI 800, and help to illustrate the performance/QoS related provisioning features of the storage system:


Example A—Configuring/provisioning the storage system to automatically and/or dynamically increase storage performance to enable a backup to go faster during a specified window of time. For example, in one embodiment, the speed of a volume backup operation may be automatically and dynamically increased during a specified time interval by causing a MAX IOPS value and/or MIN IOPS value to be automatically and dynamically increased during that particular time interval.


Example B—Configuring/provisioning the storage system to automatically and/or dynamically enable a selected initiator to perform faster sequential I/Os from 10 pm to Midnight.


Example C—Configuring/provisioning the storage system to automatically and/or dynamically enable a selected application to have increased I/O storage performance.


Example D—Configuring/provisioning the storage system to automatically and/or dynamically enable a selected group of clients to have their respective MAX, MIN and BURST IOPS double on selected days/dates of each month.


Example E—Configuring/provisioning the storage system to present a client or user with a “Turbo Boost” interface which includes a virtual Turbo Button. Client may elect to manually activate the Turbo Button (e.g., on the fly or in real-time) to thereby cause the storage system to automatically and dynamically increase the level of performance provisioned for that Client. For example, in one embodiment, client activation of the Turbo Button may cause the storage system to automatically and dynamically increase the client's provisioned performance by a factor of 3× for one hour. In at least one embodiment, the dynamic increase in provisioned performance may automatically cease after a predetermined time interval. In at least one embodiment, the storage system may be configured or designed to charge the client an increased billing amount for use of the Turbo Boost service/feature.


Example F—Configuring/provisioning the storage system to automatically and/or dynamically charge an additional fee or billing amount for dynamically providing increased storage array performance at a particular time (e.g., to allow a backup to go faster).


Example G—Configuring/provisioning the storage system to automatically and/or dynamically charge an additional fee or billing amount for IOPS and/or I/O access of the storage system which exceeds minimum threshold value(s) during one or more designated time intervals.


Performance manager 114 may use different ways of adjusting performance. FIG. 4 depicts a more detailed example of adjusting performance using performance manager 114 according to one implementation. A target performance manager 402 determines a target performance value. In one implementation, target performance manager 402 uses the client's QoS parameters, system metrics, and client metrics to determine the target performance value. System metrics and client metrics can be used to determine the system load and client load. As an example, client load can be measured based on client metrics, such as IOPS, bytes, or latency in milliseconds.


In one implementation, system metrics are data that quantifies the current load of the cluster. Various system load values can be calculated based upon the system metrics. The load values can be normalized measures of system load. For example, different load values can be compared to one another, even if the load values use different metrics in their calculations. As an example, system load can be expressed in a percentage based on the current load of the cluster. In one example, a cluster that is overloaded with processing requests may have a lower value than when the system is not overloaded. In another implementation, the target performance manager 402 receives calculated load values as input, rather than system and/or client metrics.


The target performance manager 402 can read the client QoS parameters, relevant system metrics, and relevant client metrics. These values can be used to determine the target performance value for client 108. The QoS parameters may also be dynamically adjusted during runtime by the administrator or the client as described above, such as when a higher level of performance is desired (e.g., the customer paid for a higher level of performance). The calculation of the target performance value is explained in greater detail below.


In one implementation, the target performance manager 402 outputs the target performance value to a proportional-integral-derivative (PID) controller block 404. PID controller block 404 may include a number of PID controllers for different performance metrics. Although PID controllers are described, other controllers may be used to control the performance of clients 108. In one example, PID controller block 404 includes PID controllers for IOPS, bandwidth, and latency. Target performance manager 402 outputs different target performance values for the performance metrics into the applicable PID controllers. The PID controllers also receive information about previous and/or current client performance and the target performance value. For example, the PID controllers can receive client metrics, system metrics, and/or load values that correspond with the target performance value. The PID controller can then determine a client performance adjustment value. For example, a PID controller is configured to take feedback of previous client performance and determine a value to cause a system to move toward the target performance value. For example, a PID can cause varied amounts of pressure to be applied, where pressure in this case causes client 108 to slow down, speed up or stay the same in performing IOPS. As an example, if the target performance value is 110 IOPS and client 108 has been operating at 90 IOPS, then the client performance adjustment value is output, which by being applied to the client 108 should increase the number of IOPS being performed.
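
A bare-bones sketch of the feedback loop follows. It uses the generic textbook PID form; the gains, units, and sign convention are placeholders, not values taken from the disclosure.

```python
class PidController:
    """Generic PID loop used here to steer measured IOPS toward a target."""

    def __init__(self, kp: float, ki: float, kd: float):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, target: float, measured: float, dt: float) -> float:
        # Positive output means "apply less pressure" (let the client speed up);
        # negative output means "apply more pressure" (slow the client down).
        error = target - measured
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

pid = PidController(kp=0.5, ki=0.1, kd=0.05)
adjustment = pid.update(target=110, measured=90, dt=0.5)   # client below target
print(adjustment)   # positive -> reduce lockout so the client performs more IOPS
```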


In one implementation, PID controller block 404 outputs a performance adjustment value. As an example, the performance adjustment value can be a pressure value that indicates an amount of time that the client is locked out of performing IO operations within the storage system. This lockout time will cause client performance to move toward the target performance value. For example, a time in milliseconds is output that is used to determine how long to lock a client 108 out of a volume. Locking a client out of performing IO operations artificially injects latency into the client's IO operations. In other implementations, the performance adjustment value can be a number of IO operations that the client can perform in a period of time. If the client attempts to do more IO operations, the client can be locked out of doing those IO operations until a subsequent period of time. Locking client 108 out of the volume for different times changes the number of IOPS performed by client 108. For example, locking client 108 out of the volume for shorter periods of time increases the number of IOPS that can be performed by client 108 during that period.


A performance controller 406 receives the performance adjustment value and outputs a client control signal to control the performance of client 108. For example, the amount of lockout may be calculated and applied every half second. In one implementation, clients 108 are locked out by closing and opening a command window, such as an Internet small computer system interface (iSCSI) command window. Closing the command window does not allow a client 108 to issue access requests to a volume and opening the command window allows a client 108 to issue access requests to the volume. Locking clients 108 out of a volume may adjust the number of IOPS, bandwidth, or latency for client 108. For example, if a client 108 is locked out of a volume every 50 milliseconds of every 500 milliseconds as compared to being locked out of the volume for 450 milliseconds of every 500 milliseconds, the client may issue more IOPS. For a bandwidth example, if bandwidth is constrained, then client 108 is locked out of a volume for a longer period of time to increase available bandwidth. In another implementation, the amount of data that is being serviced at a time is modified, either to zero or some number, to affect the performance at which the system services that client's IO.


As described above, IOPS are metrics that can be used to manage performance of a client. IOPS include both write IOPS and read IOPS. Individual input/output operations do not have a set size. That is, an input operation can be writing 64 k of data to a drive, while another input operation can be writing 4 k of data to the drive. Accordingly, capturing the raw number of input/output operations over a period of time does not necessarily capture how expensive the IO operation actually is. To account for this situation, an input/output operation can be normalized based upon the size of the I/O operation. This feature allows for consistent treatment of IOPS, regardless of each operation's data size. This normalization can be achieved using a performance curve. FIG. 5 depicts a performance curve 500 comparing the size of input/output operations with system load in accordance with an illustrative implementation. Line 504 indicates the system at full load, while line 502 indicates the load of the system for IO operations of differing sizes. The performance curve can be determined based upon empirical data of the system 100. The performance curve allows IOPS of different sizes to be compared and to normalize IOPS of different sizes. For example, an IOP of size 32 k is roughly five times more costly than a 4 k IOP. That is, the number of IOPS of size 32 k to achieve 100% load of a system is roughly 20% of the number of IOPS of size 4 k. This is because larger block sizes have a discount of doing IO and not having to process smaller blocks of data. In various implementations, this curve can be used as a factor in deciding a client's target performance value. For example, if the target performance value for a client is determined to be 1,000 IOPS, this number can be changed based upon the average size of IOs the client has done in the past. As an example, if a client's average IO size is 4 k, the client's target performance value can remain at 1,000 IOPS. However, if the client's average IO size is determined to be 32 k, the client's target performance value can be reduced to 200 IOPS, e.g., 1,000*0.2. The 200 IOPS of size 32 k is roughly equivalent to 1,000 IOPS of size 4 k.
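
The normalization can be sketched as a lookup of a size-dependent cost factor. The factors in the table below are made-up placeholders standing in for the empirically measured curve of FIG. 5; only the 32 k-costs-roughly-5x-a-4 k point comes from the text.

```python
# Hypothetical cost factors relative to a 4 KB IO, standing in for the
# empirically derived performance curve (a 32 KB IO costs roughly 5x a 4 KB IO).
COST_VS_4K = {4: 1.0, 8: 1.6, 16: 3.0, 32: 5.0, 64: 9.0}

def normalized_target_iops(target_iops_4k: float, avg_io_size_kb: int) -> float:
    # Scale the 4 KB-equivalent target down by the relative cost of the
    # client's typical IO size, so differently sized IOPS are comparable.
    return target_iops_4k / COST_VS_4K[avg_io_size_kb]

print(normalized_target_iops(1000, 4))    # 1000.0 -> unchanged for 4 KB IOs
print(normalized_target_iops(1000, 32))   # 200.0  -> 1000 * (1/5) for 32 KB IOs
```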


In determining a target performance value, the target performance manager 402 uses a client's QoS parameters to determine the target performance value for a client. In one implementation, an overload condition is detected and all clients are throttled in a consistent manner. For example, if the system load is determined to be at 20%, all clients may be throttled such that their target performance value is set to 90% of their maximum IOPS setting. If the system load increases to 50%, all clients can be throttled based upon setting their target performance value to 40% of their maximum IOPS setting.
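A minimal sketch of this uniform throttling follows; the breakpoints simply mirror the 20%/90% and 50%/40% example above and are not a prescribed policy.

    # Illustrative mapping from system load to the fraction of each client's
    # maximum IOPS that is allowed; the breakpoints are assumptions that mirror
    # the example in the text.
    def uniform_target(max_iops, system_load):
        if system_load >= 0.5:
            return int(max_iops * 0.40)
        if system_load >= 0.2:
            return int(max_iops * 0.90)
        return max_iops

    print(uniform_target(2_000, 0.20))  # 1800
    print(uniform_target(2_000, 0.50))  # 800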


Clients do not have to be throttled in a similar manner. For example, clients can belong to different classes of uses. In one implementation, classes of uses can be implemented simply by setting the QoS parameters of different clients differently. For example, a premium class of use could have higher QoS parameter values, e.g., min IOPS, max IOPS, and burst IOPS, compared to a normal class of use. In another implementation, the class of use can be taken into account when calculating the target performance value. For example, taking two different classes, one class could be throttled less than the other class. Using the example scenario above, clients belonging to the first class could be throttled to 80% of their maximum IOPS value when the system load reaches 20%. The second class of clients, however, may not be throttled at all or may be throttled by a different amount, such as to 95% of their maximum IOPS value.


In another implementation, the difference between a client's minimum IOPS and maximum IOPS can be used to determine how much to throttle a particular client. For example, a client with a large difference can be throttled more than a client whose difference is small. In one implementation, the difference between the client's maximum IOPS and minimum IOPS is used to calculate a factor that is applied to calculate the target performance value. In this implementation, the factor can be determined as the IOPS difference divided by some predetermined IOPS amount, such as 5,000 IOPS. In this example, a client whose difference between maximum IOPS and minimum IOPS was 10,000 would be throttled twice as much as a client whose IOPS difference was 5,000. Clients of the system can be billed different amounts based upon their class. Accordingly, clients could pay more to be throttled later and/or less than other classes of clients.
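The difference-based factor can be sketched as below, using the 5,000 IOPS divisor from the example; the helper name is illustrative.

    # Sketch of the difference-based factor: the wider the gap between a client's
    # max and min IOPS, the more it is throttled. The 5,000 IOPS divisor follows
    # the example in the text.
    def difference_factor(min_iops, max_iops, reference_iops=5_000):
        return (max_iops - min_iops) / reference_iops

    # A client with a 10,000 IOPS spread is throttled twice as hard as one with 5,000.
    print(difference_factor(1_000, 11_000))  # 2.0
    print(difference_factor(1_000, 6_000))   # 1.0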


In another implementation, throttling of clients can be based upon the client's use of the system. In this implementation, the target performance manager 402 can review system metrics to determine which metrics are currently overloaded. Next, the client metrics can be analyzed to determine if that client is contributing to an overloaded system value. For example, the target performance manager 402 can determine that the system is overloaded when the cluster's write latency is overloaded. The read/write IOPS ratio for a client can be used to determine if a particular client is having a greater impact on the overload condition. Continuing this example, a client whose read/write IOPS ratio indicated three times more writes than reads, and who was doing 1,500 writes, would be determined to be negatively impacting the performance of the cluster. Accordingly, the target performance manager 402 could significantly throttle this client. In one implementation, this feature can be done by calculating a factor based upon the read/write IOPS ratio. This factor could be applied when calculating the target performance value, such that the example client above would be throttled more than a client whose read/write IOPS ratio was high. In this example, a high read/write IOPS ratio indicates that the client is doing more reads than writes. The factor can also be based upon the number of IOPS that each client is doing. In addition, the number of IOPS for a particular client can be compared to the number of IOPS for the cluster, such that an indication of how heavily a particular client is using the cluster can be determined. Using this information, the target performance manager can calculate another factor that can be used to scale the target performance value based upon how much a client is using the system compared to all other clients.



FIG. 6 depicts a simplified flowchart of a method 600 of performing performance management that matches an overloaded system metric with a client metric in accordance with one illustrative implementation. Additional, fewer, or different operations of the method 600 may be performed, depending on the particular embodiment. The method 600 can be implemented on a computing device. In one implementation, the method 600 is encoded on a computer-readable medium that contains instructions that, when executed by a computing device, cause the computing device to perform operations of the method 600.


In an operation 602, client metrics can be determined. For example, a performance manager 114 can determine client metrics, as described above, for a preceding period of time, e.g., 100 ms, 1 s, 10 s, etc. In an operation 604, system metrics can be determined. For example, the performance manager 114 or another process can determine system metrics as described above. In one implementation, the client metrics and/or system metrics are used to calculate one or more load values. In an operation 606, the target performance manager 402 can then determine if the system is overloaded in some way based upon the various load values. For example, the target performance manager 402 can determine if a system is overloaded by comparing system load values with corresponding thresholds. Any load value above its corresponding threshold indicates an overload condition. In one implementation, the system load values are analyzed in a prioritized order and the first overloaded load value is used to determine how to throttle clients.
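The prioritized overload check of operation 606 might look like the following sketch; the metric names, thresholds, and their ordering are assumptions for illustration.

    # Minimal sketch of operation 606: load values are checked in a priority order
    # and the first one above its threshold drives the throttling decision.
    # The metric names and thresholds are illustrative assumptions.
    PRIORITIZED_THRESHOLDS = [
        ("write_latency_ms", 20.0),
        ("read_latency_ms", 10.0),
        ("total_iops_utilization", 0.85),
    ]

    def first_overloaded_metric(load_values):
        for name, threshold in PRIORITIZED_THRESHOLDS:
            if load_values.get(name, 0.0) > threshold:
                return name
        return None

    print(first_overloaded_metric({"write_latency_ms": 35.0, "read_latency_ms": 4.0}))
    # -> 'write_latency_ms'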


In an operation 608, one or more corresponding client metrics associated with the overloaded load value are determined. For example, if the overloaded system load is the number of read operations, the client's number of read operations can be used as the associated client metric. The client's metric does not have to be the same as the overloaded system metric. As another example, if the overloaded system load is read latency, the corresponding client metrics can be the ratio of read to write IO operations and the total number of read operations for a client. In an operation 610, a client-specific factor is determined based upon the client metric associated with the overloaded system load value. In the first example above, the factor can be the number of the client's read operations divided by the total number of read operations of the cluster. The client factor, therefore, would be relative to how much the client is contributing to the system load value. Clients that were doing a relatively larger number of reads would have a greater client metric compared with a client that was doing a relatively smaller number of reads.


In an operation 612, the client-specific factor is used to calculate the target performance value for the client. In one implementation, an initial target performance value can be calculated and then multiplied by the client-specific factor. In another implementation, a cluster reduction value is determined and this value is multiplied by the client-specific factor. Continuing the example above, the cluster reduction value can be the number of read IOPS that should be throttled. Compared to throttling each client equally based upon the cluster reduction value, using the client-specific factor results in the same number of read IOPS being throttled, but clients who have a large number of read IO operations are throttled more than clients who have a smaller number of read IO operations. Using client-specific factors helps the target performance manager 402 control the throttling of clients to help ensure that the throttling is effective. For example, if client-specific factors were not used and throttling was applied equally across all clients, a client whose use of the system was not contributing to the system's overloading would be unnecessarily throttled. Worse, the throttling of all of the clients might not be as effective, since throttling clients who did not need to be throttled would not help ease the overloading condition, which could result in even more throttling being applied to clients.
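A compact sketch of operations 610-612 follows, splitting a cluster-wide read-IOPS reduction among clients in proportion to their contribution; the client names and numbers are illustrative.

    # Sketch of operations 610-612: each client's share of the overloaded metric
    # becomes a factor, and a cluster-wide reduction is split according to it, so
    # heavier contributors are throttled more. Names are illustrative assumptions.
    def client_read_factors(client_read_iops):
        total = sum(client_read_iops.values()) or 1
        return {c: iops / total for c, iops in client_read_iops.items()}

    def per_client_reduction(cluster_reduction_iops, client_read_iops):
        factors = client_read_factors(client_read_iops)
        return {c: int(cluster_reduction_iops * f) for c, f in factors.items()}

    clients = {"client_a": 6_000, "client_b": 3_000, "client_c": 1_000}
    print(per_client_reduction(2_000, clients))
    # {'client_a': 1200, 'client_b': 600, 'client_c': 200}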


In an operation 614, the performance manager 114 can adjust the performance of client 108. For example, the client's use of the system can be throttled as described above.


Using the above system, clients 108 may be offered performance guarantees based on performance metrics, such as IOPS. For example, given that system 100 can process a total number of IOPS, the total number may be divided among different clients 108 in terms of a number of IOPS within the total amount. The IOPS are allocated using the min, max, and burst settings. If more IOPS are guaranteed than are possible in total, the administrator is notified that too many IOPS are being guaranteed and instructed to either add more performance capacity or change the IOPS guarantees. This notification may occur before a capacity threshold is reached (e.g., full capacity or a pre-defined threshold below full capacity). The notification can be sent before the capacity is reached because client performance is characterized in terms of IOPS and the administrator can be alerted that performance is overprovisioned by N number of IOPS. For example, clients 108 may be guaranteed to be operating between a minimum and maximum number of IOPS over time (with bursts above the maximum at certain times). Performance manager 114 can guarantee performance within these QoS parameters using the above system. Because load is evenly distributed, hot spots will not occur and system 100 may operate around the total amount of IOPS regularly. Thus, without hot spot problems and with system 100 being able to provide the total amount of IOPS regularly, performance may be guaranteed for clients 108 as the number of IOPS performed by clients 108 is adjusted within the total to make sure each client is operating within the QoS parameters for each given client 108. Since each client's effect on a global pool of performance is measured and predictable, the administrator can consider the entire cluster's performance as a pool of performance as opposed to individual nodes, each with its own performance limits. This feature allows the cluster to accurately characterize its performance and guarantee its ability to deliver performance among all of its volumes.


Accordingly, performance management is provided based on the distributed data architecture. Because data is evenly distributed across all drives in the cluster, the load of each individual volume is also equal across every single drive in storage system 100. This feature may remove hot spots and allow performance management to be accurate and fairly provisioned and to guarantee an entire cluster performance for individual volumes.


Independent Control of Write IOPS and Read IOPS


In some systems, read and write IOPS are combined and can be controlled at a system level. For example, a storage system can throttle the number of IOPS that are run within a given time period. These systems, therefore, assume that a read and a write IOP are similar in regard to the cost of performing each IOP. Generally, a write IOP is more expensive to complete compared to a read IOP. For example, a write IOP can include both reading and writing metadata, whereas a read IOP can avoid any metadata writes. Treating write IOPS separately from read IOPS at a system level allows a system to fine-tune system performance. In one example, an overloaded system can throttle writes to help reduce the overloading but allow reads to continue without throttling. This can allow users of the storage system who are doing mostly reads to see less slowdown compared to a system that would throttle read and write IOPS together.



FIG. 7A illustrates an example allocation of IOPS 700 to a client over a period of time t. In particular, FIG. 7A shows an allocation of total IOPS 702, read IOPS 704, and write IOPS 706 to a client over a time period t. The client, such as a client 108 shown in FIGS. 1A and 1B, is allocated 100 total IOPS 702. Of the allocated 100 IOPS, the client is allocated a maximum of 75 read IOPS 704 and a maximum of 50 write IOPS 706. That is, even though the client is allocated 100 total IOPS, the IOPS cannot exceed 50 write IOPS and cannot exceed 75 read IOPS. The client can request a combination of a number of write IOPS and a number of read IOPS that satisfies the allocation shown in FIG. 7A. It is understood that the number of total IOPS 702, the read IOPS 704, and the write IOPS 706 can have values different from the ones shown in FIG. 7A; however, the number of read IOPS 704 and the number of write IOPS 706 may not exceed the total number of IOPS 702. For example, in one or more embodiments, the number of read IOPS 704 can be equal to the total number of IOPS 702. As noted above, write IOPS are typically more expensive, in terms of system resources, than read IOPS. Reading and writing of metadata, write access times, etc., are some reasons why writes are more expensive compared to reads. Thus, the system may allow the user to request as many read IOPS as the total IOPS allocated, but limit the number of write IOPS to a number that is substantially smaller than the total IOPS.
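For illustration, the allocation of FIG. 7A could be represented by a small data structure such as the following sketch, which enforces only the constraint stated above (neither cap may exceed the total); the class and field names are assumptions.

    from dataclasses import dataclass

    @dataclass
    class IopsAllocation:
        """Hypothetical per-client allocation mirroring FIG. 7A: separate read and
        write caps underneath a shared total cap for one time period."""
        total: int
        read: int
        write: int

        def __post_init__(self):
            # Read and write caps may each equal the total, but neither may exceed it.
            if self.read > self.total or self.write > self.total:
                raise ValueError("read/write caps cannot exceed the total IOPS cap")

    alloc = IopsAllocation(total=100, read=75, write=50)
    print(alloc)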


The allocation of IOPS 700 is defined over a time period t. The time period t can represent the duration of time over which the QoS parameter for the client is defined. For example, without limitation, the time period t can be equal to about 10 ms to about 1 s.


In one or more embodiments, the allocation of total IOPS 702 shown in FIG. 7A can represent client maximum QoS parameters, such as the maximum IOPS discussed above in relation to FIG. 2. An administrator, for example, can use the user interface 200 shown in FIG. 2 to adjust the total IOPS by adjusting the maximum QoS value using sliders 202. In one or more embodiments, the user interface 200 can also display maximum read IOPS and maximum write IOPS values to the user, and provide the ability to change these values. For example, the user interface can include two additional columns, one each for maximum read IOPS and maximum write IOPS. The columns can include rows of IOPS corresponding to each of the various IO sizes, such as 4 KB, 8 KB, 16 KB, and 256 KB. The user interface 200 can additionally include sliders 202 or text boxes to allow setting and changing the values of the total IOPS, read IOPS, and write IOPS, in a manner similar to that discussed above in relation to altering the values of minimum IOPS, maximum IOPS, and burst IOPS in FIG. 2.


In one or more embodiments, the allocation of IOPS 700 can be determined by a performance manager, such as the performance manager 114 discussed above in relation to FIG. 1B. In particular, the performance manager 114 can determine the allocation of IOPS 700 based on the result of determining target performance values for the client 108. As discussed above, the performance manager 114 can determine target performance values associated with a client 108 based at least on a load value and client quality of service parameters. The load value can quantify the health of a cluster of storage drives 116 accessed by the clients 108. For example, the load value can refer to the total number of IOPS a cluster of storage drives 116 is receiving and/or processing from one or more clients 108. The client quality of service parameters can refer to the bandwidth allocated to the client 108. In one or more embodiments, the client quality of service parameters can refer to the number of IOPS per unit time promised to a client 108. In one or more embodiments, the performance manager 114 can compare the load value to the client's quality of service parameters, and adjust the target performance values, such as the total IOPS, read IOPS and the write IOPS, such that the load value associated with the cluster of storage drives 116 is maintained at acceptable levels.


In one or more embodiments, the allocation of IOPS 700 shown in FIG. 7A can change over time based on the system and client metrics received by the performance manager 114. For example, if the performance manager 114 determines that it has oversubscribed the available IOPS on the system, the performance manager 114 may dynamically alter the allocation of IOPS such that the available IOPS are appropriately distributed among clients 108. In one or more embodiments, each client 108 can have a different allocation of IOPS based on the quality of service agreement with that client.



FIG. 7B shows an example flow diagram of a process 730 for independent control of read IOPS and write IOPS associated with a client. In particular, the process 730 can be executed by the performance manager 114 discussed above in relation to FIG. 1A. In one or more embodiments, the target performance manager 402 shown in FIG. 4 can be used to execute the process 730 to generate a set of client target performance values. In some such embodiments, the target performance values generated by the target performance manager 402 can be directly fed to the performance controller 406 without being modified by the PID controller block 404. FIG. 7C shows example target IOPS 750 associated with a client over a time period t. In particular, the target IOPS 750 illustrate a result of executing the process 730 shown in FIG. 7B, including target total IOPS 752, target read IOPS 754, and target write IOPS 756. The process 730 shown in FIG. 7B is discussed in detail below in conjunction with the allocated IOPS 700 shown in FIG. 7A and the target IOPS 750 shown in FIG. 7C.


In some embodiments, after the client target performance values are calculated for a client, the process 730 includes, for a first time period, allocating write IOPS, read IOPS, and total IOPS for a client (operation 732). As discussed above in relation to FIG. 7A, the client can be allocated a given number of total IOPS, read IOPS and write IOPS. Specifically, FIG. 7A shows an example in which the client is allocated 100 total IOPS, 75 read IOPS, and 50 write IOPS.


The process 730 also includes receiving a write IOPS request from the client for the first time period (operation 734). As discussed above in relation to FIGS. 1A and 1B, a client 108 can request writing to client data stored in the storage 116. Typically, the client 108 will issue a write request to the metadata server 110, where the request can include the data to be written along with addressing information including, for example, the file name, the object name, or the block number associated with client data stored in the storage 116. The metadata server 110 can translate the client addressing information into block layer addressing that can be understood by the block server 112 associated with the storage 116. The metadata server 110 also can determine the number of IOPS needed to write the client data into the storage. The number of IOPS can be determined based on, for example, the size of the IOPS handled by the block server 112. In one or more embodiments, the number of IOPS for a given client data write in the storage 116 is an inverse function of the size of the IOPS handled by the block server 112. The example in FIG. 7C assumes that the number of requested write IOPS corresponding to the requested client data write is equal to 100 write IOPS. It is understood that the number of write IOPS can be different in other cases.
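A simple sketch of this conversion is shown below, assuming for illustration a fixed 4 KB block size at the block server; the helper is not the metadata server's actual logic.

    import math

    # Sketch of how a metadata server might convert a client write into a count of
    # block-layer write IOPS: roughly the data size divided by the block size
    # handled by the block server (an inverse function of the IO size). The 4 KB
    # block size is an assumption for illustration.
    def write_iops_needed(data_bytes, block_size_bytes=4_096):
        return math.ceil(data_bytes / block_size_bytes)

    print(write_iops_needed(400 * 1_024))  # 100 write IOPS for a 400 KB write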


The process 730 further includes determining the lesser of the requested write IOPS and the allocated write IOPS (operation 736). This operation compares the number of requested write IOPS to the number of allocated write IOPS. For example, as shown in FIG. 7C, the number of requested write IOPS (100) is greater than the number of allocated write IOPS (50). Thus, the allocated write IOPS is the lesser of the number of requested write IOPS and the number of allocated write IOPS. Further, the target write IOPS 756 is set to the lesser of the requested write IOPS and the allocated write IOPS, that is, the allocated write IOPS (50). Of course, if the requested write IOPS were less than the allocated write IOPS (for example, if the requested number of write IOPS were 40 and the allocated write IOPS were 50), then the number of requested write IOPS would be the lesser of the two. Therefore, the number of target write IOPS 756 would be set to the number of requested write IOPS (40). It is noted that while the number of requested write IOPS of 100 is equal to the total number of allocated IOPS of 100, the allocation of only 50 write IOPS precludes the client from receiving more than 50 write IOPS for that particular time period t.


The process 730 also includes subtracting the lesser of the requested write IOPS and the allocated write IOPS from the allocated total IOPS (operation 738). In other words, the number of target write IOPS are subtracted from the number of allocated total IOPS. As shown in FIG. 7C, the 50 target write IOPS 756 are subtracted from the 100 allocated total IOPS to result in the target total IOPS 752 of 50. The target total IOPS 752 is indicated by the shaded region between 100 and 50.


The process further includes determining a number of deferred write IOPS (operation 740). The number of deferred write IOPS is determined by finding the difference between the number of requested write IOPS and the number of target write IOPS. For example, referring to FIG. 7C, the number of target write IOPS is indicated by the first shaded region 756. Subtracting the number of requested write IOPS (100) from the target write IOPS (50) 756 results in a negative number: −50. The absolute value (50) of the resulting number (−50) indicates the number of deferred write IOPS 758. These deferred write IOPS indicate the number of write IOPS that are to be handled in a subsequent time period (operation 742). For example, the deferred write IOPS can be executed in a subsequent time period in a manner similar to that discussed above in relation to the requested write IOPS. Thus, if the number of allocated write IOPS in a subsequent time period is the same (50) as that shown in FIG. 7A, then the deferred write IOPS can be executed in that subsequent time period. The performance manager 114 may also take into account newly requested write IOPS during that subsequent time period to determine the target total and write IOPS.
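Operations 736 through 742 can be summarized in a short sketch like the one below, reproducing the FIG. 7C numbers; the function and variable names are illustrative.

    # Minimal sketch of operations 736-742 for one time period: clamp the requested
    # write IOPS to the write cap, charge them against the total cap, and carry any
    # excess forward as deferred write IOPS.
    def plan_write_iops(requested_writes, allocated_writes, allocated_total):
        target_writes = min(requested_writes, allocated_writes)   # operation 736
        remaining_total = allocated_total - target_writes         # operation 738
        deferred_writes = requested_writes - target_writes        # operation 740
        return target_writes, remaining_total, deferred_writes

    # The FIG. 7C example: 100 requested writes, 50 allocated writes, 100 total.
    print(plan_write_iops(100, 50, 100))  # (50, 50, 50)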


It is noted that if the number of requested write IOPS is less than the number of allocated write IOPS, then there will be no deferred write IOPS. For example, if the number of requested write IOPS were 40, instead of 100, then the number of target write IOPS would be equal to 40, which can be accommodated by the number (50) of allocated write IOPS.


The resulting number of target total IOPS (50) leaves an additional number (50) of allocated IOPS that can be utilized by any read IOPS requested by the client. Thus, if the client 108 requests 50 read IOPS during the time period t, because the requested 50 read IOPS can be accommodated by both the allocated read IOPS (75) and the remaining allocated total IOPS (50), the requested read IOPS can be executed. It is noted that in some embodiments, the performance manager 114 may be configured to execute the requested write IOPS even if the number of requested write IOPS is greater than the number of allocated write IOPS, as long as the number is less than or equal to the number of allocated total IOPS. For example, referring again to the above-discussed example of receiving 100 requested write IOPS, as this number is equal to the 100 allocated total IOPS, the performance manager 114 may set the target write IOPS to the requested write IOPS. In such instances, the number of allocated total IOPS would be completely exhausted for the time period t. As a result, any read IOPS requested by the client during the time period t would be denied by the performance manager 114. The process 730 discussed above allows the performance manager 114 to leave room for execution of read IOPS irrespective of the number of requested write IOPS.


The independent control of reads and writes can also be used when a storage system reaches a certain amount of free space. When the storage system reaches a critical level of free space, e.g., 5%, 10%, or 12% of free system space and/or drive space, the system can throttle all client writes by lowering the write IOPS for the system. Such action slows down the writes to the system that would otherwise further decrease the amount of free space. This allows data to be moved off of the cluster and/or drive. In one embodiment, when this condition occurs, a new cluster for one or more volumes is selected. The data from the one or more volumes is moved from the current cluster/drive to the new cluster. To help facilitate this, the read IOPS of the original system can be set high to ensure that reads of the data are allowed, and the write IOPS of the new cluster can be set high at both the system level and the client level to ensure that the writes to the new cluster are not throttled. Once the data is moved to the new system, the data can be deleted from the original system to free up space. In some embodiments, deletion of data can be either a write operation or considered a metadata-only operation.
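A hedged sketch of free-space-driven write throttling follows; the 10% critical threshold and the scale factor are assumptions, since only example percentages are given above.

    # Sketch of free-space-driven throttling: once free space drops below a
    # critical fraction, the system-level write IOPS cap is cut so data can be
    # migrated off the cluster. Thresholds and scale factors are assumptions.
    def adjust_system_write_cap(current_write_cap, free_space_fraction,
                                critical_fraction=0.10, throttle_scale=0.25):
        if free_space_fraction <= critical_fraction:
            return int(current_write_cap * throttle_scale)
        return current_write_cap

    print(adjust_system_write_cap(50_000, free_space_fraction=0.08))  # 12500
    print(adjust_system_write_cap(50_000, free_space_fraction=0.40))  # 50000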


In various embodiments, metadata IO can be controlled independently of data IO. In one or more embodiments, the process 730 discussed above can be carried out in conjunction with the system level performance management, as discussed above in relation to the method 600 shown in FIG. 6. In particular, the performance manager 114 can throttle the allocation of the total number of IOPS based on the current performance of the system, but allocate the read IOPS 704 and write IOPS 706 independently of the allocation of the total number of IOPS. In this manner, the performance manager 114 can throttle the allocation of the total number of IOPS 702 based on the system performance, as discussed in the method shown in FIG. 6, and in addition determine the target write IOPS 756 and deferred IOPS 758 based on the process 730 shown in FIG. 7B. In some other embodiments, the total IOPS, the read IOPS, and the write IOPS may all be throttled independently of each other based on the current performance of the system.


While the allocation of IOPS 700 shown in FIG. 7A is associated with the data associated with the client, the performance manager 114 also can maintain an allocation of IOPS for metadata access IO, including both reads and writes, or separate allocations for metadata reads and metadata writes. As discussed above in relation to FIGS. 1A and 1B, the metadata layer 104 can include metadata servers 110 or in some embodiments can include volume servers 114. The metadata servers 110 and the volume servers 114 translate the addressing scheme used by the client 108 into the corresponding block server layer addressing scheme associated with the block server layer 106. The metadata servers 110 and the volume servers 114 can maintain mapping information that maps the client addressing scheme to the block layer addressing scheme. The mapping information is typically stored as metadata in the metadata servers 110 or the volume servers 114. In one or more embodiments, the metadata can be stored in centralized storage, which can be accessed by one or more metadata servers 110 or volume servers 114 within the metadata layer 104. Accessing the metadata can include reading and writing. In one or more embodiments, the size of the metadata, and the number of IOPS associated with the read/write transactions between the metadata server 110 and metadata storage, can be very large. In one or more such embodiments, the performance of the system can be impacted not only by the read and write IOPS associated with client data, but also by the read and write IOPS associated with the metadata.


In one or more embodiments, total, read, and write IOPS can be allocated to the metadata servers 110 or volume servers 114 for accessing metadata in a manner similar to that discussed above in relation to accessing client data. For example, a metadata performance manager, similar to the performance manager 114, can be used to manage system performance based on the metadata IOPS. The metadata performance manager can allocate total IOPS, read IOPS, and write IOPS for a metadata server 110 or a volume server 114. The metadata performance manager can receive requested read and write IOPS from the metadata server 110 or the volume server 114, just as the performance manager 114 receives read and write IOPS requests from clients. In a manner similar to the process 730 discussed above in relation to the performance manager 114, the metadata performance manager also can determine target read IOPS, target write IOPS, and target total IOPS associated with metadata access. For example, the metadata performance manager can prevent the number of target metadata write IOPS from exceeding the total metadata IOPS within a time period, such that there is at least some room for executing any received metadata read IOPS during the same time period. The metadata performance manager can further determine deferred metadata write IOPS in cases where the requested metadata write IOPS exceed the allocated metadata write IOPS, in a manner similar to that discussed above in relation to client data write IOPS in FIGS. 7A-7C.


In one or more embodiments, the metadata performance manager can dynamically change the allocated total, read, and write IOPS based on the system performance. In one or more embodiments, the metadata performance manager can receive system metrics as well as metadata metrics (such as, for example, metadata read/write IOPS, metadata storage capacity, or bandwidth) as inputs to determine the current load of the system and, based on the determined current load, reallocate the total, read, and write IOPS. As one example, metadata access can be increased during certain system level operations, such as moving one or more volumes to another cluster or storage device.


Load Value Calculations


Load values can be used to determine if a client should be throttled to help ensure QoS among all clients. Various load values can be calculated based upon one or more system metrics and/or client metrics. As an example, a load value can be calculated that corresponds to a client's data read latency. When calculating a load value that corresponds with a client, how the client's data is managed on the storage system becomes important.
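As one illustrative (and assumed) way to express such a load value, the sketch below compares an observed read latency against a latency budget so that values above 1.0 signal overload; the budget is not specified by the description above.

    # Illustrative load value: a client read-latency load expressed as the ratio of
    # observed latency to a latency budget, so values above 1.0 indicate overload.
    # The budget is an assumption, not a prescribed formula.
    def read_latency_load(observed_latency_ms, latency_budget_ms=10.0):
        return observed_latency_ms / latency_budget_ms

    print(read_latency_load(4.0))   # 0.4 -> healthy
    print(read_latency_load(25.0))  # 2.5 -> overloaded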



FIG. 9 shows a portion of a storage system in accordance with one illustrative implementation. In the specific example embodiment of FIG. 9, the storage system is shown to include a cluster 910 of nodes (912, 914, 916, and 918). According to different embodiments, each node may include one or more storage devices such as, for example, one or more solid state drives (SSDs). In the example embodiment of FIG. 9, it is assumed for purposes of illustration that three different clients (e.g., Client A 902, Client B 904, and Client C 906) are each actively engaged in the reading/writing of data from/to storage cluster 910.


Additionally, as illustrated in the example embodiment of FIG. 9, each node may have associated therewith one or more services (e.g., Services A-H), wherein each service may be configured or designed to handle a particular set of functions and/or tasks. For example, as illustrated in the example embodiment of FIG. 9: Services A and B may be associated with (and/or may be handled by) Node 1 (912); Services C and D may be associated with (and/or may be handled by) Node 2 (914); Service E may be associated with (and/or may be handled by) Node 3 (916); Services F, G, H may be associated with (and/or may be handled by) Node 4 (918). In at least one embodiment, one or more of the services may be configured or designed to implement a slice server. A slice server can also be described as providing slice service functionality.


Additionally, according to different embodiments, a given service may have associated therewith at least one primary role and further may have associated therewith one or more secondary roles. For example, in the example embodiment of FIG. 9, it is assumed that Service A has been configured or designed to include at least the following functionality: (1) a primary role of Service A functions as the primary slice service for Client A, and (2) a secondary role of Service A handles the data/metadata replication tasks (e.g., slice service replication tasks) relating to Client A, which, in this example involves replicating Client A's write requests (and/or other slice-related metadata for Client A) to Service C. Thus, for example, in one embodiment, write requests initiated from Client A may be received at Service A 902a, and in response, Service A may perform and/or initiate one or more of the following operations (or combinations thereof):

    • process the write request at Service A's slice server, which, for example, may include generating and storing related metadata at Service A's slice server;
    • (if needed) cause the data (of the write request) to be saved in a first location of block storage (e.g., managed by Service A);
    • forward (902b) the write request (and/or associated data/metadata) to Service C for replication.


In at least one embodiment, when Service C receives a copy of the Client A write request, it may respond by processing the write request at Service C's slice server, and (if needed) causing the data (of the write request) to be saved in a second location of block storage (e.g., managed by Service C) for replication or redundancy purposes. In at least one embodiment, the first and second locations of block storage may each reside at different physical nodes. Similarly Service A's slice server and Service C's slice server may each be implemented at different physical nodes.


Accordingly, in the example embodiment of FIG. 9, the processing of a Client A write request may involve two distinct block storage write operations: one initiated by Service A (the primary Service) and another initiated by Service C (the redundant Service). On the other hand, the processing of a Client A read request may be handled by Service A alone (e.g., under normal conditions), since Service A is able to handle the read request without necessarily involving Service C.
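To illustrate why replication matters when attributing load, the sketch below counts the block-storage writes generated by a client write under the FIG. 9 topology (one replica for Client A, two for Client C); the helper itself is an assumption, not the system's load formula.

    # Sketch of why client write load must account for replication: with single
    # replication (FIG. 9, Client A), every client write becomes two block-storage
    # writes; with double replication (Client C), three. The replica counts come
    # from the example topology; the helper is an illustration.
    def block_write_load(client_write_iops, replica_count):
        # Primary write plus one block-storage write per replica.
        return client_write_iops * (1 + replica_count)

    print(block_write_load(1_000, replica_count=1))  # Client A style: 2000
    print(block_write_load(1_000, replica_count=2))  # Client C style: 3000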


For purposes of illustration, in the example embodiment of FIG. 9, it is also assumed that Service E has been configured or designed to include at least the following functionality: (1) a primary role of Service E functions as the primary slice service for Client B, and (2) a secondary role of Service E handles the data and/or metadata replication tasks (e.g., slice service replication tasks) relating to Client B, which, in this example involves replicating Client B's write requests (and/or other Slice-related metadata for Client B) to Service D. Thus, for example, in one embodiment, write requests initiated from Client B may be received at Service E 904a, and in response, Service E may perform and/or initiate one or more of the following operations (or combinations thereof):

    • process the write request at Service E's slice server, which, for example, may include generating and storing related metadata at Service E's slice server;
    • (if needed) cause the data (of the write request) to be saved in a first location of block storage (e.g., managed by Service E);
    • forward (904b) the write request (and/or associated data/metadata) to Service D for replication.


In at least one embodiment, when Service D receives a copy of the Client B write request, it may respond by processing the write request at Service D's slice server, and (if needed) causing the data (of the write request) to be saved in a second location of block storage (e.g., managed by Service D) for replication or redundancy purposes. In at least one embodiment, the first and second locations of block storage may each reside at different physical nodes. Similarly Service E's slice server and Service D's slice server may each be implemented at different physical nodes.


According to different embodiments, it is also possible to implement multiple replication (e.g., where the data/metadata is replicated at two or more other locations within the storage system/cluster). For example, as illustrated in the example embodiment of FIG. 9, it is assumed that Service E has been configured or designed to include at least the following functionality: (1) a primary role of Service E functions as the primary slice service for Client C, (2) a secondary role of Service E handles the data and/or metadata replication tasks (e.g., slice service replication tasks) relating to Client C, which, in this example involves replicating Client C's write requests (and/or other Slice-related metadata for Client C) to Service C; and (3) a secondary role of Service E handles the data and/or metadata replication tasks (e.g., slice service replication tasks) relating to Client C, which, in this example involves replicating Client C's write requests (and/or other Slice-related metadata for Client C) to Service G. Thus, for example, in one embodiment, write requests initiated from Client C may be received at Service E 906a, and in response, Service E may perform and/or initiate one or more of the following operations (or combinations thereof):

    • process the write request at Service E's slice server, which, for example, may include generating and storing related metadata at Service E's slice server;
    • (if needed) cause the data (of the write request) to be saved in a first location of block storage (e.g., managed by Service E);
    • forward (906b) the write request (and/or associated data/metadata) to Service C for replication;
    • forward (906c) the write request (and/or associated data/metadata) to Service G for replication.


In at least one embodiment, when Service C receives a copy of the Client C write request, it may respond by processing the write request at Service C's slice server, and (if needed) causing the data (of the write request) to be saved in a second location of block storage (e.g., managed by Service C) for replication or redundancy purposes. Similarly, In at least one embodiment, when Service G receives a copy of the Client C write request, it may respond by processing the write request at Service G's slice server, and (if needed) causing the data (of the write request) to be saved in a third location of block storage (e.g., managed by Service G) for replication or redundancy purposes.


One or more flow diagrams have been used herein. The use of flow diagrams is not meant to be limiting with respect to the order of operations performed. The herein-described subject matter sometimes illustrates different components contained within, or connected with, different other components. It is to be understood that such depicted architectures are merely exemplary, and that in fact many other architectures can be implemented which achieve the same functionality. In a conceptual sense, any arrangement of components to achieve the same functionality is effectively “associated” such that the desired functionality is achieved. Hence, any two components herein combined to achieve a particular functionality can be seen as “associated with” each other such that the desired functionality is achieved, irrespective of architectures or intermedial components. Likewise, any two components so associated can also be viewed as being “operably connected,” or “operably coupled,” to each other to achieve the desired functionality, and any two components capable of being so associated can also be viewed as being “operably couplable” to each other to achieve the desired functionality. Specific examples of operably couplable include but are not limited to physically mateable and/or physically interacting components and/or wirelessly interactable and/or wirelessly interacting components and/or logically interacting and/or logically interactable components.


With respect to the use of substantially any plural and/or singular terms herein, those having skill in the art can translate from the plural to the singular and/or from the singular to the plural as is appropriate to the context and/or application. The various singular/plural permutations may be expressly set forth herein for sake of clarity.


It will be understood by those within the art that, in general, terms used herein, and especially in the appended claims (e.g., bodies of the appended claims) are generally intended as “open” terms (e.g., the term “including” should be interpreted as “including but not limited to;” the term “having” should be interpreted as “having at least,” the term “includes” should be interpreted as “includes but is not limited to,” etc.). It will be further understood by those within the art that if a specific number of an introduced claim recitation is intended, such an intent will be explicitly recited in the claim, and in the absence of such recitation no such intent is present. For example, as an aid to understanding, the following appended claims may contain usage of the introductory phrases “at least one” and “one or more” to introduce claim recitations. However, the use of such phrases should not be construed to imply that the introduction of a claim recitation by the indefinite articles “a” or “an” limits any particular claim containing such introduced claim recitation to inventions containing only one such recitation, even when the same claim includes the introductory phrases “one or more” or “at least one” and indefinite articles such as “a” or “an” (e.g., “a” and/or “an” should typically be interpreted to mean “at least one” or “one or more”); the same holds true for the use of definite articles used to introduce claim recitations. In addition, even if a specific number of an introduced claim recitation is explicitly recited, those skilled in the art will recognize that such recitation should typically be interpreted to mean at least the recited number (e.g., the bare recitation of “two recitations,” without other modifiers, typically means at least two recitations, or two or more recitations). Furthermore, in those instances where a convention analogous to “at least one of A, B and C, etc.” is used, in general such a construction is intended in the sense one having skill in the art would understand the convention (e.g., “a system having at least one of A, B, and C” would include but not be limited to systems that have A alone, B alone, C alone, A and B together, A and C together, B and C together, and/or A, B, and C together, etc.). In those instances where a convention analogous to “at least one of A, B, or C, etc.” is used, in general such a construction is intended in the sense one having skill in the art would understand the convention (e.g., “a system having at least one of A, B, or C” would include but not be limited to systems that have A alone, B alone, C alone, A and B together, A and C together, B and C together, and/or A, B, and C together, etc.). It will be further understood by those within the art that virtually any disjunctive word and/or phrase presenting two or more alternative terms, whether in the description, claims, or drawings, should be understood to contemplate the possibilities of including one of the terms, either of the terms, or both terms. For example, the phrase “A or B” will be understood to include the possibilities of “A” or “B” or “A and B.”


The foregoing description of illustrative implementations has been presented for purposes of illustration and of description. It is not intended to be exhaustive or limiting with respect to the precise form disclosed, and modifications and variations are possible in light of the above teachings or may be acquired from practice of the disclosed implementations. It is intended that the scope of the invention be defined by the claims appended hereto and their equivalents.

Claims
  • 1. A method comprising: receiving, by a storage system, a request from a client to write data to the storage system; estimating, by the storage system, based on the request, a requested write quality of service (QoS) parameter for the client for storing the data by the storage system during a first time period; determining, by the storage system, a target write QoS parameter for the client based on a system metric associated with the storage system reflecting usage of the storage system, the estimated requested write QoS parameter and an allocated write QoS parameter for the client; and independently regulating, by the storage system, read performance and write performance of the client using a controller, to adjust the write performance toward the determined target write QoS parameter within the first time period based on feedback regarding a performance metric indicative of a current write performance by the client.
  • 2. The method of claim 1, wherein said independently regulating does not increase a latency of processing a request from the client to read data from the storage system during the first time period.
  • 3. The method of claim 1, wherein the requested write QoS parameter, the target write QoS parameter, and the allocated write QoS parameter comprise values expressed in terms of input/output operations per second (IOPS), bandwidth, or latency.
  • 4. The method of claim 3, wherein the read performance and the write performance comprise values expressed in terms of IOPS, bandwidth, or latency.
  • 5. The method of claim 1, wherein regulation of the write performance of the client is implemented by locking out access by the client to one or more volumes associated with the storage system.
  • 6. The method of claim 1, wherein the system metric comprises space utilization and the method further comprises determining the allocated write QoS parameter for the client based at least in part on the space utilization.
  • 7. A storage system comprising: a processor; and a non-transitory computer-readable media, coupled to the processor, storing instructions that when executed by the processor cause the storage system to: receive a request from a client to write data to the storage system; estimate, based on the request, a requested write quality of service (QoS) parameter for the client for storing the data by the storage system during a first time period; determine a target write QoS parameter for the client based on a system metric associated with the storage system reflecting usage of the storage system, the estimated requested write QoS parameter and an allocated write QoS parameter for the client; and independently regulate read performance and write performance of the client using a controller to adjust the write performance toward the determined target write QoS parameter within the first time period based on feedback regarding a performance metric indicative of a current write performance by the client.
  • 8. The storage system of claim 7, wherein independent regulation of the read performance and the write performance of the client does not increase a latency of processing a request from the client to read data from the storage system during the first time period.
  • 9. The storage system of claim 7, wherein the requested write QoS parameter, the target write QoS parameter, and the allocated write QoS parameter comprise values expressed in terms of input/output operations per second (IOPS), bandwidth, or latency.
  • 10. The storage system of claim 9, wherein the read performance and the write performance comprise values expressed in terms of IOPS, bandwidth, or latency.
  • 11. The storage system of claim 7, wherein regulation of the write performance of the client is implemented by locking out access by the client to one or more volumes associated with the storage system.
  • 12. The storage system of claim 7, wherein the system metric comprises space utilization and wherein the instructions further cause the storage system to determine the allocated write QoS parameter for the client based at least in part on the space utilization.
  • 13. The storage system of claim 7, wherein the controller comprises a proportional-integral-derivative controller.
  • 14. A non-transitory computer readable medium containing executable program instructions that when executed by a processor of a storage system cause the storage system to: receive a request from a client to write data to the storage system; estimate, based on the request, a requested write quality of service (QoS) parameter for the client for storing the data by the storage system during a first time period; determine a target write QoS parameter for the client based on a system metric associated with the storage system reflecting usage of the storage system, the estimated requested write QoS parameter and an allocated write QoS parameter for the client; and independently regulate read performance and write performance of the client using a controller to adjust the write performance toward the determined target write QoS parameter within the first time period based on feedback regarding a performance metric indicative of a current write performance by the client.
  • 15. The non-transitory computer readable medium of claim 14, wherein independent regulation of the read performance and the write performance of the client does not increase a latency of processing a request from the client to read data from the storage system during the first time period.
  • 16. The non-transitory computer readable medium of claim 14, wherein the requested write QoS parameter, the target write QoS parameter, and the allocated write QoS parameter comprise values expressed in terms of input/output operations per second (IOPS), bandwidth, or latency.
  • 17. The non-transitory computer readable medium of claim 16, wherein the read performance and the write performance comprise values expressed in terms of IOPS, bandwidth, or latency.
  • 18. The non-transitory computer readable medium of claim 14, wherein regulation of the write performance of the client is implemented by locking out access by the client to one or more volumes associated with the storage system.
  • 19. The non-transitory computer readable medium of claim 14, wherein the system metric comprises space utilization and wherein the instructions further cause the storage system to determine the allocated write QoS parameter for the client based at least in part on the space utilization.
  • 20. The non-transitory computer readable medium of claim 14, wherein the controller comprises a proportional-integral-derivative controller for each of IOPS, bandwidth, and latency.
CROSS-REFERENCE TO RELATED PATENTS

This application is a continuation of U.S. patent application Ser. No. 17/203,094, filed on Mar. 16, 2021, which is a continuation of U.S. patent application Ser. No. 16/867,418, filed on May 5, 2020, now U.S. Pat. No. 10,997,098, which is a continuation of U.S. patent application Ser. No. 15/270,973, filed on Sep. 20, 2016, now U.S. Pat. No. 10,642,763, all of which are hereby incorporated by reference in their entirety for all purposes.

9286413 Coates et al. Mar 2016 B1
9298417 Muddu et al. Mar 2016 B1
9323701 Shankar Apr 2016 B2
9342444 Minckler et al. May 2016 B2
9348514 Fornander et al. May 2016 B2
9367241 Sundaram et al. Jun 2016 B2
9372789 Minckler et al. Jun 2016 B2
9377953 Fornander et al. Jun 2016 B2
9378043 Zhang et al. Jun 2016 B1
9383933 Wright Jul 2016 B2
9389958 Sundaram et al. Jul 2016 B2
9400609 Randall et al. Jul 2016 B1
9405473 Zheng et al. Aug 2016 B2
9405783 Kimmel et al. Aug 2016 B2
9411620 Wang et al. Aug 2016 B2
9413680 Kusters et al. Aug 2016 B1
9418131 Halevi et al. Aug 2016 B1
9423964 Randall et al. Aug 2016 B1
9438665 Vasanth et al. Sep 2016 B1
9454434 Sundaram et al. Sep 2016 B2
9459856 Curzi et al. Oct 2016 B2
9460009 Taylor et al. Oct 2016 B1
9471248 Zheng et al. Oct 2016 B2
9471680 Elsner et al. Oct 2016 B2
9483349 Sundaram et al. Nov 2016 B2
9507537 Wright Nov 2016 B2
9529546 Sundaram et al. Dec 2016 B2
9537827 McMullen et al. Jan 2017 B1
9563654 Zheng et al. Feb 2017 B2
9572091 Lee et al. Feb 2017 B2
9606874 Moore et al. Mar 2017 B2
9613046 Xu et al. Apr 2017 B1
9619160 Patel et al. Apr 2017 B2
9619351 Sundaram et al. Apr 2017 B2
9639278 Kimmel et al. May 2017 B2
9639293 Guo et al. May 2017 B2
9639546 Gorski et al. May 2017 B1
9652405 Shain et al. May 2017 B1
9671960 Patel et al. Jun 2017 B2
9690703 Jess et al. Jun 2017 B1
9710317 Gupta et al. Jul 2017 B2
9720601 Gupta et al. Aug 2017 B2
9720822 Kimmel Aug 2017 B2
9762460 Pawlowski et al. Sep 2017 B2
9779123 Sen et al. Oct 2017 B2
9785525 Watanabe et al. Oct 2017 B2
9798497 Schick et al. Oct 2017 B1
9798728 Zheng Oct 2017 B2
9817858 Eisenreich et al. Nov 2017 B2
9823857 Pendharkar Nov 2017 B1
9830103 Pundir et al. Nov 2017 B2
9836355 Pundir et al. Dec 2017 B2
9836366 Schatz et al. Dec 2017 B2
9842008 Kimmel et al. Dec 2017 B2
9846642 Choi et al. Dec 2017 B2
9852076 Garg et al. Dec 2017 B1
9934264 Swaminathan et al. Apr 2018 B2
9952767 Zheng et al. Apr 2018 B2
9953351 Sivasubramanian et al. Apr 2018 B1
9954946 Shetty Apr 2018 B2
10013311 Sundaram et al. Jul 2018 B2
10037146 Wright Jul 2018 B2
10042853 Sundaram et al. Aug 2018 B2
10049118 Patel et al. Aug 2018 B2
10108547 Pundir et al. Oct 2018 B2
10133511 Muth et al. Nov 2018 B2
10162686 Kimmel et al. Dec 2018 B2
10185681 Kimmel Jan 2019 B2
10210082 Patel et al. Feb 2019 B2
10216966 McClanahan et al. Feb 2019 B2
10229009 Muth et al. Mar 2019 B2
10235059 Patel et al. Mar 2019 B2
10339132 Zaveri et al. Jul 2019 B2
10360120 Watanabe et al. Jul 2019 B2
10365838 D'Sa et al. Jul 2019 B2
10439900 Wright et al. Oct 2019 B2
10452608 Cantwell et al. Oct 2019 B2
10460124 Wright et al. Oct 2019 B2
10516582 Wright et al. Dec 2019 B2
10530880 McMullen et al. Jan 2020 B2
10565230 Zheng et al. Feb 2020 B2
10642763 Longo et al. May 2020 B2
10664366 Schatz et al. May 2020 B2
10712944 Wright Jul 2020 B2
10762070 Swaminathan et al. Sep 2020 B2
10789134 Zheng et al. Sep 2020 B2
10911328 Wright et al. Feb 2021 B2
10929022 Goel et al. Feb 2021 B2
10951488 Wright et al. Mar 2021 B2
10997098 Longo et al. May 2021 B2
11212196 Wright et al. Dec 2021 B2
11327910 Longo et al. May 2022 B2
11379119 Wright Jul 2022 B2
11386120 Cantwell et al. Jul 2022 B2
20010056543 Isomura et al. Dec 2001 A1
20020073354 Schroiff et al. Jun 2002 A1
20020091897 Chiu et al. Jul 2002 A1
20020116569 Kim et al. Aug 2002 A1
20020156891 Ulrich et al. Oct 2002 A1
20020158898 Hsieh et al. Oct 2002 A1
20020174419 Alvarez et al. Nov 2002 A1
20020175938 Hackworth Nov 2002 A1
20020188711 Meyer et al. Dec 2002 A1
20030005147 Enns et al. Jan 2003 A1
20030084251 Gaither et al. May 2003 A1
20030105928 Ash et al. Jun 2003 A1
20030115204 Greenblatt et al. Jun 2003 A1
20030115282 Rose Jun 2003 A1
20030120869 Lee et al. Jun 2003 A1
20030126118 Burton et al. Jul 2003 A1
20030126143 Roussopoulos et al. Jul 2003 A1
20030135729 Mason et al. Jul 2003 A1
20030145041 Dunham et al. Jul 2003 A1
20030159007 Sawdon et al. Aug 2003 A1
20030163628 Lin et al. Aug 2003 A1
20030172059 Andrei Sep 2003 A1
20030182312 Chen et al. Sep 2003 A1
20030182322 Manley et al. Sep 2003 A1
20030191916 McBrearty et al. Oct 2003 A1
20030195895 Nowicki et al. Oct 2003 A1
20030200388 Hetrick Oct 2003 A1
20030212872 Patterson et al. Nov 2003 A1
20030223445 Lodha Dec 2003 A1
20040003173 Yao et al. Jan 2004 A1
20040052254 Hooper Mar 2004 A1
20040054656 Leung et al. Mar 2004 A1
20040107281 Bose et al. Jun 2004 A1
20040133590 Henderson et al. Jul 2004 A1
20040133622 Clubb et al. Jul 2004 A1
20040133742 Vasudevan et al. Jul 2004 A1
20040153544 Kelliher et al. Aug 2004 A1
20040153863 Klotz et al. Aug 2004 A1
20040158549 Matena et al. Aug 2004 A1
20040186858 McGovern et al. Sep 2004 A1
20040205166 Demoney Oct 2004 A1
20040210794 Frey et al. Oct 2004 A1
20040215792 Koning et al. Oct 2004 A1
20040267836 Armangau et al. Dec 2004 A1
20040267932 Voellm et al. Dec 2004 A1
20050010653 McCanne Jan 2005 A1
20050027817 Novik et al. Feb 2005 A1
20050039156 Catthoor et al. Feb 2005 A1
20050043834 Rotariu et al. Feb 2005 A1
20050044244 Warwick et al. Feb 2005 A1
20050076113 Klotz et al. Apr 2005 A1
20050076115 Andrews et al. Apr 2005 A1
20050080923 Elzur Apr 2005 A1
20050091261 Wu et al. Apr 2005 A1
20050108472 Kanai et al. May 2005 A1
20050119996 Ohata et al. Jun 2005 A1
20050128951 Chawla et al. Jun 2005 A1
20050138285 Takaoka et al. Jun 2005 A1
20050144514 Ulrich et al. Jun 2005 A1
20050177770 Coatney et al. Aug 2005 A1
20050203930 Bukowski et al. Sep 2005 A1
20050216503 Charlot et al. Sep 2005 A1
20050228885 Winfield et al. Oct 2005 A1
20050246362 Borland Nov 2005 A1
20050246398 Barzilai et al. Nov 2005 A1
20060004957 Hand et al. Jan 2006 A1
20060071845 Stroili et al. Apr 2006 A1
20060072555 St. Hilaire et al. Apr 2006 A1
20060072593 Grippo et al. Apr 2006 A1
20060074977 Kothuri et al. Apr 2006 A1
20060075467 Sanda et al. Apr 2006 A1
20060085166 Ochi et al. Apr 2006 A1
20060101091 Carbajales et al. May 2006 A1
20060101202 Mannen et al. May 2006 A1
20060112155 Earl et al. May 2006 A1
20060129676 Modi et al. Jun 2006 A1
20060136718 Moreillon Jun 2006 A1
20060156059 Kitamura Jul 2006 A1
20060165074 Modi et al. Jul 2006 A1
20060206671 Aiello et al. Sep 2006 A1
20060232826 Bar-El Oct 2006 A1
20060253749 Alderegula et al. Nov 2006 A1
20060282662 Whitcomb Dec 2006 A1
20060288151 McKenney Dec 2006 A1
20070016617 Lomet Jan 2007 A1
20070033376 Sinclair et al. Feb 2007 A1
20070033433 Pecone et al. Feb 2007 A1
20070061572 Imai et al. Mar 2007 A1
20070064604 Chen et al. Mar 2007 A1
20070083482 Rathi et al. Apr 2007 A1
20070083722 Per et al. Apr 2007 A1
20070088702 Fridella et al. Apr 2007 A1
20070094452 Fachan Apr 2007 A1
20070106706 Ahrens et al. May 2007 A1
20070109592 Parvathaneni et al. May 2007 A1
20070112723 Alvarez et al. May 2007 A1
20070112955 Clemm et al. May 2007 A1
20070136269 Yamakabe et al. Jun 2007 A1
20070143359 Uppala et al. Jun 2007 A1
20070186066 Desai et al. Aug 2007 A1
20070186127 Desai et al. Aug 2007 A1
20070208537 Savoor et al. Sep 2007 A1
20070208918 Harbin et al. Sep 2007 A1
20070234106 Lecrone et al. Oct 2007 A1
20070245041 Hua et al. Oct 2007 A1
20070255530 Wolff Nov 2007 A1
20070266037 Terry et al. Nov 2007 A1
20070300013 Kitamura Dec 2007 A1
20080019359 Droux et al. Jan 2008 A1
20080065639 Choudhary et al. Mar 2008 A1
20080071939 Tanaka et al. Mar 2008 A1
20080104264 Duerk et al. May 2008 A1
20080126695 Berg May 2008 A1
20080127211 Belsey et al. May 2008 A1
20080155190 Ash et al. Jun 2008 A1
20080162079 Astigarraga et al. Jul 2008 A1
20080162990 Wang et al. Jul 2008 A1
20080165899 Rahman et al. Jul 2008 A1
20080168226 Wang et al. Jul 2008 A1
20080184063 Abdulvahid Jul 2008 A1
20080201535 Hara Aug 2008 A1
20080212938 Sato et al. Sep 2008 A1
20080228691 Shavit et al. Sep 2008 A1
20080244158 Funatsu et al. Oct 2008 A1
20080244354 Wu et al. Oct 2008 A1
20080250270 Bennett Oct 2008 A1
20080270719 Cochran et al. Oct 2008 A1
20090019449 Choi et al. Jan 2009 A1
20090031083 Willis et al. Jan 2009 A1
20090037500 Kirshenbaum Feb 2009 A1
20090037654 Allison et al. Feb 2009 A1
20090043878 Ni Feb 2009 A1
20090083478 Kunimatsu et al. Mar 2009 A1
20090097654 Blake Apr 2009 A1
20090132770 Lin et al. May 2009 A1
20090144497 Withers Jun 2009 A1
20090150537 Fanson Jun 2009 A1
20090157870 Nakadai Jun 2009 A1
20090193206 Ishii et al. Jul 2009 A1
20090204636 Li et al. Aug 2009 A1
20090210611 Mizushima Aug 2009 A1
20090210618 Bates et al. Aug 2009 A1
20090225657 Haggar et al. Sep 2009 A1
20090235022 Bates et al. Sep 2009 A1
20090235110 Kurokawa et al. Sep 2009 A1
20090249001 Narayanan et al. Oct 2009 A1
20090249019 Wu et al. Oct 2009 A1
20090271412 Lacapra et al. Oct 2009 A1
20090276567 Burkey Nov 2009 A1
20090276771 Nickolov et al. Nov 2009 A1
20090285476 Choe et al. Nov 2009 A1
20090299940 Hayes et al. Dec 2009 A1
20090307290 Barsness et al. Dec 2009 A1
20090313451 Inoue et al. Dec 2009 A1
20090313503 Atluri et al. Dec 2009 A1
20090327604 Sato et al. Dec 2009 A1
20100011037 Kazar Jan 2010 A1
20100023726 Aviles Jan 2010 A1
20100030981 Cook Feb 2010 A1
20100031000 Flynn et al. Feb 2010 A1
20100031315 Feng et al. Feb 2010 A1
20100042790 Mondal et al. Feb 2010 A1
20100057792 Ylonen Mar 2010 A1
20100070701 Iyigun et al. Mar 2010 A1
20100077101 Wang et al. Mar 2010 A1
20100077380 Baker et al. Mar 2010 A1
20100082648 Potapov et al. Apr 2010 A1
20100082790 Hussaini et al. Apr 2010 A1
20100122148 Flynn et al. May 2010 A1
20100124196 Bonar et al. May 2010 A1
20100161569 Schreter Jun 2010 A1
20100161574 Davidson et al. Jun 2010 A1
20100161850 Otsuka Jun 2010 A1
20100169415 Leggette et al. Jul 2010 A1
20100174677 Zahavi et al. Jul 2010 A1
20100174714 Asmundsson et al. Jul 2010 A1
20100191713 Lomet et al. Jul 2010 A1
20100199009 Koide Aug 2010 A1
20100199040 Schnapp et al. Aug 2010 A1
20100205353 Miyamoto et al. Aug 2010 A1
20100205390 Arakawa Aug 2010 A1
20100217953 Beaman et al. Aug 2010 A1
20100223385 Gulley et al. Sep 2010 A1
20100228795 Hahn et al. Sep 2010 A1
20100228999 Maheshwari et al. Sep 2010 A1
20100250497 Redlich et al. Sep 2010 A1
20100250712 Ellison et al. Sep 2010 A1
20100262812 Lopez et al. Oct 2010 A1
20100268983 Raghunandan Oct 2010 A1
20100269044 Ivanyi et al. Oct 2010 A1
20100280998 Goebel et al. Nov 2010 A1
20100281080 Rajaram et al. Nov 2010 A1
20100293147 Snow et al. Nov 2010 A1
20100306468 Shionoya Dec 2010 A1
20100309933 Stark et al. Dec 2010 A1
20110004707 Spry et al. Jan 2011 A1
20110022778 Schibilla et al. Jan 2011 A1
20110035548 Kimmel et al. Feb 2011 A1
20110066808 Flynn et al. Mar 2011 A1
20110072008 Mandal et al. Mar 2011 A1
20110078496 Jeddeloh Mar 2011 A1
20110087929 Koshiyama Apr 2011 A1
20110093674 Frame et al. Apr 2011 A1
20110099342 Ozdemir et al. Apr 2011 A1
20110099419 Lucas et al. Apr 2011 A1
20110119412 Orfitelli May 2011 A1
20110119668 Calder et al. May 2011 A1
20110126045 Bennett et al. May 2011 A1
20110153603 Adiba et al. Jun 2011 A1
20110153719 Santoro et al. Jun 2011 A1
20110153972 Laberge Jun 2011 A1
20110154103 Bulusu et al. Jun 2011 A1
20110161293 Vermeulen et al. Jun 2011 A1
20110161725 Allen et al. Jun 2011 A1
20110173401 Usgaonkar et al. Jul 2011 A1
20110191389 Okamoto Aug 2011 A1
20110191522 Condict et al. Aug 2011 A1
20110196842 Timashev et al. Aug 2011 A1
20110202516 Rugg et al. Aug 2011 A1
20110213928 Grube et al. Sep 2011 A1
20110231624 Fukutomi et al. Sep 2011 A1
20110238546 Certain Sep 2011 A1
20110238857 Certain et al. Sep 2011 A1
20110246733 Usgaonkar et al. Oct 2011 A1
20110246821 Eleftheriou et al. Oct 2011 A1
20110283048 Feldman et al. Nov 2011 A1
20110286123 Montgomery et al. Nov 2011 A1
20110289565 Resch et al. Nov 2011 A1
20110296133 Flynn et al. Dec 2011 A1
20110302572 Kuncoro et al. Dec 2011 A1
20110307530 Patterson Dec 2011 A1
20110311051 Resch et al. Dec 2011 A1
20110314346 Vas et al. Dec 2011 A1
20120003940 Hirano et al. Jan 2012 A1
20120011176 Aizman Jan 2012 A1
20120011340 Flynn et al. Jan 2012 A1
20120016840 Lin et al. Jan 2012 A1
20120047115 Subramanya et al. Feb 2012 A1
20120054746 Vaghani et al. Mar 2012 A1
20120063306 Sultan et al. Mar 2012 A1
20120066204 Ball et al. Mar 2012 A1
20120072656 Archak et al. Mar 2012 A1
20120072680 Kimura et al. Mar 2012 A1
20120078856 Linde Mar 2012 A1
20120084506 Colgrove et al. Apr 2012 A1
20120109895 Zwilling et al. May 2012 A1
20120109936 Zhang et al. May 2012 A1
20120124282 Frank et al. May 2012 A1
20120136834 Zhao May 2012 A1
20120143877 Kumar et al. Jun 2012 A1
20120150869 Wang et al. Jun 2012 A1
20120150930 Jin et al. Jun 2012 A1
20120151118 Flynn et al. Jun 2012 A1
20120166715 Frost et al. Jun 2012 A1
20120166749 Eleftheriou et al. Jun 2012 A1
20120185437 Pavlov et al. Jul 2012 A1
20120197844 Wang et al. Aug 2012 A1
20120210095 Nellans et al. Aug 2012 A1
20120226668 Dhamankar et al. Sep 2012 A1
20120226841 Nguyen et al. Sep 2012 A1
20120239869 Chiueh et al. Sep 2012 A1
20120240126 Dice et al. Sep 2012 A1
20120243687 Li et al. Sep 2012 A1
20120246129 Rothschild et al. Sep 2012 A1
20120246392 Cheon Sep 2012 A1
20120271868 Fukatani et al. Oct 2012 A1
20120290629 Beaverson et al. Nov 2012 A1
20120290788 Klemm et al. Nov 2012 A1
20120303876 Benhase et al. Nov 2012 A1
20120310890 Dodd et al. Dec 2012 A1
20120311246 McWilliams et al. Dec 2012 A1
20120311290 White Dec 2012 A1
20120311292 Maniwa et al. Dec 2012 A1
20120311568 Jansen Dec 2012 A1
20120317084 Liu Dec 2012 A1
20120317338 Yi et al. Dec 2012 A1
20120317353 Webman et al. Dec 2012 A1
20120317395 Segev et al. Dec 2012 A1
20120323860 Yasa et al. Dec 2012 A1
20120324150 Moshayedi et al. Dec 2012 A1
20120331471 Ramalingam et al. Dec 2012 A1
20130007097 Sambe et al. Jan 2013 A1
20130007370 Parikh et al. Jan 2013 A1
20130010966 Li et al. Jan 2013 A1
20130013654 Lacapra et al. Jan 2013 A1
20130018722 Libby Jan 2013 A1
20130018854 Condict Jan 2013 A1
20130019057 Stephens et al. Jan 2013 A1
20130024641 Talagala Jan 2013 A1
20130042065 Kasten et al. Feb 2013 A1
20130054927 Raj et al. Feb 2013 A1
20130055358 Short et al. Feb 2013 A1
20130060992 Cho et al. Mar 2013 A1
20130061169 Pearcy et al. Mar 2013 A1
20130073519 Lewis et al. Mar 2013 A1
20130073821 Flynn et al. Mar 2013 A1
20130080679 Bert Mar 2013 A1
20130080720 Nakamura et al. Mar 2013 A1
20130083639 Wharton et al. Apr 2013 A1
20130086006 Colgrove et al. Apr 2013 A1
20130086270 Nishikawa et al. Apr 2013 A1
20130086336 Canepa et al. Apr 2013 A1
20130097341 Oe et al. Apr 2013 A1
20130110783 Wertheimer et al. May 2013 A1
20130110845 Dua May 2013 A1
20130111374 Hamilton et al. May 2013 A1
20130124776 Hallak et al. May 2013 A1
20130138616 Gupta et al. May 2013 A1
20130138862 Motwani et al. May 2013 A1
20130148504 Ungureanu Jun 2013 A1
20130159512 Groves et al. Jun 2013 A1
20130159815 Jung et al. Jun 2013 A1
20130166724 Bairavasundaram et al. Jun 2013 A1
20130166727 Wright et al. Jun 2013 A1
20130166861 Takano et al. Jun 2013 A1
20130185403 Vachharajani et al. Jul 2013 A1
20130185719 Kar et al. Jul 2013 A1
20130198480 Jones et al. Aug 2013 A1
20130204902 Wang et al. Aug 2013 A1
20130219048 Arvidsson et al. Aug 2013 A1
20130219214 Samanta et al. Aug 2013 A1
20130226877 Nagai et al. Aug 2013 A1
20130227111 Wright et al. Aug 2013 A1
20130227145 Wright et al. Aug 2013 A1
20130227195 Beaverson et al. Aug 2013 A1
20130227201 Talagala et al. Aug 2013 A1
20130227236 Flynn et al. Aug 2013 A1
20130232240 Purusothaman et al. Sep 2013 A1
20130232261 Wright et al. Sep 2013 A1
20130238832 Dronamraju et al. Sep 2013 A1
20130238876 Fiske et al. Sep 2013 A1
20130238932 Resch Sep 2013 A1
20130262404 Daga et al. Oct 2013 A1
20130262412 Hawton et al. Oct 2013 A1
20130262746 Srinivasan Oct 2013 A1
20130262762 Igashira et al. Oct 2013 A1
20130262805 Zheng et al. Oct 2013 A1
20130268497 Baldwin et al. Oct 2013 A1
20130275656 Talagala et al. Oct 2013 A1
20130290249 Merriman et al. Oct 2013 A1
20130290263 Beaverson et al. Oct 2013 A1
20130298170 Elarabawy et al. Nov 2013 A1
20130304998 Palmer et al. Nov 2013 A1
20130305002 Hallak et al. Nov 2013 A1
20130311740 Watanabe et al. Nov 2013 A1
20130325828 Larson et al. Dec 2013 A1
20130326546 Bavishi et al. Dec 2013 A1
20130339629 Alexander et al. Dec 2013 A1
20130346700 Tomlinson et al. Dec 2013 A1
20130346720 Colgrove et al. Dec 2013 A1
20130346810 Kimmel et al. Dec 2013 A1
20140006353 Chen et al. Jan 2014 A1
20140013068 Yamato et al. Jan 2014 A1
20140025986 Kalyanaraman et al. Jan 2014 A1
20140052764 Michael et al. Feb 2014 A1
20140059309 Brown et al. Feb 2014 A1
20140068184 Edwards et al. Mar 2014 A1
20140081906 Geddam et al. Mar 2014 A1
20140081918 Srivas et al. Mar 2014 A1
20140082255 Powell Mar 2014 A1
20140082273 Segev Mar 2014 A1
20140089264 Talagala et al. Mar 2014 A1
20140089683 Miller et al. Mar 2014 A1
20140095758 Smith et al. Apr 2014 A1
20140095803 Kim et al. Apr 2014 A1
20140101115 Ko et al. Apr 2014 A1
20140101298 Shukla et al. Apr 2014 A1
20140108350 Marsden Apr 2014 A1
20140108797 Johnson et al. Apr 2014 A1
20140108863 Nowoczynski et al. Apr 2014 A1
20140129830 Raudaschl May 2014 A1
20140143207 Brewer et al. May 2014 A1
20140143213 Tal et al. May 2014 A1
20140149355 Gupta et al. May 2014 A1
20140149647 Guo et al. May 2014 A1
20140164715 Weiner et al. Jun 2014 A1
20140172811 Green Jun 2014 A1
20140181370 Cohen et al. Jun 2014 A1
20140185615 Ayoub et al. Jul 2014 A1
20140195199 Uluyol Jul 2014 A1
20140195480 Talagala et al. Jul 2014 A1
20140195492 Wilding et al. Jul 2014 A1
20140195564 Talagala et al. Jul 2014 A1
20140208003 Cohen et al. Jul 2014 A1
20140215129 Kuzmin et al. Jul 2014 A1
20140215147 Pan Jul 2014 A1
20140215170 Scarpino et al. Jul 2014 A1
20140215262 Li et al. Jul 2014 A1
20140223029 Bhaskar et al. Aug 2014 A1
20140223089 Kang et al. Aug 2014 A1
20140244962 Arges et al. Aug 2014 A1
20140250440 Carter et al. Sep 2014 A1
20140258681 Prasky et al. Sep 2014 A1
20140259000 Desanti et al. Sep 2014 A1
20140279917 Minh et al. Sep 2014 A1
20140279931 Gupta et al. Sep 2014 A1
20140281017 Apte Sep 2014 A1
20140281055 Davda et al. Sep 2014 A1
20140281123 Weber Sep 2014 A1
20140281131 Joshi et al. Sep 2014 A1
20140283118 Anderson et al. Sep 2014 A1
20140289476 Nayak Sep 2014 A1
20140297980 Yamazaki Oct 2014 A1
20140304548 Steffan et al. Oct 2014 A1
20140310231 Sampathkumaran et al. Oct 2014 A1
20140310373 Aviles et al. Oct 2014 A1
20140317093 Sun et al. Oct 2014 A1
20140325117 Canepa et al. Oct 2014 A1
20140325147 Nayak Oct 2014 A1
20140331061 Wright et al. Nov 2014 A1
20140344216 Abercrombie et al. Nov 2014 A1
20140344222 Morris et al. Nov 2014 A1
20140344539 Gordon et al. Nov 2014 A1
20140372384 Long et al. Dec 2014 A1
20140379965 Gole et al. Dec 2014 A1
20140379990 Pan et al. Dec 2014 A1
20140379991 Lomet et al. Dec 2014 A1
20140380092 Kim et al. Dec 2014 A1
20150019792 Swanson et al. Jan 2015 A1
20150032928 Andrews et al. Jan 2015 A1
20150039716 Przykucki, Jr. et al. Feb 2015 A1
20150039745 Degioanni et al. Feb 2015 A1
20150039852 Sen et al. Feb 2015 A1
20150040052 Noel et al. Feb 2015 A1
20150052315 Ghai et al. Feb 2015 A1
20150058577 Earl Feb 2015 A1
20150066852 Beard et al. Mar 2015 A1
20150085665 Kompella et al. Mar 2015 A1
20150085695 Ryckbosch et al. Mar 2015 A1
20150089138 Tao et al. Mar 2015 A1
20150089285 Lim et al. Mar 2015 A1
20150095555 Asnaashari et al. Apr 2015 A1
20150106556 Yu et al. Apr 2015 A1
20150112939 Cantwell et al. Apr 2015 A1
20150120754 Chase et al. Apr 2015 A1
20150121021 Nakamura et al. Apr 2015 A1
20150127922 Camp et al. May 2015 A1
20150134926 Yang et al. May 2015 A1
20150169414 Lalsangi et al. Jun 2015 A1
20150172111 Lalsangi et al. Jun 2015 A1
20150186270 Peng et al. Jul 2015 A1
20150193338 Sundaram et al. Jul 2015 A1
20150199415 Bourbonnais et al. Jul 2015 A1
20150213032 Powell et al. Jul 2015 A1
20150220402 Cantwell et al. Aug 2015 A1
20150234709 Koarashi Aug 2015 A1
20150236926 Wright et al. Aug 2015 A1
20150242478 Cantwell et al. Aug 2015 A1
20150244795 Cantwell et al. Aug 2015 A1
20150253992 Ishiguro et al. Sep 2015 A1
20150254013 Chun Sep 2015 A1
20150261446 Lee Sep 2015 A1
20150261792 Attarde et al. Sep 2015 A1
20150269201 Caso et al. Sep 2015 A1
20150286438 Simionescu et al. Oct 2015 A1
20150288671 Chan et al. Oct 2015 A1
20150293817 Subramanian et al. Oct 2015 A1
20150301964 Brinicombe et al. Oct 2015 A1
20150324236 Gopalan et al. Nov 2015 A1
20150324264 Chinnakkonda et al. Nov 2015 A1
20150339194 Kalos et al. Nov 2015 A1
20150355985 Holtz et al. Dec 2015 A1
20150363328 Candelaria Dec 2015 A1
20150370715 Samanta et al. Dec 2015 A1
20150378613 Koseki Dec 2015 A1
20160004733 Cao et al. Jan 2016 A1
20160011984 Speer et al. Jan 2016 A1
20160026552 Holden et al. Jan 2016 A1
20160034358 Hayasaka et al. Feb 2016 A1
20160034550 Ostler et al. Feb 2016 A1
20160048342 Jia et al. Feb 2016 A1
20160070480 Babu et al. Mar 2016 A1
20160070490 Koarashi et al. Mar 2016 A1
20160070618 Pundir et al. Mar 2016 A1
20160070644 D'Sa et al. Mar 2016 A1
20160070714 D'Sa et al. Mar 2016 A1
20160077744 Pundir et al. Mar 2016 A1
20160077945 Kalra et al. Mar 2016 A1
20160092125 Cowling et al. Mar 2016 A1
20160099844 Colgrove et al. Apr 2016 A1
20160117103 Gallant Apr 2016 A1
20160132396 Kimmel et al. May 2016 A1
20160139838 D'Sa et al. May 2016 A1
20160139849 Chaw et al. May 2016 A1
20160149763 Ingram et al. May 2016 A1
20160149766 Borowiec et al. May 2016 A1
20160154834 Friedman et al. Jun 2016 A1
20160179410 Haas et al. Jun 2016 A1
20160188370 Razin et al. Jun 2016 A1
20160188430 Nitta et al. Jun 2016 A1
20160197995 Lu Jul 2016 A1
20160203043 Nazari et al. Jul 2016 A1
20160246522 Krishnamachari et al. Aug 2016 A1
20160283139 Brooker et al. Sep 2016 A1
20160336223 Chase Nov 2016 A1
20160350192 Doherty et al. Dec 2016 A1
20160366223 Mason Dec 2016 A1
20160371021 Goldberg et al. Dec 2016 A1
20170003892 Sekido et al. Jan 2017 A1
20170017413 Aston et al. Jan 2017 A1
20170031769 Zheng et al. Feb 2017 A1
20170031774 Bolen et al. Feb 2017 A1
20170032005 Zheng et al. Feb 2017 A1
20170046257 Babu et al. Feb 2017 A1
20170068599 Chiu et al. Mar 2017 A1
20170083535 Marchukov et al. Mar 2017 A1
20170097771 Krishnamachari et al. Apr 2017 A1
20170097873 Krishnamachari et al. Apr 2017 A1
20170109298 Kurita et al. Apr 2017 A1
20170123726 Sinclair et al. May 2017 A1
20170212690 Babu et al. Jul 2017 A1
20170212891 Pundir et al. Jul 2017 A1
20170212919 Li et al. Jul 2017 A1
20170220777 Wang et al. Aug 2017 A1
20170269980 Gupta et al. Sep 2017 A1
20170300248 Purohit et al. Oct 2017 A1
20170315740 Corsi et al. Nov 2017 A1
20170315878 Purohit et al. Nov 2017 A1
20170329593 McMullen Nov 2017 A1
20170351543 Kimura Dec 2017 A1
20180287951 Waskiewicz, Jr. et al. Oct 2018 A1
20220103436 Wright et al. Mar 2022 A1
Foreign Referenced Citations (7)
Number Date Country
0726521 Aug 1996 EP
1970821 Sep 2008 EP
2693358 Feb 2014 EP
2735978 May 2014 EP
2006050455 May 2006 WO
2012132943 Oct 2012 WO
2013101947 Jul 2013 WO
Non-Patent Literature Citations (90)
Agrawal, et al., “Design Tradeoffs for SSD Performance,” USENIX Annual Technical Conference, 2008, 14 pages.
Alvarez C., “NetApp Deduplication for FAS and V-Series Deployment and Implementation Guide,” Technical Report TR-3505, 2011, 75 pages.
Amit et al., “Strategies for Mitigating the IOTLB Bottleneck,” Technion—Israel Institute of Technology, IBM Research Haifa, WIOSCA 2010—Sixth Annual Workshop on the Interaction between Operating Systems and Computer Architecture, 2010, 12 pages.
Arpaci-Dusseau R., et al., “Log-Structured File Systems,” Operating Systems: Three Easy Pieces published by Arpaci-Dusseau Books, May 25, 2014, 15 pages.
Balakrishnan, M., et al., “CORFU: A Shared Log Design for Flash Clusters,” Microsoft Research Silicon Valley, University of California, San Diego, Apr. 2012, https://www.usenix.org/conference/nsdi12/technical-sessions/presentation/balakrishnan, 14 pages.
Ben-Yehuda et al., “The Price of Safety: Evaluating IOMMU Performance,” Proceedings of the Linux Symposium, vol. 1, Jun. 27-30, 2007, pp. 9-20.
Bitton D. et al., “Duplicate Record Elimination in Large Data Files,” Oct. 26, 1999, 11 pages.
Bogaerdt, “Cdeftutorial,” http://oss.oetiker.ch/rrdtool/tut/cdeftutorial.en.html Date obtained from the internet, Sep. 9, 2014, 14 pages.
Bogaerdt, “Rates, Normalizing and Consolidating,” http://www.vandenbogaerdl.nl/rrdtool/process.php Date obtained from the internet: Sep. 9, 2014, 5 pages.
Bogaerdt, “rrdtutorial,” http://oss.oetiker.ch/rrdtool/lul/rrdtutorial.en.html Date obtained from the internet, Sep. 9, 2014, 21 pages.
Chris K., et al., “How Many Primes are There?,” Nov. 2001, Retrieved from Internet: https://web.archive.org/web/20011120073053/http://primes.utm.edu/howmany.shtml, 5 pages.
Cornwell M., “Anatomy of a Solid-state Drive,” ACM Queue-Networks, Oct. 2012, vol. 10 (10), pp. 1-7.
Culik K., et al., “Dense Multiway Trees,” ACM Transactions on Database Systems, Sep. 1981, vol. 6 (3), pp. 486-512.
Debnath B., et al., “FlashStore: High Throughput Persistent Key-Value Store,” Proceedings of the VLDB Endowment, Sep. 2010, vol. 3 (1-2), pp. 1414-1425.
Debnath, et al., “ChunkStash: Speeding up Inline Storage Deduplication using Flash Memory,” USENIX, USENIXATC '10, Jun. 2010, 15 pages.
Dictionary, “Definition for References,” Retrieved from Internet: http://www.dictionary.com/browse/reference?s=t on Dec. 23, 2017, 5 pages.
Encyclopedia Entry, “Pointers vs. References,” Retrieved from Internet: https://www.geeksforgeeks.org/pointers-vs-references-cpp/ on Dec. 23, 2017, 5 pages.
Extended European Search Report for Application No. 20201330.6 dated Dec. 8, 2020, 7 pages.
Extended European Search Report for Application No. 20205866.5 dated Dec. 8, 2020, 7 pages.
Extended European Search Report dated Apr. 9, 2018 for EP Application No. 15855480.8 filed Oct. 22, 2015, 7 pages.
Fan, et al., “MemC3: Compact and Concurrent MemCache with Dumber Caching and Smarter Hashing,” USENIX NSDI '13, Apr. 2013, pp. 371-384.
Gal E., et al., “Algorithms and Data Structures for Flash Memories,” ACM Computing Surveys (CSUR) Archive, Publisher ACM, New York City, NY, USA, Jun. 2005, vol. 37 (2), pp. 138-163.
Gray J., et al., “Flash Disk Opportunity for Server Applications,” Queue—Enterprise Flash Storage, Jul.-Aug. 2008, vol. 6 (4), pp. 18-23.
Gulati A., et al., “BASIL: Automated IO Load Balancing Across Storage Devices,” Proceedings of the 8th USENIX Conference on File and Storage Technologies, FAST'10, Berkeley, CA, USA, 2010, 14 pages.
Handy J., “SSSI Tech Notes: How Controllers Maximize SSD Life,” SNIA, Jan. 2013, pp. 1-20.
Hwang K., et al., “RAID-x: A New Distributed Disk Array for I/O-centric Cluster Computing,” IEEE High-Performance Distributed Computing, Aug. 2000, pp. 279-286.
IBM Technical Disclosure Bulletin, “Optical Disk Axial Runout Test”, vol. 36, No. 10, NN9310227, Oct. 1, 1993, 3 pages.
Intel, Product Specification—Intel® Solid-State Drive DC S3700, Jun. 2013, 34 pages.
International Search Report and Written Opinion for Application No. PCT/EP2014/071446 dated Apr. 1, 2015, 14 pages.
International Search Report and Written Opinion for Application No. PCT/US2012/071844 dated Mar. 1, 2013, 12 pages.
International Search Report and Written Opinion for Application No. PCT/US2014/035284 dated Apr. 1, 2015, 8 pages.
International Search Report and Written Opinion for Application No. PCT/US2014/055138 dated Dec. 12, 2014, 13 pages.
International Search Report and Written Opinion for Application No. PCT/US2014/058728 dated Dec. 16, 2014, 11 pages.
International Search Report and Written Opinion for Application No. PCT/US2014/060031 dated Jan. 26, 2015, 9 pages.
International Search Report and Written Opinion for Application No. PCT/US2014/071446 dated Apr. 1, 2015, 13 pages.
International Search Report and Written Opinion for Application No. PCT/US2014/071465 dated Mar. 25, 2015, 12 pages.
International Search Report and Written Opinion for Application No. PCT/US2014/071484 dated Mar. 25, 2015, 9 pages.
International Search Report and Written Opinion for Application No. PCT/US2014/071581 dated Apr. 10, 2015, 9 pages.
International Search Report and Written Opinion for Application No. PCT/US2014/071635 dated Mar. 31, 2015, 13 pages.
International Search Report and Written Opinion for Application No. PCT/US2015/016625 dated Sep. 17, 2015, 8 pages.
International Search Report and Written Opinion for Application No. PCT/US2015/021285 dated Jun. 23, 2015, 8 pages.
International Search Report and Written Opinion for Application No. PCT/US2015/024067 dated Jul. 8, 2015, 7 pages.
International Search Report and Written Opinion for Application No. PCT/US2015/048800 dated Nov. 25, 2015, 11 pages.
International Search Report and Written Opinion for Application No. PCT/US2015/048810 dated Dec. 23, 2015, 11 pages.
International Search Report and Written Opinion for Application No. PCT/US2015/048833 dated Nov. 25, 2015, 11 pages.
International Search Report and Written Opinion for Application No. PCT/US2015/056932 dated Jan. 21, 2016, 11 pages.
International Search Report and Written Opinion for Application No. PCT/US2015/057532 dated Feb. 9, 2016, 12 pages.
International Search Report and Written Opinion for Application No. PCT/US2016/059943 dated May 15, 2017, 14 pages.
International Search Report and Written Opinion for Application No. PCT/US2018/025951, dated Jul. 18, 2018, 16 pages.
Jones, M., “Next-generation Linux File Systems: NiLFS(2) and Eofs,” IBM, 2009.
Nelson J., “Syndicate: Building a Virtual Cloud Storage Service Through Service Composition,” Princeton University, 2013, pp. 1-14.
Kagel A.S., “Two-way Merge Sort,” Dictionary of Algorithms and Data Structures [online], retrieved on Jan. 28, 2015, Retrieved from the Internet: URL: http://xlinux.nist.gov/dads/HTMUlwowaymrgsrl.html, May 2005, 1 page.
Konishi, R., et al., “Filesystem Support for Continuous Snapshotting,” Ottawa Linux Symposium, 2007.
Lamport L., “The Part-Time Parliament,” ACM Transactions on Computer Systems, May 1998, vol. 16 (2), pp. 133-169.
Leventhal A.H., “A File System All its Own,” Communications of the ACM Queue, May 2013, vol. 56 (5), pp. 64-67.
Lim H., et al., “SILT: A Memory-Efficient, High-Performance Key-Value Store,” Proceedings of the 23rd ACM Symposium on Operating Systems Principles (SOSP'11), Oct. 23-26, 2011, pp. 1-13.
Metreveli, Z., et al., “CPHash: A Cache-Partitioned Hash Table,” 2011, 10 pages. URL: https://people.csail.mit.edu/nickolai/papers/metrevelicphash-%20tr.pdf.
Moshayedi M., et al., “Enterprise SSDs,” ACM Queue—Enterprise Flash Storage, Jul.-Aug. 2008, vol. 6 (4), pp. 32-39.
Odlevak, “Simple Kexec Example”, https://www.linux.com/blog/simple-kexec-example, accessed on Feb. 5, 2019 (Year: 2011), 4 pages.
Oetiker, “RRDfetch,” http://oss.oetiker.ch/rrdtool/doc/rrdfetch.en.html, Date obtained from the internet: Sep. 9, 2014, 5 pages.
Oetiker, “RRDtool,” Retrieved from the Internet: Sep. 9, 2014, 5 pages, URL: http://loss.oetiker.ch/rrdtool/doc/rrdtool.en.html.
O'Neil P., et al., “The Log-structured Merge-tree (LSM-tree),” Acta Informatica, 33, 1996, pp. 351-385.
Ongaro D., et al., “In Search of an Understandable Consensus Algorithm (Extended Version),” 2014, 18 pages.
Ongaro D., et al., “In Search of an Understandable Consensus Algorithm,” Stanford University, URL: https://ramcloud.stanford.edu/wiki/download/attachments/11370504/raft.pdf, May 2013, 14 pages.
Pagh R., et al., “Cuckoo Hashing,” Elsevier Science, Dec. 8, 2003, pp. 1-27.
Pagh R., et al., “Cuckoo Hashing for Undergraduates,” IT University of Copenhagen, Mar. 27, 2006, pp. 1-6.
“Pivot Root,” Die.net, retrieved from https://linux.die.net/pivot_root on Nov. 12, 2011, 2 pages.
Proceedings of the FAST 2002 Conference on File and Storage Technologies, Monterey, California, USA, Jan. 28-30, 2002, 14 pages.
Rosenblum M., et al., “The Design and Implementation of a Log-Structured File System,” In Proceedings of ACM Transactions on Computer Systems, vol. 10 (1), Feb. 1992, pp. 26-52.
Rosenblum M., et al., “The Design and Implementation of a Log-Structured File System,” Proceedings of the 13th ACM Symposium on Operating Systems Principles, (SUN00007382-SUN00007396), Jul. 1991, 15 pages.
Rosenblum M., et al., “The Design and Implementation of a Log-Structured File System,” (SUN00006867-SUN00006881), Jul. 1991, 15 pages.
Rosenblum M., et al., “The LFS Storage Manager,” USENIX Technical Conference, Anaheim, CA, (Sun 00007397-SUN00007412), Jun. 1990, 16 pages.
Rosenblum M., et al., “The LFS Storage Manager,” USENIX Technical Conference, Computer Science Division, Electrical Engin. and Computer Sciences, Anaheim, CA, presented at Summer '90 USENIX Technical Conference, (SUN00006851-SUN00006866), Jun. 1990, 16 pages.
Rosenblum M., “The Design and Implementation of a Log-Structured File System,” UC Berkeley, 1992, pp. 1-101.
Sears R., et al., “bLSM: A General Purpose Log Structured Merge Tree,” Proceedings of the 2012 ACM SIGMOD International Conference on Management of Data, 2012, 12 pages.
Seltzer M., et al., “An Implementation of a Log Structured File System for UNIX,” Winter USENIX, San Diego, CA, Jan. 25-29, 1993, pp. 1-18.
Seltzer M.I., et al., “File System Performance and Transaction Support,” University of California at Berkeley Dissertation, 1992, 131 pages.
Smith K., “Garbage Collection,” Sand Force, Flash Memory Summit, Santa Clara, CA, Aug. 2011, pp. 1-9.
Stoica I., et al., “Chord: A Scalable Peer-to-Peer Lookup Service for Internet Applications,” SIGCOMM'01, Aug. 2001, 12 pages.
Supplementary European Search Report for Application No. EP12863372 dated Jul. 16, 2015, 7 pages.
Texas Instruments, User Guide, TMS320C674x/OMAP-L1x Processor Serial ATA (SATA) Controller, Mar. 2011, 76 pages.
Twigg A., et al., “Stratified B-trees and Versioned Dictionaries,” Proceedings of the 3rd USENIX Conference on Hot Topics in Storage and File Systems, 2011, vol. 11, pp. 1-5.
Waskiewicz P.J., “Scaling With Multiple Network Namespaces in a Single Application,” Netdev 1.2, The Technical Conference on Linux Networking, retrieved from Internet: URL: https://netdevconf.orq/1.2/papers/pj-netdev-1.2pdf, Dec. 12, 2016, 5 pages.
Wei Y., et al., “NAND Flash Storage Device Performance in Linux File System,” 6th International Conference on Computer Sciences and Convergence Information Technology (ICCIT), 2011.
Wikipedia, “Cuckoo Hashing,” http://en.wikipedia.org/wiki/Cuckoo_hash, Apr. 2013, pp. 1-5.
Wilkes J., et al., “The HP AutoRAID Hierarchical Storage System,” Operating System Review, ACM, New York, NY, Dec. 1, 1995, vol. 29 (5), pp. 96-108.
Wu P-L., et al., “A File-System-Aware FTL Design for Flash-Memory Storage Systems,” IEEE, Design, Automation & Test in Europe Conference & Exhibition, 2009, pp. 1-6.
Yossef, “Building Murphy-Compatible Embedded Linux Systems,” Proceedings of the Linux Symposium, Ottawa, Ontario, Canada, Jul. 2005, 24 pages.
European Office Action for Application No. EP20201330 dated Oct. 7, 2022, 5 pages.
European Office Action for Application No. EP20205866 dated Oct. 10, 2022, 6 pages.
Related Publications (1)
Number Date Country
20220261362 A1 Aug 2022 US
Continuations (3)
Number Date Country
Parent 17203094 Mar 2021 US
Child 17739391 US
Parent 16867418 May 2020 US
Child 17203094 US
Parent 15270973 Sep 2016 US
Child 16867418 US