A distributed storage system may include a plurality of storage devices (e.g., storage arrays) to provide data storage to a plurality of nodes. The plurality of storage devices and the plurality of nodes may be situated in the same physical location, or in one or more physically remote locations. A distributed storage system may include data protection systems that back up production site data by replicating production site data on a secondary backup storage system. The production site data may be replicated on a periodic basis and/or may be replicated as changes are made to the production site data. The backup storage system may be situated in the same physical location as the production storage system, or in a physically remote location.
This Summary is provided to introduce a selection of concepts in a simplified form that are further described below in the Detailed Description. This Summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used to limit the scope of the claimed subject matter.
One aspect provides a method that may include determining one or more properties for each of a plurality of input/output (I/O) requests to a production volume of a storage system and monitoring one or more operating conditions of the storage system. In embodiments, the method may include determining a score for each I/O based upon one or more of the one or more properties of the I/O and the one or more operating conditions; adapting a replication threshold based upon the one or more operating conditions; comparing the determined score for each I/O to the adapted replication threshold; and, based upon the comparison, performing continuous replication or snapshot replication for each I/O.
In another aspect, a system may include a processor and a memory storing computer program code that, when executed on the processor, causes the processor to operate a storage system. Embodiments of the system may determine one or more properties for each of a plurality of input/output (I/O) requests to a production volume of the storage system and monitor one or more operating conditions of the storage system. Embodiments of the system may further determine a score for each I/O based upon one or more of the one or more properties of the I/O and the one or more operating conditions; adapt a replication threshold based upon the one or more operating conditions; compare the determined score for each I/O to the adapted replication threshold; and, based upon the comparison, perform continuous replication or snapshot replication for each I/O.

Another aspect provides a computer program product including a non-transitory computer readable storage medium having computer program code encoded thereon that, when executed on a processor of a computer, causes the computer to operate a storage system. The computer program product may include computer program code for determining one or more properties for each of a plurality of input/output (I/O) requests to a production volume of the storage system and monitoring one or more operating conditions of the storage system. The computer program product may further include computer program code for determining a score for each I/O based upon one or more of the one or more properties of the I/O and the one or more operating conditions; adapting a replication threshold based upon the one or more operating conditions; comparing the determined score for each I/O to the adapted replication threshold; and, based upon the comparison, performing continuous replication or snapshot replication for each I/O.
Objects, aspects, features, and advantages of embodiments disclosed herein will become more fully apparent from the following detailed description, the appended claims, and the accompanying drawings in which like reference numerals identify similar or identical elements. Reference numerals that are introduced in the specification in association with a drawing figure may be repeated in one or more subsequent figures without additional description in the specification in order to provide context for other features. For clarity, not every element may be labeled in every figure. The drawings are not necessarily to scale, emphasis instead being placed upon illustrating embodiments, principles, and concepts. The drawings are not meant to limit the scope of the claims included herewith.
Before describing concepts, structures, and techniques, some terms are explained. As used herein, the term “I/O request” or simply “I/O” may refer to an input or output request, for example a data read or a data write request. The term “storage system” may encompass physical computing systems, cloud or virtual computing systems, or a combination thereof. The term “storage device” may refer to any non-volatile memory (NVM) device, including hard disk drives (HDDs), solid state drives (SSDs), flash devices (e.g., NAND flash devices), and similar devices that may be accessed locally and/or remotely (e.g., via a storage area network (SAN), etc.). The term “storage device” may also refer to a storage array including multiple storage devices.
Referring to the illustrative embodiment shown in
In certain embodiments, Site I 100a and Site II 100b may be remote from one another. In other embodiments, Site I 100a and Site II 100b may be local to one another and may be connected via a local area network (LAN). Local data protection may have the advantage of minimizing data lag between target and source, and remote data protection may have the advantage of being robust in the event that a disaster occurs at the source site.
In particular embodiments, data protection system 100 may include a failover mode of operation, wherein the direction of replicated data flow is reversed. For example, Site I 100a may behave as a target site and Site II 100b may behave as a source site. Failover may be triggered either manually (e.g., by a user) or automatically and may be performed in the event of a disaster at Site I 100a. In some embodiments, both Site I 100a and Site II 100b may behave as a source site for some stored data and may simultaneously behave as a target site for other stored data. A portion of stored data may be replicated from one site to the other, and another portion may not be replicated.
Site I 100a may correspond to a production site (e.g., a facility where one or more hosts run data processing applications that write data to a storage system and read data from the storage system) and Site II 100b may correspond to a backup or replica site (e.g., a facility where replicated production site data is stored). In such embodiments, Site II 100b may be responsible for replicating production site data and may enable rollback of data of Site I 100a to an earlier point in time. Rollback may be used in the event of data corruption or a disaster, or alternatively in order to view or to access data from an earlier point in time.
As shown in
Each storage system 108 and 120 may include storage devices for storing data, such as disks or arrays of disks. Storage systems 108 and 120 may be target nodes. In order to enable initiators to send requests to storage system 108, storage system 108 may provide (e.g., expose) one or more logical units (LU) to which commands are issued. Thus, in some embodiments, storage systems 108 and 120 may be SAN entities that provide multiple logical units for access by multiple SAN initiators. An LU is a logical entity provided by a storage system for accessing data stored therein. A logical unit may be a physical logical unit or a virtual logical unit, and may be identified by a unique logical unit number (LUN).
In the embodiment shown in
Source host 104 may generate a host device 140 (“Device A”) corresponding to LU A 136 and source host 116 may generate a host device 160 (“Device B”) corresponding to LU B 156. A host device may be a logical entity within a host through which the host may access an LU. In some embodiments, an operating system of a host may generate a host device for each LU exposed by the storage system in the host SAN.
Source host 104 may act as a SAN initiator that issues I/O requests through host device 140 to LU A 136 using, for example, SCSI commands. In some embodiments, such requests may be transmitted to LU A 136 with an address that includes a specific device identifier, an offset within the device, and a data size.
Source DPA 112 and target DPA 124 may perform various data protection services, such as data replication of a storage system, and journaling of I/O requests issued by hosts 104 and/or 116. When acting as a target DPA, a DPA may also enable rollback of data to an earlier point-in-time (PIT), and enable processing of rolled back data at the target site. In some embodiments, each DPA 112 and 124 may be a physical device, a virtual device, or may be a combination of a virtual and physical device.
In some embodiments, a DPA may be a cluster of such computers. Use of a cluster may ensure that if a DPA computer is down, then the DPA functionality switches over to another computer. In some embodiments, the DPA computers within a DPA cluster may communicate with one another using at least one communication link suitable for data transfer, for example, an InfiniBand (IB) link, a Fibre Channel (FC) link, and/or a network link, such as an Ethernet or Internet (e.g., TCP/IP) link to transfer data via fiber channel or IP based protocols, or other such transfer protocols. In some embodiments, one computer from the DPA cluster may serve as the DPA leader. The DPA cluster leader may coordinate between the computers in the cluster, and may also perform other tasks that require coordination between the computers, such as load balancing.
In certain embodiments, a DPA may be a standalone device integrated within a SAN. Alternatively, a DPA may be integrated into a storage system. The DPAs communicate with their respective hosts through communication links suitable for data transfer, for example, an InfiniBand (IB) link, a Fibre Channel (FC) link, and/or a network link, such as an Ethernet or Internet (e.g., TCP/IP) link, to transfer data via, for example, SCSI commands or any other protocol.
In various embodiments, the DPAs may act as initiators in the SAN. For example, the DPAs may issue I/O requests using, for example, SCSI commands, to access LUs on their respective storage systems. Each DPA may also be configured with the necessary functionality to act as a target, e.g., to reply to I/O requests, such as SCSI commands, issued by other initiators in the SAN, including their respective hosts. In some embodiments, being target nodes, the DPAs may dynamically expose or remove one or more LUs. As described herein, Site I 100a and Site II 100b may each behave simultaneously as a production site and a backup site for different logical units. As such, DPA 112 and DPA 124 may each behave as a source DPA for some LUs and as a target DPA for other LUs, at the same time.
In the example embodiment shown in
A protection agent may change its behavior for handling SCSI commands, for example as a result of an instruction received from the DPA. For example, the behavior of a protection agent for a certain host device may depend on the behavior of its associated DPA with respect to the LU of the host device. When a DPA behaves as a source site DPA for a certain LU, then during normal course of operation, the associated protection agent may split I/O requests issued by a host to the host device corresponding to that LU. Similarly, when a DPA behaves as a target DPA for a certain LU, then during normal course of operation, the associated protection agent may fail I/O requests issued by the host to the host device corresponding to that LU.
Communication between protection agents 144 and 164 and a respective DPA 112 and 124 may use any protocol suitable for data transfer within a SAN, such as fiber channel, SCSI over fiber channel, or other protocols. The communication may be direct, or via a logical unit exposed by the DPA.
In certain embodiments, protection agents may be drivers located in their respective hosts. Alternatively, in some embodiments, a protection agent may also be located in a fiber channel switch, or in any other device situated in a data path between a host and a storage system or on the storage system itself. In some embodiments, in a virtualized environment, the protection agent may run at the hypervisor layer or in a virtual machine providing a virtualization layer.
In the example embodiment shown in
In some embodiments, journal processor 180 may manage the journal entries of LU B 156. For example, journal processor 180 may enter write transactions received by the target DPA 124 from the source DPA 112 into the journal by writing them into journal LU 176, read the undo information for the transaction from LU B 156, update the journal entries in journal LU 176 with undo information, apply the journal transactions to LU B 156, and remove already-applied transactions from the journal. In one embodiment, journal processor 180 may perform processing such as described in U.S. Pat. No. 7,516,287, issued Apr. 7, 2009 and entitled “Methods and Apparatus for Optimal Journaling for Continuous Data Replication,” which is hereby incorporated by reference herein. Other embodiments may not employ thin devices and tracking regions for replication, and may instead replicate write transactions using an array's native snapshot capabilities.
Some embodiments of data protection system 100 may be provided as physical systems for the replication of physical LUs, or as virtual systems for the replication of virtual LUs. For example, a hypervisor may consume LUs and may generate a distributed file system on the logical units, such as Virtual Machine File System (VMFS), which may generate files in the file system and expose the files as LUs to the virtual machines (each virtual machine disk is seen as a SCSI device by virtual hosts). In another embodiment, a hypervisor may consume a network-based file system and expose files in the Network File System (NFS) as SCSI devices to virtual hosts.
In normal operation (sometimes referred to as “production mode”), described embodiments of DPA 112 may act as a source DPA for LU A 136. Thus, protection agent 144 may act as a source protection agent, specifically by splitting I/O requests to host device 140 (“Device A”). Protection agent 144 may send an I/O request to source DPA 112 and, after receiving an acknowledgement from source DPA 112, may send the I/O request to LU A 136. After receiving an acknowledgement from storage system 108, host 104 may acknowledge that the I/O request has successfully completed.
When source DPA 112 receives a replicated I/O request from protection agent 144, source DPA 112 may transmit certain I/O information characterizing the write request, packaged as a “write transaction”, over WAN 128 to target DPA 124 for journaling and for incorporation within target storage system 120. When applying write operations to storage system 120, target DPA 124 may act as an initiator, and may send SCSI commands to LU B 156.
In some embodiments, source DPA 112 may send its write transactions to target DPA 124 using a variety of modes of transmission, including (i) a synchronous mode, (ii) an asynchronous mode, and (iii) a snapshot mode.
In synchronous mode, source DPA 112 may send each write transaction to target DPA 124, may receive back an acknowledgement from the target DPA 124, and in turn may send an acknowledgement back to protection agent 144. Protection agent 144 may wait until receipt of such acknowledgement before sending the I/O request to LU A 136.
In asynchronous mode, source DPA 112 may send an acknowledgement to protection agent 144 upon receipt of each I/O request, before receiving an acknowledgement back from target DPA 124.
In snapshot mode, source DPA 112 may receive several I/O requests and combine them into an aggregate “snapshot” or “batch” of write activity performed in the multiple I/O requests, and may send the snapshot to target DPA 124 for journaling and incorporation in target storage system 120. Source DPA 112 may send an acknowledgement to protection agent 144 upon receipt of each I/O request, before receiving an acknowledgement back from target DPA 124.
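For purposes of illustration only, the following sketch outlines how acknowledgement ordering might differ among the three transmission modes. The code is a simplified assumption rather than a definitive implementation; the `ReplicationMode` names and the `target_dpa`, `protection_agent`, and `batch` interfaces are hypothetical.

```python
from enum import Enum, auto

class ReplicationMode(Enum):
    SYNCHRONOUS = auto()
    ASYNCHRONOUS = auto()
    SNAPSHOT = auto()

def handle_write(mode, write_tx, target_dpa, protection_agent, batch):
    """Illustrative acknowledgement ordering for the three transmission modes."""
    if mode is ReplicationMode.SYNCHRONOUS:
        # The splitter is acknowledged only after the target DPA acknowledges.
        target_dpa.send(write_tx)
        target_dpa.wait_for_ack(write_tx)
        protection_agent.ack(write_tx)
    elif mode is ReplicationMode.ASYNCHRONOUS:
        # The splitter is acknowledged immediately; the target ack arrives later.
        protection_agent.ack(write_tx)
        target_dpa.send(write_tx)
    else:  # ReplicationMode.SNAPSHOT
        # The splitter is acknowledged immediately and the write is accumulated
        # into a batch that is later sent to the target DPA as an aggregate snapshot.
        protection_agent.ack(write_tx)
        batch.append(write_tx)
```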
As described herein, a snapshot replica may be a differential representation of a volume. For example, the snapshot may include pointers to the original volume, and may point to log volumes for locations of the original volume that store data changed by one or more I/O requests. Snapshots may be combined into a snapshot array, which may represent different images over a time period (e.g., for multiple PITs).
As described herein, in normal operation, LU B 156 may be used as a backup of LU A 136. As such, while data written to LU A 136 by host 104 is replicated from LU A 136 to LU B 156, target host 116 should not send I/O requests to LU B 156. To prevent such I/O requests from being sent, protection agent 164 may act as a target site protection agent for host device B 160 and may fail I/O requests sent from host 116 to LU B 156 through host device B 160. In a recovery mode, target DPA 124 may undo the write transactions in journal LU 176 so as to restore the target storage system 120 to an earlier state.
Referring to
Referring to both
In such embodiments, since the journal contains the “undo” information necessary to rollback storage system 120, data that was stored in specific memory locations at a specified point in time may be obtained by undoing write transactions that occurred subsequent to such point in time (PIT). Each of the four streams may hold a plurality of write transaction data. As write transactions are received dynamically by the target DPA, the write transactions may be recorded at the end of the DO stream and the end of the DO METADATA stream, prior to performing the transaction.
In some embodiments, a metadata stream (e.g., UNDO METADATA stream or the DO METADATA stream) and the corresponding data stream (e.g., UNDO stream or DO stream) may be kept in a single stream by interleaving metadata and data.
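As an illustration of the journaling described above, the following sketch models the four streams in memory; the class and method names are assumptions and do not reflect the actual on-disk journal format. It shows a write transaction being appended to the end of the DO streams, and the overwritten data being preserved in the UNDO streams when the transaction is applied to the replica, so that the replica can later be rolled back to an earlier PIT.

```python
class Journal:
    """Minimal in-memory sketch of the DO/UNDO journal streams (hypothetical)."""
    def __init__(self):
        self.do_stream = []        # data waiting to be applied to the replica
        self.do_metadata = []      # (offset, size, timestamp) per DO entry
        self.undo_stream = []      # data that was overwritten, for rollback
        self.undo_metadata = []    # (offset, size, timestamp) per UNDO entry

    def enter(self, offset: int, data: bytes, timestamp: float) -> None:
        # Incoming write transactions are appended to the end of the DO streams
        # before the write is applied to the replica volume.
        self.do_stream.append(data)
        self.do_metadata.append((offset, len(data), timestamp))

    def apply(self, replica: bytearray) -> None:
        # Applying the oldest DO entry first preserves the overwritten bytes in
        # the UNDO streams so the replica can be rolled back to an earlier PIT.
        data = self.do_stream.pop(0)
        offset, size, timestamp = self.do_metadata.pop(0)
        self.undo_stream.append(bytes(replica[offset:offset + size]))
        self.undo_metadata.append((offset, size, timestamp))
        replica[offset:offset + size] = data
```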
Some described embodiments may validate that point-in-time (PIT) data replicas (e.g., data replicated to LU B 156) are valid and usable, for example to verify that the data replicas are not corrupt due to a system error or inconsistent due to violation of write order fidelity. Validating data replicas can be important, for example, in data replication systems employing incremental backup where an undetected error in an earlier data replica may lead to corruption of future data replicas.
In some conventional systems, validating data replicas can increase the journal lag for a transaction, which may increase a recovery time objective (RTO) of the data protection system (e.g., an elapsed time between replicas or PITs). In such conventional systems, if the journal lag time is significant, the journal may become full and unable to account for data changes due to subsequent transactions. Further, in such conventional systems, validating data replicas may consume system resources (e.g., processor time, memory, communication link bandwidth, etc.), resulting in reduced performance for system tasks.
Referring to
As shown in
The differential VMDKs 346 may be used to store differential snapshot data representative of changes that happened to data stored on production VMDK 342. In one example, a first differential VMDK 346 may include changes due to writes that occurred to production VMDK 342 from time t1 to time t2, a second differential VMDK 346 may include the changes due to writes that occurred to production VMDK 342 from time t2 to time t3, and so forth.
In some embodiments, differential VMDKs 346 may be thin provisioned. In such embodiments, thin provisioning may allocate storage space to volumes of a SAN in a flexible manner among multiple volumes based on a minimum space requirement for each volume at any given time.
In some embodiments, data protection system 100 may include one or more consistency groups. A consistency group may treat source volumes (e.g., production volumes) and target volumes (e.g., backup volumes) as a single logical entity for data replication and migration.
Journal 352 may be stored in journal VMDK 348. In some embodiments, journal 352 includes one or more delta marker streams (DMS) 362. Each DMS 362 may include metadata associated with data that may be different between one differential VMDK and another differential VMDK. In one example, DMS 362 may include metadata differences between a current copy of the production VMDK 342 and a copy currently stored in backup storage 304. In some embodiments, journal 352 does not include the actual data changes, but rather metadata associated with the changes. In some embodiments, the data of the changes may be stored in the differential VMDKs. Thus, some embodiments may perform data replication by employing thin volumes and tracking regions for replication with the thin devices, as described herein. Other embodiments may operate to replicate data directly (e.g., without employing thin devices) from a source storage to a target (or replica) storage.
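By way of a hedged example, the following sketch models a delta marker stream as a list of metadata-only records; the field names are assumptions intended only to illustrate that the DMS tracks which regions changed, while the changed data itself resides in the differential VMDKs.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class DeltaMarker:
    """Hypothetical metadata-only record of a changed region (no data payload)."""
    offset: int   # start of the changed region on the production VMDK
    length: int   # number of bytes changed

@dataclass
class DeltaMarkerStream:
    """Minimal sketch of a DMS: it records which regions differ between copies."""
    markers: List[DeltaMarker] = field(default_factory=list)

    def note_write(self, offset: int, length: int) -> None:
        # Only metadata is appended here; the data itself is stored elsewhere
        # (e.g., in a differential VMDK).
        self.markers.append(DeltaMarker(offset, length))
```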
Although not shown in
As described herein, data protection systems may employ continuous replication and/or snapshot replication to protect production data. For example, in continuous replication, every write I/O to a production volume is intercepted and sent to both the production volume and a replica volume. Thus, continuous replication may provide a very low Recovery Point Objective (RPO), meaning that data on a replica volume lags data on the production volume by only a short time period (e.g., a few seconds). RPO may be an amount of data that the user is willing to lose in case of a production disaster (e.g., an amount of time between replications). In the extreme case, synchronous continuous replication may provide an RPO of zero (e.g., data on the replica volume is the same as data on the production volume). Further, continuous replication may provide high granularity of points in time (PITs) for restoring a production volume (e.g., since continuous replication may generate a replica each time there is a write operation to the production volume).
In continuous replication, data is sent to the replica “inline” (e.g., as part of the write operation), thus, in continuous replication it may be unnecessary to read data from the production volume to generate a replica. However, since every write operation sent to the production volume is also sent to the replica volume, network bandwidth requirements of continuous replication can be high (e.g., as high as the bandwidth of peak writes to the production volume).
In snapshot replication, snapshot replicas of a production volume are periodically generated after a time interval (e.g., the snapshot interval), and changes in data may be tracked between consecutive snapshot replicas. For example, one or more write operations may modify data on the production volume between generation of snapshot replicas. In some embodiments, regions of the production volume that are modified, and the changed data written to the regions, may be tracked. When a new snapshot replica is generated, modified regions may be read from the production volume and sent to the replica volume.
If there were numerous overwrites to the same region during a given snapshot interval, these changes may be “batched” or “folded” such that only the final content of the region is sent to the replica volume. In such embodiments, the bandwidth required for snapshot replication can be lower than the bandwidth required for continuous replication since less data is sent to the replica volume. However, this reduction in required bandwidth may come at the expense of longer RPOs than continuous replication and, thus, larger granularity of PITs that can be recovered (e.g., the lag between replicas may be large, for example, several minutes or hours). Further, snapshot replication may require storage space to track changes between snapshots and may require reading modified data from the production volume, which may delay user access to the production volume. Some embodiments may employ a hybrid replication mode that combines elements of snapshot replication and elements of continuous replication, for example, as described in U.S. patent application Ser. No. 15/274,362 entitled “Hybrid Continuous and Snapshot Replication in a Storage System” filed on Sep. 23, 2016 and U.S. patent application Ser. No. 15/275,677 entitled “Multilevel Snapshot Replication for Hot and Cold Regions of a Storage System” filed on Sep. 26, 2016, both of which are assigned to EMC IP Holding Company LLC, and both of which are hereby incorporated by reference herein. Such hybrid replication may perform continuous replication for some regions of the production volume and snapshot replication for other regions of the production volume, for example based upon usage characteristics of the regions (e.g., based on how often the region is accessed, a priority associated with the region, etc.). Every I/O that is continuously replicated reduces the size of the next snapshot, and reduces the read overhead from the production volume when the next snapshot is generated (e.g., at the end of a current snapshot interval).
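The following sketch illustrates the folding behavior described above, assuming for simplicity that writes are tracked at a fixed region granularity and that each write covers a whole region; the class name and region size are illustrative assumptions.

```python
class SnapshotIntervalTracker:
    """Sketch of tracking changed regions during a snapshot interval, folding
    overwrites so only the final content of each region is shipped."""
    def __init__(self, region_size: int = 64 * 1024):
        self.region_size = region_size
        self.dirty = {}   # region index -> latest data written to that region

    def record_write(self, offset: int, data: bytes) -> None:
        # Overwrites to the same region replace the earlier entry ("folding"),
        # so repeated writes in one interval are sent to the replica only once.
        region = offset // self.region_size
        self.dirty[region] = data

    def flush(self, send_to_replica) -> None:
        # At the end of the interval, only the final content per region is sent.
        for region, data in sorted(self.dirty.items()):
            send_to_replica(region * self.region_size, data)
        self.dirty.clear()
```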
As will be described, some embodiments may perform hybrid replication (e.g., hybrid snapshot and continuous replication) based upon properties of one or more received write requests (or I/Os) and system conditions. For example, each received I/O may be processed to determine properties of the I/O, such as compressibility of the I/O, deduplication ability of the I/O, alignment of the I/O to segments of the storage volume, and other properties of each I/O. Further, as I/Os are received, system conditions may be determined, such as bandwidth usage, production volume processing load, replication volume processing load, replication volume storage availability, and other system conditions. For example, storage volume usage conditions may be determined, such as whether certain regions of the production volume are frequently accessed and overwritten (e.g., hot regions or hotspots) or infrequently accessed and overwritten (e.g., cold regions or cold spots), and replication may be adjusted based upon usage of individual regions of the production volume (e.g., frequently accessed regions may be replicated by snapshots, while infrequently accessed regions may be replicated continuously). Thus, in described embodiments, real-time properties of received I/Os and real-time operating conditions of the storage system may be considered to dynamically update replication settings of the storage system.
As will be described, some embodiments may determine a score or rating for each received I/O. This score or rating may then be used to classify I/Os for either continuous replication or snapshot replication. For example, the score may be based upon the determined properties of the I/O and/or the determined system conditions. The score may be compared to one or more classification thresholds to determine whether continuous replication or snapshot replication should be employed. For example, the classification threshold(s) may be based upon the determined system conditions. In some embodiments, the score of each I/O and the classification threshold(s) may be dynamically adjusted to adjust the replication settings of the storage system in real time. Thus, described embodiments may assign a score to each received I/O. The assigned score may depend upon properties of the I/O and on system conditions at the time the I/O is received. The classification threshold dynamically determines a mixture between snapshot replication and continuous replication over operating time of the storage system.
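For illustration only, the following sketch shows one way a per-I/O score might be computed from I/O properties and compared to a classification threshold. The property names, weights, and the [0, 1] normalization are assumptions chosen for the example, not prescribed values.

```python
from dataclasses import dataclass

@dataclass
class IOProperties:
    # Illustrative per-I/O properties; names are assumptions, not a fixed API.
    compressibility: float   # 0.0 (incompressible) .. 1.0 (highly compressible)
    is_duplicate: bool       # deduplication ability
    is_aligned: bool         # aligned to storage segments
    cg_priority: float       # 0.0 (low) .. 1.0 (high) consistency-group priority

def score_io(props: IOProperties) -> float:
    """Assign a normalized score in [0, 1]; higher means more suitable for
    continuous replication (the convention described herein)."""
    score = 0.0
    score += 0.35 * props.compressibility            # compressible I/Os cost less WAN bandwidth
    score += 0.25 * (1.0 if props.is_duplicate else 0.0)
    score += 0.20 * (1.0 if props.is_aligned else 0.0)
    score += 0.20 * props.cg_priority
    return min(max(score, 0.0), 1.0)

def choose_replication(props: IOProperties, threshold: float) -> str:
    # Scores at or above the (adaptive) classification threshold are replicated
    # continuously; scores below it are deferred to the next snapshot.
    return "continuous" if score_io(props) >= threshold else "snapshot"
```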
As described herein, in some embodiments, a higher score for an I/O may indicate that the I/O is more suitable for continuous replication and a lower score for an I/O may indicate that the I/O is more suitable for snapshot replication. However, other embodiments may employ a lower score for an I/O to indicate that the I/O is more suitable for continuous replication and a higher score for an I/O to indicate that the I/O is more suitable for snapshot replication. In embodiments, scores may be normalized within a specific range, such as 0 to 1 or 0 to 100. It is understood that any suitable range may be used and that other embodiments may use other scoring arrangements.
As described, various properties may affect the score of each I/O and/or the classification threshold. For example, such properties may include I/O compressibility, I/O deduplication ability, I/O alignment to the storage volume, bandwidth balancing, priority of the I/O (or priority of a consistency group associated with the I/O), work load of the production volume, work load of the replication volume, storage space availability of the replication volume, and usage patterns of the production volume. In embodiments, the replication system may perform analytics on the raw data of the incoming throughput and/or on the characteristics of the environment, such as back-end storage array capabilities and communication channel bandwidth, to adapt the values for the I/O scores and thresholds as described herein, for example, by following rules in order to achieve desirable replication results, e.g., low RPO with low WAN utilization, low read overhead on the production array, and the like.
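Continuing the illustration, the following sketch shows how the classification threshold might be adapted from monitored operating conditions; the inputs and weights are illustrative assumptions only.

```python
def adapt_threshold(base_threshold: float,
                    wan_utilization: float,    # 0.0 .. 1.0 fraction of WAN bandwidth in use
                    production_load: float     # 0.0 .. 1.0 production-side processor load
                    ) -> float:
    """Sketch of adapting the classification threshold to operating conditions.
    With the scoring convention above, raising the threshold pushes more I/Os
    toward snapshot replication, and lowering it pushes more I/Os toward
    continuous replication. The adjustment weights are illustrative assumptions."""
    threshold = base_threshold
    # A saturated WAN favors snapshot replication (folded writes use less bandwidth).
    threshold += 0.3 * wan_utilization
    # A heavily loaded production volume favors continuous replication, which
    # avoids the read overhead of generating the next snapshot.
    threshold -= 0.2 * production_load
    return min(max(threshold, 0.0), 1.0)
```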
Referring to
At block 408, if the score(s) assigned to a given I/O indicate that the given I/O is suitable for snapshot replication, then the given I/O may be replicated by snapshot replication at block 410. If the score(s) assigned to the given I/O indicate that the given I/O is suitable for continuous replication, then the given I/O may be replicated by continuous replication at block 412. As described herein, one type of scoring may increase the score of an I/O the more suitable the I/O is for continuous replication, although other types of scoring may be employed. Thus, in some embodiments, at block 408, if the score(s) are not above the associated threshold(s), then the given I/O may be replicated by snapshot replication at block 410, and if the score(s) are above the associated threshold(s), then the given I/O may be replicated by continuous replication at block 412. However, other embodiments may use a lower score (e.g., below a threshold) to determine to perform continuous replication and a higher score (e.g., at or above a threshold) to determine to perform snapshot replication. Some embodiments may employ a plurality of scores and thresholds. In either case, process 400 may return to block 404 to process subsequently received I/Os.
Referring to
Referring to
At block 606, the deduplication ability of each received I/O may be determined. I/O deduplication ability may be determined by the replication appliance (e.g., DPA 112) checking whether a given I/O is a duplicate of prior I/O(s). If the I/O is a duplicate, sending the I/O to the replication volume consumes little WAN bandwidth since only a reference to the content is sent. Block 606 is described in greater detail in regard to
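One possible way to approximate such a duplicate check is sketched below, using content fingerprints kept in a bounded index; the use of SHA-256 fingerprints and the index size are assumptions for illustration, not the appliance's actual deduplication mechanism.

```python
import hashlib
from collections import OrderedDict

class DedupIndex:
    """Sketch of a bounded fingerprint index a replication appliance might keep
    to judge whether an I/O's payload duplicates recently seen content."""
    def __init__(self, capacity: int = 100_000):
        self.capacity = capacity
        self.fingerprints = OrderedDict()   # digest -> None, in insertion order

    def is_duplicate(self, payload: bytes) -> bool:
        digest = hashlib.sha256(payload).digest()
        if digest in self.fingerprints:
            # Only a short content reference would need to cross the WAN.
            self.fingerprints.move_to_end(digest)
            return True
        self.fingerprints[digest] = None
        if len(self.fingerprints) > self.capacity:
            self.fingerprints.popitem(last=False)   # evict the oldest fingerprint
        return False
```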
At block 608, the alignment of the I/O to segments of the storage volume may be determined. For example, storage volumes, such as storage 108 and 120 of
At block 610, a priority level associated with the I/O may be determined. For example, some embodiments may determine a consistency group (CG) priority in assigning a score to an I/O. As described, a CG may be a group of production volumes and replication volumes that are treated as a single logical entity. Each I/O may be associated with a given CG (e.g., based upon an address of a write request, etc.). Some CGs may be assigned a priority level, for example because a given CG contains more critical data. Block 610 is described in greater detail in regard to
Referring to
At block 706, a processor usage (e.g., of a processor of host 104 and/or host 116) may be determined. The processor usage may be determined as, for example, a percentage value of total available processor capacity (e.g., between 0%, where the processor is idle, and 100%, where the processor is completely loaded).
At block 708, usage patterns of the storage volume may be determined. For example, some embodiments may determine how frequently given regions of the storage volume(s) are written. For example, some embodiments may determine how frequently volume regions are written similarly as described in U.S. patent application Ser. No. 15/274,362 entitled “Hybrid Continuous and Snapshot Replication in a Storage System” filed on Sep. 23, 2016 and U.S. patent application Ser. No. 15/275,677 entitled “Multilevel Snapshot Replication for Hot and Cold Regions of a Storage System” filed on Sep. 26, 2016, both of which are hereby incorporated by reference herein. Block 708 is described in greater detail in regard to
Referring to
At block 810, if the I/O is a duplicate, or has deduplication ability, for example as determined at block 606 of
At block 816, if the I/O is aligned to storage segments of the storage volume(s), for example as determined at block 608 of
Referring to
At block 910, if there is not excess processor capacity, e.g., production side processor capacity, for example as determined at block 706 of
At block 916, bandwidth balancing may be performed. Block 916 is described in greater detail in regard to
Referring to
Predictive compression may be employed to determine whether a compression operation will reach one or more compression thresholds, and may stop performing compression if the compression operation is not likely to successfully reach at least one of the compression thresholds (e.g., if the size of a given data set is unlikely to reach a compression threshold after performing the compression operation). Predictive compression may reduce system resource consumption (e.g., processing capacity to perform compression) by reducing an amount of time spent performing compression operations on data sets that are uncompressible or cannot be sufficiently compressed.
As described, some embodiments may provide predictive compression that may predict whether a compression operation on a given set of I/O data is unlikely to successfully compress the given set of payload data beyond a compression threshold. For example, one manner of determining whether an I/O is compressible is to compress a first portion of the data and determine how much the data was compressed. As shown in
As an example, if the I/O data is 16 KB and a first compression threshold is 8 KB, some embodiments may perform compression on a first amount of I/O data, such as 2 KB. After processing the first amount of I/O data, a likelihood may be determined whether the compression operation will reach at least one of the compression thresholds. For example, if, after processing the first 2 KB of the 16 KB of I/O data, the compression operation has only reduced the 2 KB of I/O data to 1.9 KB of compressed I/O data, the achieved compression ratio is 1-(1.9/2)=5%. Based upon the achieved compression ratio, it may be determined that reaching the one or more compression thresholds is unlikely. For example, achieving only a 5% compression ratio for the entire 16 KB of I/O data would reduce the I/O data from 16 KB to 15.2 KB, which would not reach the first compression threshold (8 KB). Thus, this particular I/O may be determined to not be compressible and may receive a low score, since a poorly compressible I/O may require a significant amount of bandwidth for replication and may therefore be better suited for snapshot replication. In addition, if a poorly compressible I/O is replicated with a snapshot, it may be overwritten before the snapshot is generated.
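The following sketch reproduces the arithmetic of this example as code, assuming zlib as the compressor and a 2 KB leading sample; both are illustrative assumptions rather than the described system's actual compression engine.

```python
import zlib

def predict_compressible(io_data: bytes,
                         threshold_bytes: int,
                         sample_bytes: int = 2 * 1024) -> bool:
    """Sketch of the predictive-compression check described above: compress only
    a leading sample and extrapolate whether the full payload is likely to reach
    the compression threshold. Parameter names and the use of zlib are assumptions."""
    sample = io_data[:sample_bytes]
    if not sample:
        return False
    compressed = zlib.compress(sample)
    achieved_ratio = 1.0 - (len(compressed) / len(sample))   # e.g., 1 - 1.9/2 = 5%
    projected_size = len(io_data) * (1.0 - achieved_ratio)   # e.g., 16 KB -> 15.2 KB at 5%
    return projected_size <= threshold_bytes

# Matching the figures above: a 16 KB payload, an 8 KB threshold, and a 2 KB
# sample that compresses by only ~5% would be judged not worth compressing.
```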
In other embodiments, compression prediction may be based on the compressibility of adjacent I/Os. In further embodiments, it can be assumed that the overall compressibility is uniformly distributed, so it is possible to sample the compressibility of some of the I/Os in order to determine whether it is worthwhile to compress all of the I/Os. It is understood that any suitable technique for compression prediction may be used.
Referring to
Referring to
Thus, aligned I/Os (or aligned portions of I/Os) may be replicated continuously (e.g., assigned a higher score), providing low RPO and high granularity of data recovery (e.g., PITs), while unaligned I/Os (or unaligned portions of I/Os) may be replicated by snapshots (e.g., assigned a lower score).
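As a hedged illustration, the following sketch separates an I/O into segment-aligned and unaligned portions so that aligned portions can be scored for continuous replication and unaligned portions deferred to snapshot replication; the 8 KB segment size and the function name are assumptions for the example.

```python
def split_by_alignment(offset: int, length: int, segment_size: int = 8 * 1024):
    """Return (aligned, unaligned) lists of (offset, length) portions of an I/O,
    where aligned portions cover whole storage segments."""
    end = offset + length
    first_boundary = -(-offset // segment_size) * segment_size   # round offset up
    last_boundary = (end // segment_size) * segment_size          # round end down
    aligned, unaligned = [], []
    if first_boundary >= last_boundary:
        unaligned.append((offset, length))            # no whole segment is covered
    else:
        if offset < first_boundary:
            unaligned.append((offset, first_boundary - offset))     # leading partial segment
        aligned.append((first_boundary, last_boundary - first_boundary))
        if end > last_boundary:
            unaligned.append((last_boundary, end - last_boundary))  # trailing partial segment
    return aligned, unaligned
```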
As shown in
Although not shown in
Referring to
Thus, as described herein, when several I/Os are received concurrently by the replication appliance (e.g., DPA 112), and there are limited system resources (e.g. WAN bandwidth, processor capacity, etc.), I/Os to CGs with a higher priority (e.g., a priority level at or above a priority threshold) may be scored to be continuously replicated (e.g., given a higher score than I/Os to CGs with lower priority) since continuous replication helps to meet the RPO (e.g., data on the replication volume lags data on the production volume by only a short time period). Similarly, when there are limited system resources (e.g. WAN bandwidth, processor capacity, etc.), I/Os to CGs with a lower priority (e.g., a priority level below a priority threshold) may be scored to be snapshot replicated (e.g., given a lower score than I/Os to CGs with higher priority) since snapshot replication reduces bandwidth and processor consumption, but also increases the replication volume lag of the production volume (e.g., might not meet the RPO). In some embodiments, the priority level of each CG may be configured by a system administrator of the storage system.
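For illustration, the following sketch shows one way a consistency-group priority might contribute to an I/O's score under resource pressure; the boost magnitude and the threshold semantics are assumptions only.

```python
def priority_adjustment(cg_priority: int,
                        priority_threshold: int,
                        resources_constrained: bool,
                        boost: float = 0.15) -> float:
    """Sketch of the consistency-group priority contribution to an I/O's score.
    Under resource pressure, I/Os of CGs at or above the priority threshold are
    nudged toward continuous replication and the rest toward snapshot replication."""
    if not resources_constrained:
        return 0.0                   # no adjustment when resources are plentiful
    if cg_priority >= priority_threshold:
        return +boost                # help high-priority CGs meet their RPO
    return -boost                    # lower-priority CGs fall back to snapshots
```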
Referring to
At block 1406, a region associated with a given I/O is determined. For example, in some embodiments, I/O requests (e.g., write requests) are tracked per region. For example, a number of received I/Os for each region and/or an amount of data written per region may be monitored. At block 1408, regions may be ranked based upon the tracked I/O requests (e.g., by number of write requests and/or amount of data written to each region).
In some embodiments, at blocks 1410 and 1412, regions may be identified as hotspots or non-hotspots. For example, regions having a number of I/Os below a threshold, or a percentage of the total number of regions having the fewest relative I/Os (or the least relative data written), may be scored so as to be replicated via continuous replication (e.g., because these regions do not receive frequent write requests, continuous replication would not use much system resources). In other words, in some embodiments, at block 1410, regions having relatively few I/Os as determined at block 1408 are identified as non-hotspot regions. Similarly, at block 1412, regions having a number of I/Os at or above a threshold, or a percentage of the total number of regions having the most relative I/Os (or the most relative data written), may be scored so as to be protected via snapshot replication (e.g., because these regions receive frequent write requests, continuous replication would consume many system resources). In other words, in some embodiments, at block 1412, regions having relatively many write requests as determined at block 1408 are identified as hotspot regions.
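A minimal sketch of such hotspot identification is shown below, assuming a fixed region size and a fixed “hot” fraction; both parameters, and the simple counting approach, are illustrative assumptions.

```python
from collections import Counter

class RegionHeatTracker:
    """Sketch of ranking production-volume regions by write activity and labeling
    the busiest fraction as hotspots (snapshot candidates)."""
    def __init__(self, region_size: int = 1024 * 1024, hot_fraction: float = 0.2):
        self.region_size = region_size
        self.hot_fraction = hot_fraction
        self.write_counts = Counter()

    def record_write(self, offset: int) -> None:
        self.write_counts[offset // self.region_size] += 1

    def hotspots(self) -> set:
        # Rank regions by write count and mark the top fraction as hotspots; the
        # remaining (non-hotspot) regions are better suited to continuous replication.
        ranked = [region for region, _ in self.write_counts.most_common()]
        cutoff = max(1, int(len(ranked) * self.hot_fraction)) if ranked else 0
        return set(ranked[:cutoff])
```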
At block 1414, some embodiments may also monitor production volume utilization in assigning a score to an I/O. For example, in some embodiments, the replication appliance (e.g., DPA 112) may monitor the utilization of the production volume (e.g., whether the production volume is processing many I/O requests). If production volume utilization is high, some embodiments may adjust the score of given I/Os and/or adapt the score threshold to cause more I/Os to be replicated continuously and, thus, reduce the overhead of reading data from the production volume to generate snapshots during snapshot replication. At block 1416, process 708′ completes.
Referring to
As shown in
In some described embodiments, hosts 104 and 116 of
The processes described herein are not limited to use with the hardware and software of
The processes described herein are not limited to the specific embodiments described. For example, the described processes are not limited to the specific processing order shown in the figures. Rather, any of the blocks of the processes may be re-ordered, combined or removed, performed in parallel or in serial, as necessary, to achieve the results set forth herein.
Processor 1602 may be implemented by one or more programmable processors executing one or more computer programs to perform the functions of the system. As used herein, the term “processor” describes an electronic circuit that performs a function, an operation, or a sequence of operations. The function, operation, or sequence of operations may be hard coded into the electronic circuit or soft coded by way of instructions held in a memory device. A “processor” may perform the function, operation, or sequence of operations using digital values or using analog signals. In some embodiments, the “processor” can be embodied in one or more application specific integrated circuits (ASICs). In some embodiments, the “processor” may be embodied in one or more microprocessors with associated program memory. In some embodiments, the “processor” may be embodied in one or more discrete electronic circuits. The “processor” may be analog, digital or mixed-signal. In some embodiments, the “processor” may be one or more physical processors or one or more “virtual” (e.g., remotely located or “cloud”) processors.
Various functions of circuit elements may also be implemented as processing blocks in a software program. Such software may be employed in, for example, one or more digital signal processors, microcontrollers, or general purpose computers. Described embodiments may be implemented in hardware, a combination of hardware and software, software, or software in execution by one or more physical or virtual processors.
Some embodiments may be implemented in the form of methods and apparatuses for practicing those methods. Described embodiments may also be implemented in the form of program code, for example, stored in a storage medium, loaded into and/or executed by a machine, or transmitted over some transmission medium or carrier, such as over electrical wiring or cabling, through fiber optics, or via electromagnetic radiation. A non-transitory machine-readable medium may include but is not limited to tangible media, such as magnetic recording media including hard drives, floppy diskettes, and magnetic tape media, optical recording media including compact discs (CDs) and digital versatile discs (DVDs), solid state memory such as flash memory, hybrid magnetic and solid state memory, non-volatile memory, volatile memory, and so forth, but does not include a transitory signal per se. When embodied in a non-transitory machine-readable medium and the program code is loaded into and executed by a machine, such as a computer, the machine becomes an apparatus for practicing the method.
When implemented on one or more processing devices, the program code segments combine with the processor to provide a unique device that operates analogously to specific logic circuits. Such processing devices may include, for example, a general-purpose microprocessor, a digital signal processor (DSP), a reduced instruction set computer (RISC), a complex instruction set computer (CISC), an application specific integrated circuit (ASIC), a field programmable gate array (FPGA), a programmable logic array (PLA), a microcontroller, an embedded controller, a multi-core processor, and/or others, including combinations of one or more of the above. Described embodiments may also be implemented in the form of a bitstream or other sequence of signal values electrically or optically transmitted through a medium, stored magnetic-field variations in a magnetic recording medium, etc., generated using a method and/or an apparatus as recited in the claims.
Various elements, which are described in the context of a single embodiment, may also be provided separately or in any suitable subcombination. It will be further understood that various changes in the details, materials, and arrangements of the parts that have been described and illustrated herein may be made by those skilled in the art without departing from the scope of the following claims.
Number | Name | Date | Kind |
---|---|---|---|
7203741 | Marco et al. | Apr 2007 | B2 |
7719443 | Natanzon | May 2010 | B1 |
7840536 | Ahal et al. | Nov 2010 | B1 |
7840662 | Natanzon | Nov 2010 | B1 |
7844856 | Ahal et al. | Nov 2010 | B1 |
7860836 | Natanzon et al. | Dec 2010 | B1 |
7882286 | Natanzon et al. | Feb 2011 | B1 |
7934262 | Natanzon et al. | Apr 2011 | B1 |
7958372 | Natanzon | Jun 2011 | B1 |
8037162 | Marco et al. | Oct 2011 | B2 |
8041940 | Natanzon et al. | Oct 2011 | B1 |
8060713 | Natanzon | Nov 2011 | B1 |
8060714 | Natanzon | Nov 2011 | B1 |
8103937 | Natanzon et al. | Jan 2012 | B1 |
8108634 | Natanzon et al. | Jan 2012 | B1 |
8214612 | Natanzon | Jul 2012 | B1 |
8250149 | Marco et al. | Aug 2012 | B2 |
8271441 | Natanzon et al. | Sep 2012 | B1 |
8271447 | Natanzon et al. | Sep 2012 | B1 |
8332687 | Natanzon et al. | Dec 2012 | B1 |
8335761 | Natanzon | Dec 2012 | B1 |
8335771 | Natanzon et al. | Dec 2012 | B1 |
8341115 | Natanzon et al. | Dec 2012 | B1 |
8370648 | Natanzon | Feb 2013 | B1 |
8380885 | Natanzon | Feb 2013 | B1 |
8392680 | Natanzon et al. | Mar 2013 | B1 |
8429362 | Natanzon et al. | Apr 2013 | B1 |
8433869 | Natanzon et al. | Apr 2013 | B1 |
8438135 | Natanzon et al. | May 2013 | B1 |
8464101 | Natanzon et al. | Jun 2013 | B1 |
8478955 | Natanzon et al. | Jul 2013 | B1 |
8495304 | Natanzon et al. | Jul 2013 | B1 |
8510279 | Natanzon et al. | Aug 2013 | B1 |
8521691 | Natanzon | Aug 2013 | B1 |
8521694 | Natanzon | Aug 2013 | B1 |
8543609 | Natanzon | Sep 2013 | B1 |
8583885 | Natanzon | Nov 2013 | B1 |
8600945 | Natanzon et al. | Dec 2013 | B1 |
8601085 | Ives et al. | Dec 2013 | B1 |
8627012 | Derbeko et al. | Jan 2014 | B1 |
8683592 | Dotan et al. | Mar 2014 | B1 |
8694700 | Natanzon et al. | Apr 2014 | B1 |
8706700 | Natanzon et al. | Apr 2014 | B1 |
8712962 | Natanzon et al. | Apr 2014 | B1 |
8719497 | Don et al. | May 2014 | B1 |
8725691 | Natanzon | May 2014 | B1 |
8725692 | Natanzon et al. | May 2014 | B1 |
8726066 | Natanzon et al. | May 2014 | B1 |
8738813 | Natanzon et al. | May 2014 | B1 |
8745004 | Natanzon et al. | Jun 2014 | B1 |
8751828 | Raizen et al. | Jun 2014 | B1 |
8769336 | Natanzon et al. | Jul 2014 | B1 |
8805786 | Natanzon | Aug 2014 | B1 |
8806161 | Natanzon | Aug 2014 | B1 |
8825848 | Dotan et al. | Sep 2014 | B1 |
8832399 | Natanzon et al. | Sep 2014 | B1 |
8850143 | Natanzon | Sep 2014 | B1 |
8850144 | Natanzon et al. | Sep 2014 | B1 |
8862546 | Natanzon et al. | Oct 2014 | B1 |
8892835 | Natanzon et al. | Nov 2014 | B1 |
8898112 | Natanzon et al. | Nov 2014 | B1 |
8898409 | Natanzon et al. | Nov 2014 | B1 |
8898515 | Natanzon | Nov 2014 | B1 |
8898519 | Natanzon et al. | Nov 2014 | B1 |
8914595 | Natanzon | Dec 2014 | B1 |
8924668 | Natanzon | Dec 2014 | B1 |
8930500 | Marco et al. | Jan 2015 | B2 |
8930947 | Derbeko et al. | Jan 2015 | B1 |
8935498 | Natanzon | Jan 2015 | B1 |
8949180 | Natanzon et al. | Feb 2015 | B1 |
8954673 | Natanzon et al. | Feb 2015 | B1 |
8954796 | Cohen et al. | Feb 2015 | B1 |
8959054 | Natanzon | Feb 2015 | B1 |
8977593 | Natanzon et al. | Mar 2015 | B1 |
8977826 | Meiri et al. | Mar 2015 | B1 |
8996460 | Frank et al. | Mar 2015 | B1 |
8996461 | Natanzon et al. | Mar 2015 | B1 |
8996827 | Natanzon | Mar 2015 | B1 |
9003138 | Natanzon et al. | Apr 2015 | B1 |
9026696 | Natanzon et al. | May 2015 | B1 |
9031913 | Natanzon | May 2015 | B1 |
9032160 | Natanzon et al. | May 2015 | B1 |
9037818 | Natanzon et al. | May 2015 | B1 |
9063994 | Natanzon et al. | Jun 2015 | B1 |
9069479 | Natanzon | Jun 2015 | B1 |
9069709 | Natanzon et al. | Jun 2015 | B1 |
9081754 | Natanzon et al. | Jul 2015 | B1 |
9081842 | Natanzon et al. | Jul 2015 | B1 |
9087008 | Natanzon | Jul 2015 | B1 |
9087112 | Natanzon et al. | Jul 2015 | B1 |
9104529 | Derbeko et al. | Aug 2015 | B1 |
9110914 | Frank et al. | Aug 2015 | B1 |
9116811 | Derbeko et al. | Aug 2015 | B1 |
9128628 | Natanzon et al. | Sep 2015 | B1 |
9128855 | Natanzon et al. | Sep 2015 | B1 |
9134914 | Derbeko et al. | Sep 2015 | B1 |
9135119 | Natanzon et al. | Sep 2015 | B1 |
9135120 | Natanzon | Sep 2015 | B1 |
9146878 | Cohen et al. | Sep 2015 | B1 |
9152339 | Cohen et al. | Oct 2015 | B1 |
9152578 | Saad et al. | Oct 2015 | B1 |
9152814 | Natanzon | Oct 2015 | B1 |
9158578 | Derbeko et al. | Oct 2015 | B1 |
9158630 | Natanzon | Oct 2015 | B1 |
9160526 | Raizen et al. | Oct 2015 | B1 |
9177670 | Derbeko et al. | Nov 2015 | B1 |
9189339 | Cohen et al. | Nov 2015 | B1 |
9189341 | Natanzon et al. | Nov 2015 | B1 |
9201736 | Moore et al. | Dec 2015 | B1 |
9223659 | Natanzon et al. | Dec 2015 | B1 |
9225529 | Natanzon et al. | Dec 2015 | B1 |
9235481 | Natanzon et al. | Jan 2016 | B1 |
9235524 | Derbeko et al. | Jan 2016 | B1 |
9235632 | Natanzon | Jan 2016 | B1 |
9244997 | Natanzon et al. | Jan 2016 | B1 |
9256605 | Natanzon | Feb 2016 | B1 |
9274718 | Natanzon et al. | Mar 2016 | B1 |
9275063 | Natanzon | Mar 2016 | B1 |
9286052 | Solan et al. | Mar 2016 | B1 |
9305009 | Bono et al. | Apr 2016 | B1 |
9323750 | Natanzon et al. | Apr 2016 | B2 |
9330155 | Bono et al. | May 2016 | B1 |
9336094 | Wolfson et al. | May 2016 | B1 |
9336230 | Natanzon | May 2016 | B1 |
9367260 | Natanzon | Jun 2016 | B1 |
9378096 | Erel et al. | Jun 2016 | B1 |
9378219 | Bono et al. | Jun 2016 | B1 |
9378261 | Bono et al. | Jun 2016 | B1 |
9383937 | Frank et al. | Jul 2016 | B1 |
9389800 | Natanzon et al. | Jul 2016 | B1 |
9405481 | Cohen et al. | Aug 2016 | B1 |
9405684 | Derbeko et al. | Aug 2016 | B1 |
9405765 | Natanzon | Aug 2016 | B1 |
9411535 | Shemer et al. | Aug 2016 | B1 |
9459804 | Natanzon et al. | Oct 2016 | B1 |
9460028 | Raizen et al. | Oct 2016 | B1 |
9471579 | Natanzon | Oct 2016 | B1 |
9477407 | Marshak et al. | Oct 2016 | B1 |
9501542 | Natanzon | Nov 2016 | B1 |
9507732 | Natanzon et al. | Nov 2016 | B1 |
9507845 | Natanzon et al. | Nov 2016 | B1 |
9514138 | Natanzon et al. | Dec 2016 | B1 |
9524218 | Veprinsky et al. | Dec 2016 | B1 |
9529885 | Natanzon et al. | Dec 2016 | B1 |
9535800 | Natanzon et al. | Jan 2017 | B1 |
9535801 | Natanzon et al. | Jan 2017 | B1 |
9547459 | BenHanokh et al. | Jan 2017 | B1 |
9547591 | Natanzon et al. | Jan 2017 | B1 |
9552405 | Moore et al. | Jan 2017 | B1 |
9557921 | Cohen et al. | Jan 2017 | B1 |
9557925 | Natanzon | Jan 2017 | B1 |
9563517 | Natanzon et al. | Feb 2017 | B1 |
9563684 | Natanzon et al. | Feb 2017 | B1 |
9575851 | Natanzon et al. | Feb 2017 | B1 |
9575857 | Natanzon | Feb 2017 | B1 |
9575894 | Natanzon et al. | Feb 2017 | B1 |
9582382 | Natanzon et al. | Feb 2017 | B1 |
9588703 | Natanzon et al. | Mar 2017 | B1 |
9588847 | Natanzon et al. | Mar 2017 | B1 |
9594822 | Natanzon et al. | Mar 2017 | B1 |
9600377 | Cohen et al. | Mar 2017 | B1 |
9619543 | Natanzon et al. | Apr 2017 | B1 |
9632881 | Natanzon | Apr 2017 | B1 |
9665305 | Natanzon et al. | May 2017 | B1 |
9710177 | Natanzon | Jul 2017 | B1 |
9720618 | Panidis et al. | Aug 2017 | B1 |
9722788 | Natanzon et al. | Aug 2017 | B1 |
9727429 | Moore et al. | Aug 2017 | B1 |
9733969 | Derbeko et al. | Aug 2017 | B2 |
9737111 | Lustik | Aug 2017 | B2 |
9740572 | Natanzon et al. | Aug 2017 | B1 |
9740573 | Natanzon | Aug 2017 | B1 |
9740880 | Natanzon et al. | Aug 2017 | B1 |
9749300 | Cale et al. | Aug 2017 | B1 |
9772789 | Natanzon et al. | Sep 2017 | B1 |
9798472 | Natanzon et al. | Oct 2017 | B1 |
9798490 | Natanzon | Oct 2017 | B1 |
9804934 | Natanzon et al. | Oct 2017 | B1 |
9811431 | Natanzon et al. | Nov 2017 | B1 |
9823865 | Natanzon et al. | Nov 2017 | B1 |
9823973 | Natanzon | Nov 2017 | B1 |
9832261 | Don et al. | Nov 2017 | B2 |
9846698 | Panidis et al. | Dec 2017 | B1 |
9875042 | Natanzon et al. | Jan 2018 | B1 |
9875162 | Panidis et al. | Jan 2018 | B1 |
9880777 | Bono et al. | Jan 2018 | B1 |
9881014 | Bono et al. | Jan 2018 | B1 |
9910620 | Veprinsky et al. | Mar 2018 | B1 |
9910621 | Golan et al. | Mar 2018 | B1 |
9910735 | Natanzon | Mar 2018 | B1 |
9910739 | Natanzon et al. | Mar 2018 | B1 |
9917854 | Natanzon et al. | Mar 2018 | B2 |
9921955 | Derbeko et al. | Mar 2018 | B1 |
9933957 | Cohen et al. | Apr 2018 | B1 |
9934302 | Cohen et al. | Apr 2018 | B1 |
9940205 | Natanzon | Apr 2018 | B2 |
9940460 | Derbeko et al. | Apr 2018 | B1 |
9946649 | Natanzon et al. | Apr 2018 | B1 |
9959061 | Natanzon et al. | May 2018 | B1 |
9965306 | Natanzon et al. | May 2018 | B1 |
9990256 | Natanzon | Jun 2018 | B1 |
9996539 | Natanzon | Jun 2018 | B1 |
10002173 | Ramachandran | Jun 2018 | B1 |
10007626 | Saad et al. | Jun 2018 | B1 |
10019194 | Baruch et al. | Jul 2018 | B1 |
10025931 | Natanzon et al. | Jul 2018 | B1 |
10031675 | Veprinsky et al. | Jul 2018 | B1 |
10031690 | Panidis et al. | Jul 2018 | B1 |
10031692 | Elron et al. | Jul 2018 | B2 |
10031703 | Natanzon et al. | Jul 2018 | B1 |
10037251 | Bono et al. | Jul 2018 | B1 |
10042579 | Natanzon | Aug 2018 | B1 |
10042751 | Veprinsky et al. | Aug 2018 | B1 |
10055146 | Natanzon et al. | Aug 2018 | B1 |
10055148 | Natanzon et al. | Aug 2018 | B1 |
10061666 | Natanzon et al. | Aug 2018 | B1 |
10067694 | Natanzon et al. | Sep 2018 | B1 |
10067837 | Natanzon et al. | Sep 2018 | B1 |
10078459 | Natanzon et al. | Sep 2018 | B1 |
10082980 | Cohen et al. | Sep 2018 | B1 |
10083093 | Natanzon et al. | Sep 2018 | B1 |
10095489 | Lieberman et al. | Oct 2018 | B1 |
10101943 | Ayzenberg et al. | Oct 2018 | B1 |
20050172092 | Lam | Aug 2005 | A1 |
20060195666 | Maruyama | Aug 2006 | A1 |
20090055593 | Satoyama | Feb 2009 | A1 |
20090313311 | Hoffmann | Dec 2009 | A1 |
20130103893 | Lee et al. | Apr 2013 | A1 |
20140195640 | Kaiser et al. | Jul 2014 | A1 |
20150379107 | Rank | Dec 2015 | A1 |
20160139836 | Nallathambi et al. | May 2016 | A1 |
20180143774 | Carson | May 2018 | A1 |
Entry |
---|
Response to U.S. Non-Final Office Action dated Apr. 9, 2018 for U.S. Appl. No. 15/274,362; Response filed Jun. 26, 2018; 13 pages. |
U.S. Non-Final Office Action dated Apr. 9, 2018 for U.S. Appl. No. 15/274,362; 14 pages. |
EMC Corporation, “EMC Recoverpoint/EX;” Applied Technology; White Paper; Apr. 2012; 17 Pages. |
Final Office Action dated Nov. 2, 2018 for U.S. Appl. No. 15/274,362; 21 Pages. |
RCE and Response to Final Office Action dated Nov. 2, 2018 for U.S. Appl. No. 15/274,362, filed Nov. 26, 2018; 14 Pages. |
U.S. Non-Final Office Action dated Apr. 9, 2018 for U.S. Appl. No. 15/274,362; 15 Pages. |
U.S. Appl. No. 15/274,362, filed Sep. 23, 2016, Baruch, et al. |