This application relates generally to devices, systems, and methods usable with data storage in computer systems. More particularly, this application relates at least to using data analytics to predict data center disasters to enable proactive data protection.
Computer data is vital to today's organizations and a significant part of protection against disasters is focused on data protection. As solid-state memory has advanced to the point where cost of memory has become a relatively insignificant factor, organizations can afford to operate with systems that store and process terabytes of data. One example of a data protection system is a distributed storage system. A distributed storage system may include a plurality of storage devices (e.g., storage arrays) to provide data storage to a plurality of nodes. The plurality of storage devices and the plurality of nodes may be situated in the same physical location, or in one or more physically remote locations. A distributed storage system may include data protection systems that back up production site data by replicating production site data on a secondary backup storage system. The production site data may be replicated on a periodic basis and/or may be replicated as changes are made to the production site data. Some existing data protection systems may provide continuous data protection, meaning that every change made to data is backed up. The backup storage system may be situated in the same physical location as the production storage system, or in a physically remote location.
This Summary is provided to introduce a selection of concepts in a simplified form, to provide a basic understanding of one or more embodiments that are further described below in the Detailed Description. This Summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used to limit the scope of the claimed subject matter.
In some embodiments, a computer-implemented method is provided. First information is received from at least a first data source. Based at least in part on analysis of the received first information, a determination of a first risk of a first adverse event is made, the risk affecting a first entity associated with a first location. Based at least in part on the first risk, at least a first impact from the first adverse event on the first entity is determined. At least a first action is dynamically caused to occur before completion of the first adverse event, where the first action is configured to substantially mitigate the first impact.
In another aspect, a system is provided, comprising a processor and a memory. The memory is in operable communication with the processor. The memory stores computer program code that when executed on the processor causes the processor to perform operations. The processor receives first information from at least a first data source and determines, based at least in part on analysis of the received first information, a first risk of a first adverse event, the risk affecting a first entity associated with a first location. The processor determines, based at least in part on the first risk, at least a first impact from the first adverse event on the first entity. The processor dynamically causes at least a first action to occur before completion of the first adverse event, the first action configured to substantially mitigate the first impact.
Details relating to this and other embodiments are described more fully herein.
Objects, aspects, features, and advantages of embodiments disclosed herein will become more fully apparent from the following detailed description, the appended claims, and the accompanying drawings in which like reference numerals identify similar or identical elements. Reference numerals that are introduced in the specification in association with a drawing figure may be repeated in one or more subsequent figures without additional description in the specification in order to provide context for other features. For clarity, not every element may be labeled in every figure. The drawings are not necessarily to scale, emphasis instead being placed upon illustrating embodiments, principles, and concepts. The drawings are not meant to limit the scope of the claims included herewith.
At least some embodiments of the concepts, structures, and techniques sought to be protected herein are described below with reference to a data storage system in the form of a storage system configured to store files, but it should be understood that the principles of the concepts, structures, and techniques sought to be protected herein are not limited to this configuration. Rather, they are applicable at least to any entity capable of storing and handling various types of objects, in analog, digital, or other form. Although terms such as document, file, object, etc. may be used by way of example, the principles of the described embodiments are not limited to any particular form of representing and storing data or other information; rather, they are equally applicable at least to any object capable of representing information.
Before describing embodiments of the concepts, structures, and techniques sought to be protected herein, some terms are explained. In some embodiments, the term “I/O request” or simply “I/O” may be used to refer to an input or output request. In some embodiments, an I/O request may refer to a data read or data write request. In some embodiments, the term “storage system” may encompass physical computing systems, cloud or virtual computing systems, or a combination thereof. In some embodiments, the term “storage device” may refer to any non-volatile memory (NVM) device, including hard disk drives (HDDs), solid state drives (SSDs), flash devices (e.g., NAND flash devices), and similar devices that may be accessed locally and/or remotely (e.g., via a storage area network (SAN)). In some embodiments, the term “storage device” may also refer to a storage array including multiple storage devices.
The following additional list may be helpful in understanding the specification and claims:
In certain embodiments, a backup site—may be a facility where replicated production site data is stored; the backup site may be located in a remote site or at the same location as the production site; a backup site may be a virtual or physical site. In certain embodiments, a back-up site may be an object store.
In certain embodiments, a data center—may be a large group of networked computer servers typically used by organizations for the remote storage, processing, or distribution of large amounts of data.
In certain embodiments, a DPA—may be a Data Protection Appliance, a computer or a cluster of computers, or a set of processes that serve as a data protection appliance, responsible for data protection services including, inter alia, data replication of a storage system, and journaling of I/O requests issued by a host computer to the storage system. The DPA may be a physical device, a virtual device, or a combination of a virtual and physical device.
In certain embodiments, an RPA—may be a replication protection appliance, which may be used interchangeably with, and is another name for, a DPA. In certain embodiments, an RPA may be a virtual DPA or a physical DPA.
In certain embodiments, a host—may be at least one computer or networks of computers that runs at least one data processing application that issues I/O requests to one or more storage systems; a host is an initiator within a SAN; a host may be a virtual machine.
In certain embodiments, a host device—may be an internal interface in a host, to a logical storage unit.
In certain embodiments, an image—may be a copy of a logical storage unit at a specific point in time.
In certain embodiments, an initiator—may be a node in a SAN (Storage Area Network) that issues I/O requests.
In certain embodiments, a journal—may be a record of write transactions issued to a storage system; the journal may be used to maintain a duplicate storage system, and to roll back the duplicate storage system to a previous point in time.
In certain embodiments, a logical unit—may be a logical entity provided by a storage system for accessing data from the storage system. As used herein, a logical unit is used interchangeably with a logical volume.
In certain embodiments, a LUN—may be a logical unit number for identifying a logical unit; may also refer to one or more virtual disks or virtual LUNs, which may correspond to one or more Virtual Machines. As used herein, LUN and LU may be used interchangeably to refer to a LU.
In certain embodiments, management and deployment tools—may provide the means to deploy, control and manage the RP solution through the virtual environment management tools.
In certain embodiments, a physical storage unit—may be a physical entity, such as a disk or an array of disks, for storing data in storage locations that can be accessed by address, where physical storage unit is used interchangeably with physical volume.
In certain embodiments, a production site—may be a facility where one or more host computers run data processing applications that write data to a storage system and read data from the storage system; may be a virtual or physical site.
In certain embodiments, a SAN—may be a storage area network of nodes that send and receive I/O and other requests, each node in the network being an initiator or a target, or both an initiator and a target.
In certain embodiments, a source side—may be a transmitter of data within a data replication workflow, during normal operation a production site is the source side; and during data recovery a backup site is the source side; may be a virtual or physical site.
In certain embodiments, a snapshot—may refer to differential representations of an image, i.e. the snapshot may have pointers to the original volume, and may point to log volumes for changed locations. Snapshots may be combined into a snapshot array, which may represent different images over a time period.
In certain embodiments, a storage system—may be a SAN entity that provides multiple logical units for access by multiple SAN initiators.
In certain embodiments, a target—may be a node in a SAN that replies to I/O requests.
In certain embodiments, a target side—may be a receiver of data within a data replication workflow; during normal operation a backup site is the target side, and during data recovery a production site is the target side; may be a virtual or physical site; a target site may be referred to herein as a replication site.
In certain embodiments, a WAN—may be a wide area network that connects local networks and enables them to communicate with one another, such as the Internet.
In certain embodiments, a virtual volume—may be a volume which is exposed to a host by a virtualization layer; the virtual volume may be spanned across more than one site and/or more than one volume.
In certain embodiments, a volume—may be an identifiable unit of data storage, either physical or virtual; that is, a volume can be a removable hard disk, but is not limited as being a unit that can be physically removed from a computer or storage system.
In certain embodiments, a VASA: may be a set of vCenter providers that allow an administrator to manage storage, or may include vSphere Storage application program interfaces (APIs) for Storage Awareness.
In certain embodiments, a VMFS: may be a virtual machine file system, a file system provided by VMware for storing a virtual machine.
In certain embodiments, a VMDK: may be a virtual machine disk file containing disk data in a VMFS; analogous to a LUN in a block storage array.
In certain embodiments, a virtual RPA (vRPA)/virtual DPA (vDPA): may be a DPA running in a VM, or may be a virtualized appliance.
In certain embodiments, CDP: Continuous Data Protection, may refer to a full replica of a volume or a set of volumes along with a journal which allows any point in time access; the CDP copy is at the same site, and may be on the same storage array, as the production site.
In certain embodiments, CRR: Continuous Remote Replica may refer to a full replica of a volume or a set of volumes along with a journal which allows any point in time access at a site remote to the production volume and on a separate storage array.
Referring to the example embodiment shown in
In certain embodiments, Site I 100a and Site II 100b may be remote from one another. In other embodiments, Site I 100a and Site II 100b may be local to one another and may be connected via a local area network (LAN). In some embodiments, local data protection may have the advantage of minimizing data lag between target and source, and remote data protection may have the advantage of being robust in the event that a disaster occurs at the source site.
The production site and the backup site may be remote from one another, or they may both be situated at a common site, local to one another. Local data protection has the advantage of minimizing data lag between target and source, and remote data protection has the advantage of being robust in the event that a disaster occurs at the source side.
In particular embodiments, data protection system 100 may include a failover mode of operation, wherein the direction of replicated data flow is reversed. In particular, in some embodiments, Site I 100a may behave as a target site and Site II 100b may behave as a source site. In some embodiments, failover may be triggered manually (e.g., by a user) or automatically. In some embodiments, failover may be performed in the event of a disaster at Site I 100a. In some embodiments, especially as described further herein, failover may be performed automatically and/or manually in advance of or in anticipation of a disaster or adverse event at any site, such as Site I 100a. In some embodiments, failover may be performed automatically and/or manually at any time prior to completion of an adverse event (e.g., in advance of the adverse event, substantially contemporaneously with the adverse event, at the same time as at least a portion of the adverse event, etc.). In some embodiments, especially as described further herein, after failover, restoration of operations (e.g., failback) can occur to bring the system 100 back to a condition wherein Site I 100a is back to being a source site and Site II 100b is back to being a target site. In some embodiments, both Site I 100a and Site II 100b may behave as a source site for some stored data and may behave simultaneously as a target site for other stored data. In certain embodiments, a portion of stored data may be replicated from one site to the other, and another portion may not be replicated.
In some embodiments, Site I 100a corresponds to a production site (e.g., a facility where one or more hosts run data processing applications that write data to a storage system and read data from the storage system) and Site II 100b corresponds to a backup or replica site (e.g., a facility where replicated production site data is stored). Thus, in some embodiments, Site II 100b may be responsible for replicating production site data and may enable rollback of data of Site I 100a to an earlier point in time. In some embodiments, rollback may be used in the event of data corruption or a disaster, or alternatively in order to view or to access data from an earlier point in time.
Some described embodiments of Site I 100a may include a source host 104, a source storage system (or “storage array”) 108, and a source data protection appliance (DPA) 112 coupled via a first storage area network (SAN). Similarly, in some embodiments, Site II 100b may include a target host 116, a target storage system 120, and a target DPA 124 coupled via a second SAN. In some embodiments, each SAN may include one or more devices (or “nodes”) that may be designated an “initiator,” a “target”, or both. For example, in some embodiments, the first SAN may include a first fiber channel switch 148 and the second SAN may include a second fiber channel switch 168. In some embodiments, communication links between each host 104 and 116 and its corresponding storage system 108 and 120 may be any appropriate medium suitable for data transfer, such as fiber communication channel links. In some embodiments, a host communicates with its corresponding storage system over a communication link, such as an InfiniBand (IB) link or Fibre Channel (FC) link, and/or a network, such as an Ethernet or Internet (e.g., TCP/IP) network that may employ, for example, the iSCSI protocol.
In some embodiments, each storage system 108 and 120 may include storage devices for storing data, such as disks or arrays of disks, each of which may include a plurality of volumes. Typically, storage systems 108 and 120 may be target nodes. In some embodiments, in order to enable initiators to send requests to storage system 108, storage system 108 may provide (e.g., expose) one or more logical units (LU) to which commands are issued. Thus, in some embodiments, storage systems 108 and 120 may be SAN entities that provide multiple logical units for access by multiple SAN initiators. In some embodiments, an LU is a logical entity (e.g., a logical volume) provided by a storage system for accessing data stored therein. In some embodiments, a logical unit may be a physical logical unit or a virtual logical unit. In some embodiments, a logical unit may be identified by a unique logical unit number (LUN).
In the embodiment shown in
As shown in
In some embodiments, source host 104 may act as a SAN initiator that issues I/O requests through host device 140 to LU A 136 using, for example, SCSI commands. In some embodiments, such requests may be transmitted to LU A 136 with an address that includes a specific device identifier, an offset within the device, and a data size.
In some embodiments, source DPA 112 and target DPA 124 may perform various data protection services, such as data replication of a storage system, and journaling of I/O requests issued by hosts 104 and/or 116. When acting as a target DPA, a DPA may also enable rollback of data to an earlier point-in-time (PIT), and enable processing of rolled back data at the target site. In some embodiments, each DPA 112 and 124 may be a physical device, a virtual device, or may be a combination of a virtual and physical device.
In some embodiments, a DPA may be a cluster of such computers. In some embodiments, use of a cluster may ensure that if a DPA computer is down, then the DPA functionality switches over to another computer. In some embodiments, the DPA computers within a DPA cluster may communicate with one another using at least one communication link suitable for data transfer, for example, an InfiniBand (IB) link, a Fibre Channel (FC) link, and/or a network link, such as an Ethernet or Internet (e.g., TCP/IP) link to transfer data via fiber channel or IP based protocols, or other such transfer protocols. In some embodiments, one computer from the DPA cluster may serve as the DPA leader. In some embodiments, the DPA cluster leader may coordinate between the computers in the cluster, and may also perform other tasks that require coordination between the computers, such as load balancing.
In certain embodiments, a DPA may be a standalone device integrated within a SAN. Alternatively, in some embodiments, a DPA may be integrated into a storage system. In some embodiments, the DPAs communicate with their respective hosts through communication links suitable for data transfer, for example, an InfiniBand (IB) link, a Fibre Channel (FC) link, and/or a network link, such as an Ethernet or Internet (e.g., TCP/IP) link to transfer data via, for example, SCSI commands or any other protocol.
In various embodiments, the DPAs may act as initiators in the SAN. For example, the DPAs may issue I/O requests using, for example, SCSI commands, to access LUs on their respective storage systems. In some embodiments, each DPA may also be configured with the necessary functionality to act as targets, e.g., to reply to I/O requests, such as SCSI commands, issued by other initiators in the SAN, including their respective hosts. In some embodiments, being target nodes, the DPAs may dynamically expose or remove one or more LUs. As described herein, in some embodiments, Site I 100a and Site II 100b may each behave simultaneously as a production site and a backup site for different logical units. As such, in some embodiments, DPA 112 and DPA 124 may each behave as a source DPA for some LUs and as a target DPA for other LUs, at the same time.
In the example embodiment shown in
In some embodiments, a protection agent may change its behavior for handling SCSI commands, for example as a result of an instruction received from the DPA. For example, in some embodiments, the behavior of a protection agent for a certain host device may depend on the behavior of its associated DPA with respect to the LU of the host device. In some embodiments, when a DPA behaves as a source site DPA for a certain LU, then during normal course of operation, the associated protection agent may split I/O requests issued by a host to the host device corresponding to that LU. Similarly, in some embodiments, when a DPA behaves as a target device for a certain LU, then during normal course of operation, the associated protection agent fails I/O requests issued by a host to the host device corresponding to that LU.
In some embodiments, communication between protection agents 144 and 164 and a respective DPA 112 and 124 may use any protocol suitable for data transfer within a SAN, such as fiber channel, SCSI over fiber channel, or other protocols. In some embodiments, the communication may be direct, or via a logical unit exposed by the DPA.
In certain embodiments, protection agents may be drivers located in their respective hosts. Alternatively, in some embodiments, a protection agent may also be located in a fiber channel switch, or in any other device situated in a data path between a host and a storage system or on the storage system itself. In some embodiments, in a virtualized environment, the protection agent may run at the hypervisor layer or in a virtual machine providing a virtualization layer.
As shown in the example embodiment shown in
Some embodiments of data protection system 100 may be provided as physical systems for the replication of physical LUs, or as virtual systems for the replication of virtual LUs. For example, in one embodiment, a hypervisor may consume LUs and may generate a distributed file system on the logical units, such as Virtual Machine File System (VMFS), that may generate files in the file system and expose the files as LUs to the virtual machines (each virtual machine disk is seen as a SCSI device by virtual hosts). In another embodiment, a hypervisor may consume a network based file system and expose files in the Network File System (NFS) as SCSI devices to virtual hosts.
In some embodiments, in normal operation (sometimes referred to as “production mode”), DPA 112 may act as a source DPA for LU A 136. Thus, in some embodiments, protection agent 144 may act as a source protection agent, specifically by splitting I/O requests to host device 140 (“Device A”). In some embodiments, protection agent 144 may send an I/O request to source DPA 112 and, after receiving an acknowledgement from source DPA 112, may send the I/O request to LU A 136. In some embodiments, after receiving an acknowledgement from storage system 108, host 104 may acknowledge that the I/O request has successfully completed.
In some embodiments, when source DPA 112 receives a replicated I/O request from protection agent 144, source DPA 112 may transmit certain I/O information characterizing the write request, packaged as a “write transaction”, over WAN 128 to target DPA 124 for journaling and for incorporation within target storage system 120. In some embodiments, when applying write operations to storage system 120, target DPA 124 may act as an initiator, and may send SCSI commands to LU B 156.
In some embodiments, source DPA 112 may send its write transactions to target DPA 124 using a variety of modes of transmission, including (i) a synchronous mode, (ii) an asynchronous mode, and (iii) a snapshot mode.
In some embodiments, in synchronous mode, source DPA 112 may send each write transaction to target DPA 124, may receive back an acknowledgement from the target DPA 124, and in turn may send an acknowledgement back to protection agent 144. In some embodiments, in synchronous mode, protection agent 144 may wait until receipt of such acknowledgement before sending the I/O request to LU A 136.
In some embodiments, in asynchronous mode, source DPA 112 may send an acknowledgement to protection agent 144 upon receipt of each I/O request, before receiving an acknowledgement back from target DPA 124.
In some embodiments, in snapshot mode, source DPA 112 may receive several I/O requests and combine them into an aggregate “snapshot” or “batch” of write activity performed in the multiple I/O requests, and may send the snapshot to target DPA 124 for journaling and incorporation in target storage system 120. In some embodiments, in snapshot mode, source DPA 112 may send an acknowledgement to protection agent 144 upon receipt of each I/O request, before receiving an acknowledgement back from target DPA 124.
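As an illustration of these three transmission modes, the following is a minimal sketch (not the actual DPA implementation) of a source-side transmitter that forwards write transactions either one at a time or as an aggregated snapshot batch. The names used (`WriteTransaction`, `SourceSender`, `send_to_target`) are hypothetical and chosen only for this example.

```python
from dataclasses import dataclass, field
from enum import Enum
from typing import Callable, List


class Mode(Enum):
    SYNCHRONOUS = 1   # acknowledge the splitter only after the target acknowledges
    ASYNCHRONOUS = 2  # acknowledge the splitter immediately; ship in the background
    SNAPSHOT = 3      # accumulate several writes and ship them as one batch


@dataclass
class WriteTransaction:
    volume: str
    offset: int
    data: bytes


@dataclass
class SourceSender:
    """Hypothetical source-side transmitter illustrating the three modes."""
    mode: Mode
    send_to_target: Callable[[List[WriteTransaction]], None]
    batch: List[WriteTransaction] = field(default_factory=list)

    def on_write(self, txn: WriteTransaction) -> None:
        if self.mode is Mode.SNAPSHOT:
            self.batch.append(txn)  # held until flush_snapshot() ships the batch
        else:
            # In synchronous mode the caller waits for this shipment (and the target's
            # acknowledgement) before acknowledging the host; in asynchronous mode
            # the caller acknowledges first and this shipment would be queued.
            self.send_to_target([txn])

    def flush_snapshot(self) -> None:
        """Ship the accumulated batch as one aggregate 'snapshot' of write activity."""
        if self.batch:
            self.send_to_target(self.batch)
            self.batch = []


# Example: batch two writes and ship them together in snapshot mode.
sender = SourceSender(Mode.SNAPSHOT, send_to_target=lambda txns: print(f"shipping {len(txns)} writes"))
sender.on_write(WriteTransaction("LU_A", 0, b"abc"))
sender.on_write(WriteTransaction("LU_A", 512, b"def"))
sender.flush_snapshot()
```

In snapshot mode, the accumulated batch travels as a single unit, mirroring the aggregation of write activity described above.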
In some embodiments, a snapshot replica may be a differential representation of a volume. For example, the snapshot may include pointers to the original volume, and may point to log volumes for locations of the original volume that store data changed by one or more I/O requests. In some embodiments, snapshots may be combined into a snapshot array, which may represent different images over a time period (e.g., for multiple PITs).
As described herein, in some embodiments, in normal operation, LU B 156 may be used as a backup of LU A 136. As such, while data written to LU A 136 by host 104 is replicated from LU A 136 to LU B 156, target host 116 should not send I/O requests to LU B 156. In some embodiments, to prevent such I/O requests from being sent, protection agent 164 may act as a target site protection agent for host device B 160 and may fail I/O requests sent from host 116 to LU B 156 through host device B 160. In some embodiments, in a recovery mode, target DPA 124 may undo the write transactions in journal LU 176 so as to restore the target storage system 120 to an earlier state.
Referring to
Referring to both
In some embodiments, since the journal contains the “undo” information necessary to rollback storage system 120, data that was stored in specific memory locations at a specified point in time may be obtained by undoing write transactions that occurred subsequent to such point in time (PIT).
In some embodiments, each of the four streams may hold a plurality of write transaction data. In some embodiments, as write transactions are received dynamically by the target DPA, the write transactions may be recorded at the end of the DO stream and the end of the DO METADATA stream, prior to committing the transaction.
In some embodiments, a metadata stream (e.g., UNDO METADATA stream or the DO METADATA stream) and the corresponding data stream (e.g., UNDO stream or DO stream) may be kept in a single stream by interleaving metadata and data.
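The journal structure just described can be pictured with a small, purely illustrative sketch of DO/UNDO streams and point-in-time rollback. The class and method names (`Journal`, `apply_next`, `rollback_to`) are hypothetical, and a real journal additionally maintains the separate DO METADATA and UNDO METADATA streams, persistence, and distribution details omitted here.

```python
import time
from dataclasses import dataclass
from typing import Dict, List, Tuple


@dataclass
class WriteTxn:
    timestamp: float
    address: int
    new_data: bytes


class Journal:
    """Toy journal: DO holds pending writes, UNDO holds the data they overwrote."""

    def __init__(self, replica: Dict[int, bytes]):
        self.replica = replica
        self.do_stream: List[WriteTxn] = []
        self.undo_stream: List[Tuple[float, int, bytes]] = []

    def record(self, txn: WriteTxn) -> None:
        # New transactions land at the end of the DO stream before being committed.
        self.do_stream.append(txn)

    def apply_next(self) -> None:
        # Committing a DO entry saves the overwritten data into the UNDO stream.
        txn = self.do_stream.pop(0)
        old = self.replica.get(txn.address, b"")
        self.undo_stream.append((txn.timestamp, txn.address, old))
        self.replica[txn.address] = txn.new_data

    def rollback_to(self, point_in_time: float) -> None:
        # Undo committed writes newer than the requested PIT, most recent first.
        while self.undo_stream and self.undo_stream[-1][0] > point_in_time:
            _, address, old = self.undo_stream.pop()
            self.replica[address] = old


# Example: apply two writes, then roll the replica back to the earlier PIT.
replica = {0: b"v0"}
j = Journal(replica)
pit = time.time()
j.record(WriteTxn(pit + 1, 0, b"v1"))
j.record(WriteTxn(pit + 2, 0, b"v2"))
j.apply_next(); j.apply_next()
j.rollback_to(pit)
assert replica[0] == b"v0"
```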
Some described embodiments may validate that point-in-time (PIT) data replicas (e.g., data replicated to LU B 156) are valid and usable, for example to verify that the data replicas are not corrupt due to a system error or inconsistent due to violation of write order fidelity. In some embodiments, validating data replicas can be important, for example, in data replication systems employing incremental backup where an undetected error in an earlier data replica may lead to corruption of future data replicas.
Having described a data protection system and journal history configuration in which at least some embodiments (especially those described herein in
Data intensive system installations in general, and cloud-based installations in particular, are susceptible to outages due to failures in infrastructure—power, network, cooling, physical premises, etc. These outages can be temporary, where the system needs to restart after an event, or can result in corruption or loss of data or equipment, which may result in longer outages or complete failures. In addition, these outages can occur as a result of unplanned natural events (e.g., those resulting from weather, environmental, or geological causes), unplanned events or conditions such as equipment failure and/or degradation, power outage and/or insufficiency, and communications outage and/or insufficiency, and also as a result of human-induced events, including but not limited to those that are intentionally induced (e.g., planned outages, malware, hacking, vandalism, acts of war or terrorism, riots) and unintentionally induced (e.g., vehicular accidents, construction accidents, incorrect operation or configuration of equipment, etc.).
As a result, at least some data protection products, such as those described herein, are employed to provide redundancy of computing, network, and storage, and help secure protection of data and/or continuous operation, even when such events occur. These products include, but are not limited to, products providing backup, replication, distributed storage, geo-located caches, active-active availability mechanisms, and redundancy in almost all components of a data system. However, at least some of these mechanisms need some management of their operations. For example, backup systems often create backups on a schedule (daily, weekly, etc.), replication systems need to know when to failover or restore, and so on. In the event of a significant disaster (a data center that is flooded, for example), the recovery operations can be many and diverse. After or during a disaster or other significant event, data at a new (or rehabilitated) location may need to be restored from backups or replication data.
For at least some types of disasters, at the time that the disaster strikes, it is likely that downtime already occurred, and the rapid return to operations generally involves some data loss. With at least some known data protection systems, once it is learned that a disaster or other negative event has occurred or is occurring, operations to prevent data loss and/or computer downtime are done manually, reactively, and after the event has occurred, which can cause downtime and data loss. Thus, it is advantageous if a data protection system is able proactively to reduce the downtime and data loss of a data center in a disaster event by implementing protective processes in advance of disasters or negative events. It is even more advantageous if a data system is able to predict the likelihood or risk of one or more types of events, such as disasters or other events that can cause data loss, so as to implement proactive strategies before a disaster or other event occurs.
As described further herein, in at least some embodiments, a statistical “big data” analytics system is used to help predict disasters and other events that could impact a data center and to adjust data protection and data center configuration accordingly. In at least some embodiments, the statistical big data analytics system scans many data resources and provides a risk value for different disasters. In at least some embodiments, using these risk values, the data center operations automatically adjust the systems to avoid downtime or, at the very least, provide the administrator an alert to allow him or her to adjust the system.
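The following is a simplified, hypothetical sketch (not the analytics system itself) of how per-disaster risk values produced by such analytics might be dispatched: risk values above one threshold trigger an automatic adjustment, while values above a lower threshold only alert the administrator. The disaster names, thresholds, and callback names are illustrative assumptions.

```python
from typing import Callable, Dict

# Hypothetical per-disaster risk values on a 0.0-1.0 scale, as produced by analytics.
RiskReport = Dict[str, float]


def assess_and_act(
    risks: RiskReport,
    auto_threshold: float,
    alert_threshold: float,
    adjust: Callable[[str, float], None],
    alert: Callable[[str, float], None],
) -> None:
    """Dispatch each risk value: auto-adjust above one threshold, alert above another."""
    for disaster, risk in risks.items():
        if risk >= auto_threshold:
            adjust(disaster, risk)      # e.g., trigger failover, refresh backups
        elif risk >= alert_threshold:
            alert(disaster, risk)       # let the administrator decide


# Example with made-up values: flood risk triggers an adjustment, power risk an alert.
risks = {"flood": 0.92, "power_outage": 0.55, "earthquake": 0.02}
assess_and_act(
    risks,
    auto_threshold=0.8,
    alert_threshold=0.5,
    adjust=lambda d, r: print(f"auto-adjusting for {d} (risk={r})"),
    alert=lambda d, r: print(f"alerting administrator about {d} (risk={r})"),
)
```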
The above examples are not exhaustive or limiting and are provided merely as illustrative examples showing the wide variety of sources of information that can be usable, either individually or in combination with other information sources, to help determine a risk of a disaster or other negative event. The information can be configured to be automatically received or retrieved and stored in a predetermined storage location, for later data analysis and/or data mining, whether at predetermined intervals, in real time, at a later time, or some combination. In some embodiments, the rate of update of information is at least partially dependent on the type of the information being updated. For example, data center status can be continuous and immediate. Weather updates can be periodic (except in the case of unforeseen emergencies, e.g., a flash flood warning, tornado warning, etc.). Alerts can be event based. In some embodiments, the information is retrieved and/or received at predetermined intervals. In some embodiments, the information is retrieved and/or received substantially continuously. In some embodiments, the data analysis and data mining occur substantially continuously, as data becomes available. In some embodiments, the data mining and data analysis occur substantially contemporaneously with when data is made available. In some embodiments, the data mining and data analysis occur some predetermined time after data becomes available. In some embodiments, the information is received and/or retrieved (block 320) in advance of the disaster or adverse event. In some embodiments, the information is received and/or retrieved (block 320) at substantially the same time that the adverse event or disaster is occurring. As will be appreciated, data mining sometimes uses databases that are too big to transfer. Instead of retrieving the data, a query/search/operation/program is sent to the data source and only the results transferred back. That means, in some embodiments, that the data acquisition method is up to the service being used, and the result is provided ad hoc.
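As a minimal sketch of the push-down pattern described above, in which the query is sent to the data source and only the (much smaller) result is transferred back, the following assumes a hypothetical data source object exposing a `run_query` method; the source name, fields, and query are invented for the example.

```python
from dataclasses import dataclass
from typing import Any, Dict, List


@dataclass
class RemoteSource:
    """Stand-in for a data source that executes queries locally and returns only results."""
    name: str
    rows: List[Dict[str, Any]]

    def run_query(self, region: str) -> List[Dict[str, Any]]:
        # The filtering/aggregation runs at the source; only matching rows come back.
        return [r for r in self.rows if r["region"] == region and r["severity"] >= 3]


# Example: instead of transferring the whole alert feed, ask only for severe local alerts.
weather_feed = RemoteSource(
    "weather_alerts",
    rows=[
        {"region": "us-east", "event": "flash flood warning", "severity": 4},
        {"region": "us-west", "event": "heat advisory", "severity": 2},
    ],
)
local_alerts = weather_feed.run_query("us-east")
print(local_alerts)  # only the severe alerts relevant to the monitored location
```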
For example, an adverse event could be already occurring at a first location, but the method of
As will be appreciated, the disclosures and embodiments made herein likewise will be usable with newly developed sources of data and information, new social media sources, new systems for broadcasting information and sending messages, new types of websites, new systems for gathering and communicating information, etc. In addition, as will be appreciated, there are many types of disasters, negative events, or potential sources of disruption to data mine. Negative events such as disasters can be of a global or wide nature, like earthquakes, hurricanes, and flooding, or more local, like fires, power outages, or civil unrest; can be accidental or deliberate; can arise from uncontrollable or controllable events; and can originate from human actions or not.
Referring again to
Referring briefly to
In some embodiments, a data mining algorithm is a well-defined procedure that takes data as input and produces models or patterns as output. Illustrative examples of usable data mining algorithms and products include, but are not limited to, the k-means algorithm, the C4.5 algorithm, the Classification And Regression Trees (CART) algorithm, the OC1 algorithm, the K Nearest Neighbor (KNN) algorithm, the AutoClass III algorithm, the DBMiner product, the EMERALD data mining tools, and Bayesian Belief Networks (BBNs), as well as virtually any data mining or learning algorithm or product currently known or later developed, including techniques such as support vector machines, Apriori, EM, PageRank, AdaBoost, Naive Bayes, and neural networks. This list of usable data mining algorithms is not exhaustive, and many other algorithms are usable in accordance with at least some embodiments.
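To make the role of such an algorithm concrete, the following is a small sketch, under invented assumptions, of the K Nearest Neighbor (KNN) algorithm named above classifying current conditions against labeled historical observations; the feature names and training data are fabricated for illustration and do not come from any real data set.

```python
import math
from typing import List, Tuple

# Each historical observation: (rainfall_mm, wind_kph, river_level_m) -> "adverse"/"normal".
history: List[Tuple[Tuple[float, float, float], str]] = [
    ((120.0, 60.0, 4.5), "adverse"),
    ((5.0, 10.0, 1.2), "normal"),
    ((90.0, 80.0, 3.9), "adverse"),
    ((12.0, 20.0, 1.5), "normal"),
]


def knn_classify(sample: Tuple[float, float, float], k: int = 3) -> str:
    """Plain KNN: label the sample by majority vote of its k nearest neighbors."""
    by_distance = sorted(history, key=lambda item: math.dist(item[0], sample))
    labels = [label for _, label in by_distance[:k]]
    return max(set(labels), key=labels.count)


# Example: today's readings look close to past adverse conditions.
print(knn_classify((100.0, 70.0, 4.1)))  # -> "adverse"
```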
In some embodiments, the risks calculated can include either or both of quantitative risks and qualitative risks. Determining quantitative risks, in at least one embodiment, can at least relate to numerically determining probabilities of one or more various unfavorable or negative events and determining a likely extent of losses if a given event or set of events takes place. Determining qualitative risks, in at least one embodiment, relates to defining, for at least one potential threat (i.e. adverse event or disaster), the extent of vulnerability existing at a given location or set of locations, as well as if any countermeasures to the negative event or disaster exist or are possible, should such a negative event occur.
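For the quantitative side, a minimal worked example (with invented probabilities and loss figures) is to treat risk as expected loss, i.e., the probability of the adverse event multiplied by the loss if it occurs, which can then be compared across threats or against the cost of a countermeasure:

```python
# Illustrative only: probabilities and losses are invented for the example.
threats = {
    "flood":        {"probability": 0.10, "loss_usd": 2_000_000},
    "power_outage": {"probability": 0.30, "loss_usd":   150_000},
}

for name, t in threats.items():
    expected_loss = t["probability"] * t["loss_usd"]  # quantitative risk estimate
    print(f"{name}: expected loss ${expected_loss:,.0f}")
# flood: expected loss $200,000
# power_outage: expected loss $45,000
```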
The calculation of risk of block 330, in one embodiment, can assess risks that correspond to a predetermined list of possible risks that are reasonably likely or historically possible for a given location, or to non-predetermined risks that might not have been considered likely but now are viewed to be possibly more of a concern because of newly received and data mined information. The following hypothetical examples are illustrative of some of the kinds of risks that could be predicted in accordance with at least some embodiments described herein:
Examples of adverse events which might not have been previously foreseen for a given location, but suddenly may become of more concern and risk, in at least some embodiments, include but are not limited to situations such as the following hypothetical examples:
The above scenarios are merely exemplary and not limiting, but help to illustrate the range of activities and data that can be useful to help assess not only imminent risk of an adverse event, but also to assess future risks of one or more adverse events. The data also can be analyzed to help predict the risk of some adverse events, or bring to the attention of computer administrators adverse events that had not previously been predicted, but which changing conditions make more likely or imminent. In addition, as will be described further herein, the risk analysis techniques described herein also can be usable in developing processes to bring systems back to normal operation after an adverse event or disaster has either occurred and is over, or even if such an adverse event or disaster was predicted or expected, but did not occur.
Referring again to
Reference is now made briefly to
In addition,
In another example, table 504 of
Referring again to
In some embodiments, each type of classification is associated with one or more possible responses, where the responses generally are designed to minimize data loss and/or computer system downtime for that type of event. More than one classification may exist for a disaster type, and values can be set or defined relating to the severity and expected length of the disaster—for example, whether a predicted or expected power outage is expected to be short (e.g., a flicker, or an outage that existing uninterruptible power supplies (UPS) can cover) versus longer, whether the event or disaster will result in a certain level of service degradation, etc.
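The association between classifications and responses can be sketched as a simple lookup, as below; the classification names, severity values, and response strings are illustrative assumptions rather than a fixed schema.

```python
from typing import Dict, List, NamedTuple


class Classification(NamedTuple):
    disaster_type: str   # e.g., "power_outage", "flood"
    severity: str        # e.g., "short" vs. "extended"


# Hypothetical mapping from classification to ordered responses designed to
# minimize data loss and downtime for that type and severity of event.
RESPONSES: Dict[Classification, List[str]] = {
    Classification("power_outage", "short"): ["switch to UPS"],
    Classification("power_outage", "extended"): [
        "switch to UPS",
        "start backup generators",
        "front-load backups",
        "fail over to remote site",
    ],
    Classification("flood", "extended"): ["refresh backups", "fail over", "shut down equipment"],
}


def responses_for(disaster_type: str, severity: str) -> List[str]:
    return RESPONSES.get(Classification(disaster_type, severity), ["alert administrator"])


print(responses_for("power_outage", "extended"))
```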
Referring again to
In some embodiments, the dynamic and/or automatic adjustments and/or preventative measures of blocks 350-356 can result in one or more of the following types of actions, which can, in some embodiments, be accomplished at least in part using management and deployment tools used with a given production site, storage system, RPA, DPA, SAN, VASA, backup site, host, data storage center, etc.:
As will be appreciated, in some embodiments, the countermeasures and other adjustments and/or preventative measures being taken depend at least in part on the type of disaster or adverse event that is predicted or occurring, its duration, and what type of functionality or system one is attempting to preserve. Thus, the above examples are not exhaustive.
For predicted or imminent destruction of premises, in some embodiments, the countermeasures are used to provide alerts and warnings as early as possible to preserve human life, generate control signals configured to instruct systems to back up all data, generate instructions to move resources away from the premises if possible (e.g., failover), disconnect resources from power to avoid electrocution and/or shock, and possibly encrypt or destroy sensitive data to prevent it from becoming accessible to inappropriate or criminal users. For example, in some embodiments, in the situation of predicted or imminent destruction of premises, the goal is to get everything possible out of those premises: mass migration of applications to other data centers; transfer of data, copies, and replicas to other locations; and any other possible actions usable to get a given data center premises as empty as possible. As will be appreciated, options such as destruction or deletion of data are a type of security measure, as it is assumed that in the event of significant destruction of premises, unauthorized third parties will have physical access to some of the media, and to the extent possible it is advantageous to ensure access security. In another example, at least some embodiments as described herein can be used in combination with other types of systems that automatically protect data during disasters, as will be understood.
For predicted or imminent power outages of various durations, in accordance with some embodiments (blocks 350-356), responses include, but are not limited to: use of uninterruptible power supplies; use of backup generators; offloading information to remote data centers or targets not affected by the predicted or imminent disaster; front-loading backup and/or replication operations to make use of available time; shutting down non-essential functions or equipment; maximizing power management features; using a predetermined and/or dynamically created priority list to maintain power to the highest priority resources first; etc.
For predicted or imminent communications outages and/or degradation, in accordance with some embodiments, actions (blocks 350-356) could include things like proactively using multiple redundant computer links and/or network paths to ensure critical and/or high priority data is offloaded, degrading different computer systems and/or paths based on predetermined and/or dynamically created priority lists, etc.
For predicted or imminent degradation of infrastructure services, in at least some embodiments, including but not limited to high load on a central processing unit (CPU), strain on communication bandwidth, strain on HVAC systems (e.g., due to extreme heat or extreme cold), humidity issues, plumbing issues, loss of capacity due to equipment failures, and/or any other conditions causing less than full performance of existing systems, actions can be taken (blocks 350-356) to optimize system performance. For example, in some embodiments, for predicted or imminent degradation of CPU/communications, actions that can be taken include, but are not limited to, shutting down lower priority systems and stopping all non-essential operations such as upgrades, maintenance operations, non-essential data transfers, and backups. For predicted or imminent degradation of air-conditioning or other environmental issues, actions that can be taken include, but are not limited to, slowing down CPU clocks, stopping spinning disks that are not immediately needed, shutting off switches, consolidating virtual machines onto fewer hypervisors, and shutting the freed hypervisors off.
For predicted or imminent outages due to hijacking (e.g., by competitors or terrorists) or other predicted or imminent criminal acts (e.g., potential theft of digital information), in at least some embodiments, if any knowledge about potential targets is known, actions (blocks 350-356) can include shutting down such targets and/or segregating them so that they are subject to minimal negative consequences during such adverse events. Additional activities and actions in accordance with blocks 350-356 could include, in some embodiments, proactively configuring systems to be ready for potential attacks, both physical and non-physical (e.g., via malware, hacking, denial of service, etc.)—such proactive configuring could include updating all virus protections, configuring systems to block communications, additional encrypting of information to block access, digitally watermarking information to help make changes apparent and to make it apparent to others that data has been stolen, etc.
The following are illustrative examples of adjustments (blocks 350-356) usable in certain exemplary hypothetical scenarios, in accordance with at least some embodiments, but these are not to be construed as limiting (a brief illustrative sketch combining several of these actions follows the lists):
a. Flush all caches.
b. Take snapshots (e.g., of storage arrays) if fast enough (e.g., in seconds or sub-seconds).
c. Live migrate to other sites (including failover sites) if possible (or use any other technique capable of allowing live migration of a running virtual machine's (VM) file system from one storage system to another, with no downtime for the VM or service disruption for end users).
d. Shorten recovery point objective (RPO) (i.e., maximum targeted period in which data might be lost from an IT service due to a major incident), where possible, such as by buffering some data and sending it in bulk.
a. Refresh backups to whatever extent possible and ship them off premises.
b. Fail over replication systems to other sites.
c. Live migrate to other sites if possible.
d. Create new copies on other sites and transfer to them.
e. Prioritize the operations according to SLA and resources available.
a. Flush caches.
b. Copy data off site.
c. Encrypt/delete data—sensitive data first.
d. Change passwords and/or encryption keys.
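As noted above, a brief illustrative sketch combining several of these actions is shown here; the action functions are placeholders (in a real system each would call the relevant storage, replication, or management interface), and the ordering and time budget are assumptions made only for the example.

```python
import time
from typing import Callable, List, Tuple

# (description, action) pairs in priority order for one hypothetical scenario,
# roughly mirroring the lists above: flush caches, snapshot, replicate, migrate.
Action = Tuple[str, Callable[[], None]]

runbook: List[Action] = [
    ("flush all caches",            lambda: print("caches flushed")),
    ("take storage snapshots",      lambda: print("snapshots taken")),
    ("refresh and ship backups",    lambda: print("backups shipped off premises")),
    ("live migrate to other site",  lambda: print("live migration started")),
]


def execute_runbook(actions: List[Action], deadline_s: float) -> None:
    """Run as many prioritized actions as the remaining time budget allows."""
    start = time.monotonic()
    for description, action in actions:
        if time.monotonic() - start > deadline_s:
            print(f"out of time before: {description}")
            break
        action()


# Example: two minutes of warning before a predicted outage.
execute_runbook(runbook, deadline_s=120.0)
```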
Referring again to
Referring again to decision block 360 of
In some embodiments, at decision block 375, if the outcome is YES (i.e., that the risk of an adverse event or disaster has ended, at least for some locations being monitored), then, in accordance with some embodiments, several options are possible. In some embodiments (i.e., the YES—V3 outcome at block 375), processing ends (block 380). This YES—V3 outcome can be applicable, in at least some embodiments, where the scope of the adjustments and preventative measures done in blocks 350-356 is of a nature that does not require significant reversing types of actions to restore normal operation, and/or if reversal might not be feasible. For example, if the actions done in blocks 350-356 involved actions like refreshing backups and creating extra copies, there may be little to no action needed to bring the affected devices to normal operation (indeed, some of these devices may never have deviated from normal operation). In some embodiments, however, the actions done in blocks 350-356 may necessitate some additional actions (whether performed automatically or manually) to restore one or more systems at a given location back to normal operation (e.g., if data was encrypted, or devices were shut down, or components were taken offline, etc.). This corresponds, in some embodiments, to the YES—V4 outcome at block 375.
In decision block 365 of
Referring to
In an illustrative example, suppose the disaster that occurred was a flood, and the information received at block 420 helps inform a decision or prediction that the floodwater will be sufficiently subsided within 2 hours to enable restoration of normal operations. In this example, in block 435, the automatic adjustments that can occur can include running one or more failback or rollback applications and synchronizing data from a remote location back to the primary location via live migration to another site. For example, in some embodiments, the failback process can be similar to that described in the following commonly assigned U.S. Patents, each of which is hereby incorporated by reference: U.S. Pat. No. 8,898,409, entitled “JOURNAL-BASED REPLICATION WITHOUT JOURNAL LOSS”; U.S. Pat. No. 7,275,177, entitled “DATA RECOVERY WITH INTERNET PROTOCOL REPLICATION WITH OR WITHOUT FULL RESYNC”; U.S. Pat. No. 7,383,463, entitled “INTERNET PROTOCOL BASED DISASTER RECOVERY OF A SERVER”; and U.S. Pat. No. 7,827,136, entitled “MANAGEMENT FOR REPLICATION OF DATA STORED IN A DATA STORAGE ENVIRONMENT INCLUDING A SYSTEM AND METHOD FOR FAILOVER PROTECTION OF SOFTWARE AGENTS OPERATING IN THE ENVIRONMENT”. In some embodiments, the rollback process can be similar to processes described in one or more of the following commonly assigned U.S. patents, which are hereby incorporated by reference: U.S. Pat. No. 8,726,083, entitled “SYNCHRONIZED TAKING OF SNAPSHOT MEMORY IMAGES OF VIRTUAL MACHINES AND STORAGE SNAPSHOTS” and U.S. Pat. No. 8,726,066, entitled “JOURNAL BASED REPLICATION WITH ENHANCE FAILOVER”.
As will be appreciated, the methods of
Referring briefly back to
The methods of
Referring again to
Various functions of circuit elements may also be implemented as processing blocks in a software program. Such software may be employed in, for example, one or more digital signal processors, microcontrollers, or general purpose computers. Described embodiments may be implemented in hardware, a combination of hardware and software, software, or software in execution by one or more physical or virtual processors.
Some embodiments may be implemented in the form of methods and apparatuses for practicing those methods. Described embodiments may also be implemented in the form of program code, for example, stored in a storage medium, loaded into and/or executed by a machine, or transmitted over some transmission medium or carrier, such as over electrical wiring or cabling, through fiber optics, or via electromagnetic radiation. A non-transitory machine-readable medium may include but is not limited to tangible media, such as magnetic recording media including hard drives, floppy diskettes, and magnetic tape media, optical recording media including compact discs (CDs) and digital versatile discs (DVDs), solid state memory such as flash memory, hybrid magnetic and solid state memory, non-volatile memory, volatile memory, and so forth, but does not include a transitory signal per se. When embodied in a non-transitory machine-readable medium and the program code is loaded into and executed by a machine, such as a computer, the machine becomes an apparatus for practicing the method.
When implemented on one or more processing devices, the program code segments combine with the processor to provide a unique device that operates analogously to specific logic circuits. Such processing devices may include, for example, a general purpose microprocessor, a digital signal processor (DSP), a reduced instruction set computer (RISC), a complex instruction set computer (CISC), an application specific integrated circuit (ASIC), a field programmable gate array (FPGA), a programmable logic array (PLA), a microcontroller, an embedded controller, a multi-core processor, and/or others, including combinations of one or more of the above. Described embodiments may also be implemented in the form of a bitstream or other sequence of signal values electrically or optically transmitted through a medium, stored magnetic-field variations in a magnetic recording medium, etc., generated using a method and/or an apparatus as recited in the claims.
For example, when the program code is loaded into and executed by a machine, such as the computer of
In some embodiments, a storage medium may be a physical or logical device. In some embodiments, a storage medium may consist of physical or logical devices. In some embodiments, a storage medium may be mapped across multiple physical and/or logical devices. In some embodiments, storage medium may exist in a virtualized environment. In some embodiments, a processor may be a virtual or physical embodiment. In some embodiments, a logic may be executed across one or more physical or virtual processors.
For purposes of illustrating the present embodiment, the disclosed embodiments are described as embodied in a specific configuration and using special logical arrangements, but one skilled in the art will appreciate that the device is not limited to the specific configuration but rather only by the claims included with this specification.
Various elements, which are described in the context of a single embodiment, may also be provided separately or in any suitable subcombination. It will be further understood that various changes in the details, materials, and arrangements of the parts that have been described and illustrated herein may be made by those skilled in the art without departing from the scope of the following claims.
Number | Name | Date | Kind |
---|---|---|---|
5889935 | Ofek | Mar 1999 | A |
6044444 | Ofek | Mar 2000 | A |
7203741 | Marco et al. | Apr 2007 | B2 |
7225250 | Harrop | May 2007 | B1 |
7383463 | Hayden et al. | Jun 2008 | B2 |
7719443 | Natanzon | May 2010 | B1 |
7840536 | Ahal et al. | Nov 2010 | B1 |
7840662 | Natanzon | Nov 2010 | B1 |
7844856 | Ahal et al. | Nov 2010 | B1 |
7860836 | Natanzon et al. | Dec 2010 | B1 |
7882286 | Natanzon et al. | Feb 2011 | B1 |
7934262 | Natanzon et al. | Apr 2011 | B1 |
7958372 | Natanzon | Jun 2011 | B1 |
8037162 | Marco et al. | Oct 2011 | B2 |
8041940 | Natanzon et al. | Oct 2011 | B1 |
8060713 | Natanzon | Nov 2011 | B1 |
8060714 | Natanzon | Nov 2011 | B1 |
8103937 | Natanzon et al. | Jan 2012 | B1 |
8108634 | Natanzon et al. | Jan 2012 | B1 |
8135135 | Biddle et al. | Mar 2012 | B2 |
8156195 | Hagglund et al. | Apr 2012 | B2 |
8214612 | Natanzon | Jul 2012 | B1 |
8250149 | Marco et al. | Aug 2012 | B2 |
8271441 | Natanzon et al. | Sep 2012 | B1 |
8271447 | Natanzon et al. | Sep 2012 | B1 |
8332687 | Natanzon et al. | Dec 2012 | B1 |
8335761 | Natanzon | Dec 2012 | B1 |
8335771 | Natanzon et al. | Dec 2012 | B1 |
8341115 | Natanzon et al. | Dec 2012 | B1 |
8370648 | Natanzon | Feb 2013 | B1 |
8380885 | Natanzon | Feb 2013 | B1 |
8392680 | Natanzon et al. | Mar 2013 | B1 |
8429362 | Natanzon et al. | Apr 2013 | B1 |
8433869 | Natanzon et al. | Apr 2013 | B1 |
8438135 | Natanzon et al. | May 2013 | B1 |
8464101 | Natanzon et al. | Jun 2013 | B1 |
8478955 | Natanzon et al. | Jul 2013 | B1 |
8495304 | Natanzon et al. | Jul 2013 | B1 |
8510279 | Natanzon et al. | Aug 2013 | B1 |
8521691 | Natanzon | Aug 2013 | B1 |
8521694 | Natanzon | Aug 2013 | B1 |
8543609 | Natanzon | Sep 2013 | B1 |
8583885 | Natanzon | Nov 2013 | B1 |
8600945 | Natanzon et al. | Dec 2013 | B1 |
8601085 | Ives et al. | Dec 2013 | B1 |
8627012 | Derbeko et al. | Jan 2014 | B1 |
8683592 | Dotan et al. | Mar 2014 | B1 |
8694700 | Natanzon et al. | Apr 2014 | B1 |
8700575 | Srinivasan et al. | Apr 2014 | B1 |
8706700 | Natanzon et al. | Apr 2014 | B1 |
8712962 | Natanzon et al. | Apr 2014 | B1 |
8719497 | Don et al. | May 2014 | B1 |
8725691 | Natanzon | May 2014 | B1 |
8725692 | Natanzon et al. | May 2014 | B1 |
8726066 | Natanzon et al. | May 2014 | B1 |
8738813 | Natanzon et al. | May 2014 | B1 |
8745004 | Natanzon et al. | Jun 2014 | B1 |
8751828 | Raizen et al. | Jun 2014 | B1 |
8769336 | Natanzon et al. | Jul 2014 | B1 |
8805786 | Natanzon | Aug 2014 | B1 |
8806161 | Natanzon | Aug 2014 | B1 |
8825848 | Dotan et al. | Sep 2014 | B1 |
8832399 | Natanzon et al. | Sep 2014 | B1 |
8850143 | Natanzon | Sep 2014 | B1 |
8850144 | Natanzon et al. | Sep 2014 | B1 |
8862546 | Natanzon et al. | Oct 2014 | B1 |
8892835 | Natanzon et al. | Nov 2014 | B1 |
8898112 | Natanzon et al. | Nov 2014 | B1 |
8898409 | Natanzon et al. | Nov 2014 | B1 |
8898507 | Crable et al. | Nov 2014 | B1 |
8898515 | Natanzon | Nov 2014 | B1 |
8898519 | Natanzon et al. | Nov 2014 | B1 |
8914595 | Natanzon | Dec 2014 | B1 |
8924668 | Natanzon | Dec 2014 | B1 |
8930500 | Marco et al. | Jan 2015 | B2 |
8930947 | Derbeko et al. | Jan 2015 | B1 |
8935498 | Natanzon | Jan 2015 | B1 |
8949180 | Natanzon et al. | Feb 2015 | B1 |
8954673 | Natanzon et al. | Feb 2015 | B1 |
8954796 | Cohen et al. | Feb 2015 | B1 |
8959054 | Natanzon | Feb 2015 | B1 |
8977593 | Natanzon et al. | Mar 2015 | B1 |
8977826 | Meiri et al. | Mar 2015 | B1 |
8996460 | Frank et al. | Mar 2015 | B1 |
8996461 | Natanzon et al. | Mar 2015 | B1 |
8996827 | Natanzon | Mar 2015 | B1 |
9003138 | Natanzon et al. | Apr 2015 | B1 |
9026696 | Natanzon et al. | May 2015 | B1 |
9031913 | Natanzon | May 2015 | B1 |
9032160 | Natanzon et al. | May 2015 | B1 |
9037818 | Natanzon et al. | May 2015 | B1 |
9063994 | Natanzon et al. | Jun 2015 | B1 |
9069479 | Natanzon | Jun 2015 | B1 |
9069709 | Natanzon et al. | Jun 2015 | B1 |
9081754 | Natanzon et al. | Jul 2015 | B1 |
9081842 | Natanzon et al. | Jul 2015 | B1 |
9087008 | Natanzon | Jul 2015 | B1 |
9087112 | Natanzon et al. | Jul 2015 | B1 |
9100282 | Raps et al. | Aug 2015 | B1 |
9104529 | Derbeko et al. | Aug 2015 | B1 |
9110914 | Frank et al. | Aug 2015 | B1 |
9116811 | Derbeko et al. | Aug 2015 | B1 |
9128628 | Natanzon et al. | Sep 2015 | B1 |
9128855 | Natanzon et al. | Sep 2015 | B1 |
9134914 | Derbeko et al. | Sep 2015 | B1 |
9135119 | Natanzon et al. | Sep 2015 | B1 |
9135120 | Natanzon | Sep 2015 | B1 |
9146878 | Cohen et al. | Sep 2015 | B1 |
9152339 | Cohen et al. | Oct 2015 | B1 |
9152578 | Saad et al. | Oct 2015 | B1 |
9152814 | Natanzon | Oct 2015 | B1 |
9158578 | Derbeko et al. | Oct 2015 | B1 |
9158630 | Natanzon | Oct 2015 | B1 |
9160526 | Raizen et al. | Oct 2015 | B1 |
9177670 | Derbeko et al. | Nov 2015 | B1 |
9189309 | Ma | Nov 2015 | B1 |
9189339 | Cohen et al. | Nov 2015 | B1 |
9189341 | Natanzon et al. | Nov 2015 | B1 |
9201736 | Moore et al. | Dec 2015 | B1 |
9218251 | Hemashekar et al. | Dec 2015 | B1 |
9223659 | Natanzon et al. | Dec 2015 | B1 |
9225529 | Natanzon et al. | Dec 2015 | B1 |
9235481 | Natanzon et al. | Jan 2016 | B1 |
9235524 | Derbeko et al. | Jan 2016 | B1 |
9235632 | Natanzon | Jan 2016 | B1 |
9244997 | Natanzon et al. | Jan 2016 | B1 |
9256605 | Natanzon | Feb 2016 | B1 |
9274718 | Natanzon et al. | Mar 2016 | B1 |
9275063 | Natanzon | Mar 2016 | B1 |
9286052 | Solan et al. | Mar 2016 | B1 |
9305009 | Bono et al. | Apr 2016 | B1 |
9323750 | Natanzon et al. | Apr 2016 | B2 |
9330155 | Bono et al. | May 2016 | B1 |
9336094 | Wolfson et al. | May 2016 | B1 |
9336230 | Natanzon | May 2016 | B1 |
9367260 | Natanzon | Jun 2016 | B1 |
9378096 | Erel et al. | Jun 2016 | B1 |
9378219 | Bono et al. | Jun 2016 | B1 |
9378261 | Bono et al. | Jun 2016 | B1 |
9383937 | Frank et al. | Jul 2016 | B1 |
9389800 | Natanzon et al. | Jul 2016 | B1 |
9405481 | Cohen et al. | Aug 2016 | B1 |
9405684 | Derbeko et al. | Aug 2016 | B1 |
9405765 | Natanzon | Aug 2016 | B1 |
9411535 | Shemer et al. | Aug 2016 | B1 |
9448544 | Slessman et al. | Sep 2016 | B2 |
9459804 | Natanzon et al. | Oct 2016 | B1 |
9460028 | Raizen et al. | Oct 2016 | B1 |
9471579 | Natanzon | Oct 2016 | B1 |
9477407 | Marshak et al. | Oct 2016 | B1 |
9501542 | Natanzon | Nov 2016 | B1 |
9507732 | Natanzon et al. | Nov 2016 | B1 |
9507845 | Natanzon et al. | Nov 2016 | B1 |
9514138 | Natanzon et al. | Dec 2016 | B1 |
9524218 | Veprinsky et al. | Dec 2016 | B1 |
9529885 | Natanzon et al. | Dec 2016 | B1 |
9535800 | Natanzon et al. | Jan 2017 | B1 |
9535801 | Natanzon et al. | Jan 2017 | B1 |
9547459 | BenHanokh et al. | Jan 2017 | B1 |
9547591 | Natanzon et al. | Jan 2017 | B1 |
9552405 | Moore et al. | Jan 2017 | B1 |
9557921 | Cohen et al. | Jan 2017 | B1 |
9557925 | Natanzon | Jan 2017 | B1 |
9563517 | Natanzon et al. | Feb 2017 | B1 |
9563684 | Natanzon et al. | Feb 2017 | B1 |
9575851 | Natanzon et al. | Feb 2017 | B1 |
9575857 | Natanzon | Feb 2017 | B1 |
9575894 | Natanzon et al. | Feb 2017 | B1 |
9582382 | Natanzon et al. | Feb 2017 | B1 |
9588703 | Natanzon et al. | Mar 2017 | B1 |
9588847 | Natanzon et al. | Mar 2017 | B1 |
9594822 | Natanzon et al. | Mar 2017 | B1 |
9600377 | Cohen et al. | Mar 2017 | B1 |
9619543 | Natanzon et al. | Apr 2017 | B1 |
9632881 | Natanzon | Apr 2017 | B1 |
9665305 | Natanzon et al. | May 2017 | B1 |
9710177 | Natanzon | Jul 2017 | B1 |
9720618 | Panidis et al. | Aug 2017 | B1 |
9722788 | Natanzon et al. | Aug 2017 | B1 |
9727429 | Moore et al. | Aug 2017 | B1 |
9733969 | Derbeko et al. | Aug 2017 | B2 |
9737111 | Lustik | Aug 2017 | B2 |
9740572 | Natanzon et al. | Aug 2017 | B1 |
9740573 | Natanzon | Aug 2017 | B1 |
9740880 | Natanzon et al. | Aug 2017 | B1 |
9749300 | Cale et al. | Aug 2017 | B1 |
9772789 | Natanzon et al. | Sep 2017 | B1 |
9798472 | Natanzon et al. | Oct 2017 | B1 |
9798490 | Natanzon | Oct 2017 | B1 |
9804934 | Natanzon et al. | Oct 2017 | B1 |
9811431 | Natanzon et al. | Nov 2017 | B1 |
9823865 | Natanzon et al. | Nov 2017 | B1 |
9823973 | Natanzon | Nov 2017 | B1 |
9832261 | Don et al. | Nov 2017 | B2 |
9846698 | Panidis et al. | Dec 2017 | B1 |
9875042 | Natanzon et al. | Jan 2018 | B1 |
9875162 | Panidis et al. | Jan 2018 | B1 |
9880777 | Bono et al. | Jan 2018 | B1 |
9881014 | Bono et al. | Jan 2018 | B1 |
9910620 | Veprinsky et al. | Mar 2018 | B1 |
9910621 | Golan et al. | Mar 2018 | B1 |
9910735 | Natanzon | Mar 2018 | B1 |
9910739 | Natanzon et al. | Mar 2018 | B1 |
9917854 | Natanzon et al. | Mar 2018 | B2 |
9921955 | Derbeko et al. | Mar 2018 | B1 |
9933957 | Cohen et al. | Apr 2018 | B1 |
9934302 | Cohen et al. | Apr 2018 | B1 |
9940205 | Natanzon | Apr 2018 | B2 |
9940460 | Derbeko et al. | Apr 2018 | B1 |
9946649 | Natanzon et al. | Apr 2018 | B1 |
9959061 | Natanzon et al. | May 2018 | B1 |
9965306 | Natanzon et al. | May 2018 | B1 |
9990256 | Natanzon | Jun 2018 | B1 |
9996539 | Natanzon | Jun 2018 | B1 |
10007626 | Saad et al. | Jun 2018 | B1 |
10019194 | Baruch et al. | Jul 2018 | B1 |
10025931 | Natanzon et al. | Jul 2018 | B1 |
10031675 | Veprinsky et al. | Jul 2018 | B1 |
10031690 | Panidis et al. | Jul 2018 | B1 |
10031692 | Elron et al. | Jul 2018 | B2 |
10031703 | Natanzon et al. | Jul 2018 | B1 |
10037251 | Bono et al. | Jul 2018 | B1 |
10042579 | Natanzon | Aug 2018 | B1 |
10042751 | Veprinsky et al. | Aug 2018 | B1 |
10048996 | Bell | Aug 2018 | B1 |
10055146 | Natanzon et al. | Aug 2018 | B1 |
10055148 | Natanzon et al. | Aug 2018 | B1 |
10061666 | Natanzon et al. | Aug 2018 | B1 |
10067694 | Natanzon et al. | Sep 2018 | B1 |
10067837 | Natanzon et al. | Sep 2018 | B1 |
10078459 | Natanzon et al. | Sep 2018 | B1 |
10082980 | Cohen et al. | Sep 2018 | B1 |
10083093 | Natanzon et al. | Sep 2018 | B1 |
10095489 | Lieberman et al. | Oct 2018 | B1 |
10101943 | Ayzenberg et al. | Oct 2018 | B1 |
10108356 | Natanzon et al. | Oct 2018 | B1 |
10108507 | Natanzon | Oct 2018 | B1 |
10108645 | Bigman et al. | Oct 2018 | B1 |
10114581 | Natanzon et al. | Oct 2018 | B1 |
10120787 | Shemer et al. | Nov 2018 | B1 |
10120925 | Natanzon et al. | Nov 2018 | B1 |
10126946 | Natanzon et al. | Nov 2018 | B1 |
10133874 | Natanzon et al. | Nov 2018 | B1 |
10140039 | Baruch et al. | Nov 2018 | B1 |
10146436 | Natanzon et al. | Dec 2018 | B1 |
10146639 | Natanzon et al. | Dec 2018 | B1 |
10146675 | Shemer et al. | Dec 2018 | B1 |
10146961 | Baruch et al. | Dec 2018 | B1 |
10148751 | Natanzon | Dec 2018 | B1 |
10152246 | Lieberman et al. | Dec 2018 | B1 |
10152267 | Ayzenberg et al. | Dec 2018 | B1 |
10152384 | Amit et al. | Dec 2018 | B1 |
10157014 | Panidis et al. | Dec 2018 | B1 |
10185583 | Natanzon et al. | Jan 2019 | B1 |
10191677 | Natanzon et al. | Jan 2019 | B1 |
10191687 | Baruch et al. | Jan 2019 | B1 |
10191755 | Natanzon et al. | Jan 2019 | B1 |
10203904 | Natanzon et al. | Feb 2019 | B1 |
10210073 | Baruch et al. | Feb 2019 | B1 |
10223007 | Natanzon et al. | Mar 2019 | B1 |
10223023 | Natanzon et al. | Mar 2019 | B1 |
10223131 | Lieberman et al. | Mar 2019 | B1 |
10229006 | Natanzon et al. | Mar 2019 | B1 |
10229056 | Panidis et al. | Mar 2019 | B1 |
10235055 | Saad et al. | Mar 2019 | B1 |
10235060 | Baruch et al. | Mar 2019 | B1 |
10235061 | Natanzon et al. | Mar 2019 | B1 |
10235064 | Natanzon et al. | Mar 2019 | B1 |
10235087 | Baruch et al. | Mar 2019 | B1 |
10235088 | Baruch et al. | Mar 2019 | B1 |
10235090 | Baruch et al. | Mar 2019 | B1 |
10235091 | Ayzenberg et al. | Mar 2019 | B1 |
10235092 | Natanzon et al. | Mar 2019 | B1 |
10235145 | Natanzon et al. | Mar 2019 | B1 |
10235196 | Natanzon et al. | Mar 2019 | B1 |
10235247 | Natanzon et al. | Mar 2019 | B1 |
10235249 | Natanzon et al. | Mar 2019 | B1 |
10235252 | Lieberman et al. | Mar 2019 | B1 |
10250679 | Natanzon et al. | Apr 2019 | B1 |
10255137 | Panidis et al. | Apr 2019 | B1 |
10255291 | Natanzon et al. | Apr 2019 | B1 |
20060053338 | Cousins | Mar 2006 | A1 |
20060168473 | Sahoo | Jul 2006 | A1 |
20060212744 | Benner | Sep 2006 | A1 |
20080189578 | Raghuraman | Aug 2008 | A1 |
20140143610 | Nakatsugawa | May 2014 | A1 |
20150195192 | Vasseur | Jul 2015 | A1 |
20150261642 | Gottlib, III | Sep 2015 | A1 |
20160034362 | Al-Wahabi | Feb 2016 | A1 |
20160253563 | Lam | Sep 2016 | A1 |
20160306691 | Aneja | Oct 2016 | A1 |
20170011298 | Pal | Jan 2017 | A1 |
20170249200 | Mustafi | Aug 2017 | A1 |
20180074887 | Braham | Mar 2018 | A1 |
Entry |
---|
EMC Corporation, “EMC RECOVERPOINT/EX;” Applied Technology; White Paper; Apr. 2012; 17 Pages. |
Feldman, “A Practical Approach to Business Continuity and Disaster Recovery Planning;” DFC International Computing Inc.; Nov. 2010; 69 Pages. |
U.S. Appl. No. 14/496,783, filed Sep. 25, 2014, Natanzon et al. |
U.S. Appl. No. 14/496,790, filed Sep. 25, 2014, Cohen et al. |
U.S. Appl. No. 14/559,036, filed Dec. 3, 2014, Natanzon et al. |
U.S. Appl. No. 14/753,389, filed Jun. 29, 2015, Nir et al. |
U.S. Appl. No. 14/976,719, filed Dec. 21, 2015, Natanzon. |
U.S. Appl. No. 14/978,378, filed Dec. 22, 2015, Bigman et al. |
U.S. Appl. No. 15/085,148, filed Mar. 30, 2016, Baruch et al. |
U.S. Appl. No. 15/274,362, filed Sep. 23, 2016, Baruch et al. |
U.S. Appl. No. 15/275,768, filed Sep. 26, 2016, Natanzon et al. |
U.S. Appl. No. 15/275,756, filed Sep. 26, 2016, Natanzon et al. |
U.S. Appl. No. 15/379,940, filed Dec. 15, 2016, Baruch et al. |
U.S. Appl. No. 15/380,013, filed Dec. 15, 2016, Baruch et al. |
U.S. Appl. No. 15/390,996, filed Dec. 27, 2016, Natanzon et al. |
U.S. Appl. No. 15/391,030, filed Dec. 27, 2016, Shemer et al. |
U.S. Appl. No. 15/970,243, filed May 3, 2018, Schneider et al. |
U.S. Appl. No. 16/052,037, filed Aug. 1, 2018, Schneider et al. |
U.S. Appl. No. 16/048,763, filed Jul. 30, 2018, Schneider et al. |
U.S. Appl. No. 16/050,400, filed Jul. 31, 2018, Alkalay et al. |
U.S. Appl. No. 16/179,295, filed Nov. 2, 2018, Natanzon et al. |
U.S. Appl. No. 16/261,174, filed Jan. 29, 2019, Natanzon et al. |
U.S. Appl. No. 16/368,008, filed Mar. 28, 2019, Natanzon et al. |