Methods for enhancing management of backup data sets and devices thereof

Information

  • Patent Grant
  • 9286298
  • Patent Number
    9,286,298
  • Date Filed
    Friday, October 14, 2011
  • Date Issued
    Tuesday, March 15, 2016
  • Field of Search
    • US
    • 707/674
    • 707/654
    • 707/640
    • 707/655
    • 707/659
    • 707/646
    • 707/E17.005
    • 707/E17.01
    • 707/822
    • 707/613
    • 707/E17.001
    • 707/614
    • 707/634
    • 714/6
    • 714/4
    • 714/7
    • 714/13
    • 714/57
    • 714/15
    • 714/20
    • 711/167
    • 711/114
    • 711/170
    • 711/147
    • 711/100
    • 711/162
    • 711/161
    • 711/E12.103
    • CPC
    • G06F17/30174
    • G06F11/00
    • G06F11/1471
    • G06F11/2038
    • G06F11/0784
    • G06F12/0813
    • G06F17/30067
    • G06F3/0608
    • G06F3/0601
    • G06F12/0246
    • G06F8/665
    • G06F9/5016
    • G06F11/1461
    • G06F11/1441
    • G06F9/4418
    • G06F11/0793
    • G06F11/1438
    • G06F11/20
    • G06F17/00
    • G06F17/30
    • G06F21/568
    • G06F21/575
  • International Classifications
    • G06F17/30
    • G06F9/44
    • Term Extension
      503
Abstract
A method, non-transitory computer readable medium, and apparatus that enhance management of backup data sets include receiving an operation on a region of a production data set. A corresponding region of a backup data set is marked as having a change state status until the received operation is completed on the region of the production data set and mirrored on the corresponding region of the backup data set.
Description
FIELD

This technology relates to methods for enhancing management of backup data sets to support transparent failover and devices thereof.


BACKGROUND

Synchronous mirroring of data sets assures that a backup copy of the data set is always a perfect replica of the current production copy of the data set. Therefore, failover can be performed automatically without human intervention since there is no question about the quality of the backup data.


Unfortunately, synchronous mirroring requires that an application's write be stalled until both the current production data set and the backup data set have been updated. This need to synchronize writes causes unacceptable performance loss in many real-world situations. The performance loss is particularly high when the backup data set is updated over a slow network link such as over a WAN to a cloud storage provider.


Asynchronous mirroring of data sets greatly reduces the performance penalty discussed above. With asynchronous mirroring, an application's write is acknowledged before the backup data set has been updated.


Unfortunately, this relaxed update comes at a cost. At the time of a failover, the backup copy is in an unknown state that may or may not represent the state of the production data at the time of the failover. This uncertainty greatly complicates the failover process. Essentially the challenge comes down to how a storage administrator determines that the quality of the data in the backup data set is “good enough” to support a successful failover.


SUMMARY

A method for enhancing management of backup data sets includes receiving at a virtualization management computing device an operation on a region of a production data set. A corresponding region of a backup data set is marked as having a change state status with the virtualization management computing device until the received operation is completed on the region of the production data set and mirrored on the corresponding region of the backup data set.


A computer readable medium having stored thereon instructions for enhancing management of backup data sets comprising machine executable code which, when executed by at least one processor, causes the processor to perform steps including receiving an operation on a region of a production data set. A corresponding region of a backup data set is marked as having a change state status until the received operation is completed on the region of the production data set and mirrored on the corresponding region of the backup data set.


A virtualization management computing device includes a memory coupled to one or more processors which are configured to execute programmed instructions stored in the memory including receiving an operation on a region of a production data set. A corresponding region of a backup data set is marked as having a change state status until the received operation is completed on the region of the production data set and mirrored on the corresponding region of the backup data set.


This technology provides a number of advantages including providing a method, computer readable medium and apparatus that further enhance management of backup data sets to support transparent failover. This technology reduces the performance penalty associated with the highest form of mirror data management, i.e. synchronous mirroring, while still maintaining the same assurance of data quality, i.e. never returning an incorrect data set. This technology can be utilized with storage virtualization technology, applied to block or file storage, and implemented as part of a file server or integrated with WAN optimization technology. Further, this technology can be utilized with more complex mirror flows, such as mirroring data to another local location synchronously combined with remote change notification by way of example only.





BRIEF DESCRIPTION OF THE DRAWINGS


FIG. 1 is a block diagram of an exemplary network environment with enhanced management of backup data sets to support transparent failover;



FIG. 2 is a partial block diagram and partial functional diagram of the exemplary environment illustrated in FIG. 1; and



FIG. 3 is a flow chart of an exemplary method for managing a write operation with a production data set and a backup data set; and



FIG. 4 is a flow chart of an exemplary method for managing a read operation with the production data set and the backup data set with a failover.





DETAILED DESCRIPTION

An exemplary environment 10 with enhanced management of backup data sets is illustrated in FIGS. 1 and 2. The environment 10 includes a virtualization management computing device or VMCD 12, a plurality of client computing devices or CCD 14(1)-14(n), a production data storage device or PDSD 16, and a backup data storage device or BDSD 18 which are all coupled together by one or more communication networks 20(1)-20(4), although this environment can include other numbers and types of systems, devices, components, and elements in other configurations. This technology provides a number of advantages including providing improved management of backup data sets to support transparent failover.


The virtualization management computing device 12 provides a number of functions including management of backup data sets and virtualized access to these data sets between the client computing devices 14(1)-14(n) and data storage devices 16 and 18, although other numbers and types of systems can be used and other numbers and types of functions can be performed. In this particular example, the virtualization management computing device 12 includes an active VMCD node 320 and a standby VMCD node 322, although, for example, the active VMCD node and the standby VMCD node could each be in a separate computing device. In this example, the virtualization management computing device 12 includes a central processing unit (CPU) or processor 22, a memory 24, and an interface system 26 which are coupled together by a bus or other link, although other numbers and types of systems, devices, components, and elements in other configurations and locations can be used. The processor 22 executes a program of stored instructions for one or more aspects of the present technology as described and illustrated by way of the examples herein including for the active and standby VMCD nodes, although other types and numbers of processing devices and logic could be used and the processor 22 could execute other numbers and types of programmed instructions.


The memory 24 stores these programmed instructions for one or more aspects of the present technology as described and illustrated herein, although some or all of the programmed instructions could be stored and executed elsewhere. In this particular example, the memory 24 includes for the active VMCD node 320 a read error injector module (REI) 300, an invalid data table (IDT) 304, and a data mirror manager module (DMM) 306 and for the standby VMCD node 322 another read error injector module (REI) 310, another invalid data table (IDT) 314, and another data mirror manager module (DMM) 316 which are described in greater detail herein and in this example are all configured to be in communication with each other, although the memory 24 can comprise other types and numbers of modules, tables, and other programmed instructions and databases. A variety of different types of memory storage devices, such as a random access memory (RAM) or a read only memory (ROM) in the system or a floppy disk, hard disk, CD ROM, DVD ROM, or other computer readable medium which is read from and written to by a magnetic, optical, or other reading and writing system that is coupled to the processor 22, can be used for the memory 24.
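
By way of illustration only, the following is a minimal, self-contained sketch (in Python) of one way an invalid data table such as the IDT 304 or IDT 314 could be modeled in memory as a set of marked regions; the Region and InvalidDataTable names and the structure shown are assumptions made for this example rather than a description of any particular implementation.

```python
# Hypothetical sketch of an invalid data table (IDT): the Region and
# InvalidDataTable names are illustrative assumptions, not the patented design.
from dataclasses import dataclass
from threading import Lock


@dataclass(frozen=True)
class Region:
    """A contiguous region of a data set, identified by offset and length."""
    data_set_id: str
    offset: int
    length: int


class InvalidDataTable:
    """Tracks regions of the backup data set that are in a change (invalid) state."""

    def __init__(self) -> None:
        self._entries: set[Region] = set()
        self._lock = Lock()

    def mark_invalid(self, region: Region) -> None:
        """Record an invalidate notification entry for a region."""
        with self._lock:
            self._entries.add(region)

    def clear(self, region: Region) -> None:
        """Remove the entry once the mirrored write has completed."""
        with self._lock:
            self._entries.discard(region)

    def is_invalid(self, region: Region) -> bool:
        """True if the region currently has a change notification entry."""
        with self._lock:
            return region in self._entries
```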


The interface system 26 in the virtualization management computing device 12 is used to operatively couple and communicate between the virtualization management computing device 12 and the client computing devices 14(1)-14(n), the production data storage device 16, and the backup data storage device 18 via one or more of the communications networks 20(1)-20(4), although other types and numbers of communication networks or systems with other types and numbers of connections and configurations can be used. By way of example only, the one or more communications networks can use TCP/IP over Ethernet and industry-standard protocols, including NFS, CIFS, SOAP, XML, LDAP, and SNMP, although other types and numbers of communication networks, such as a direct connection, a local area network, a wide area network, modems and phone lines, e-mail, and wireless communication technology, each having their own communications protocols, can be used. In the exemplary environment 10 shown in FIGS. 1-2, four communication networks 20(1)-20(4) are illustrated, although other numbers and types could be used.


Each of the client computing devices 14(1)-14(n) utilizes the active VMCD node 320 in the virtualization management computing device 12 to conduct one or more operations with one or more of the production data storage device 16 and the backup data storage device 18, such as to create a file or directory, read or write a file, delete a file or directory, or rename a file or directory by way of example only, although other numbers and types of systems could be utilizing these resources and other types and numbers of functions utilizing other types of protocols could be performed. Each of the production data storage device 16 and the backup data storage device 18 store content, such as files and directories, although other numbers and types of storage systems which could have other numbers and types of functions and store other data could be used.


The client computing devices 14(1)-14(n), the production data storage device 16 and the backup data storage device 18 each include a central processing unit (CPU) or processor, a memory, and an interface or I/O system, which are coupled together by a bus or other link, although each could comprise other numbers and types of elements and components, such as control logic. The client computing devices 14(1)-14(n) may run interface applications, such as Web browsers, that may provide an interface to make requests for and send data to the production data storage device 16 and the backup data storage device 18 via the virtualization management computing device 12, although other types and numbers of applications may be executed. The virtualization management computing device 12 processes requests received from applications executing on client computing devices 14(1)-14(n) for files or directories on one or more of the production data storage device 16 and the backup data storage device 18 and also manages mirroring of data sets and interactions with the production data storage device 16 and the backup data storage device 18 as a result of these operations. The production data storage device 16 and the backup data storage device 18 provide access to stored data sets in response to received requests. Although a production data storage device 16 and a backup data storage device 18 are shown, other types and numbers of data storage devices can be used.


Although an example of the virtualization management computing device 12 with the active VMCD node 320 and the standby VMCD node 322, the plurality of client computing devices 14(1)-14(n), the production data storage device 16, and the backup data storage device 18 is described herein, each of these devices could be implemented in other configurations and manners on one or more of any suitable computer system or computing device. It is to be understood that the devices and systems of the examples described herein are for exemplary purposes, as many variations of the specific hardware and software used to implement the examples are possible, as will be appreciated by those skilled in the relevant art(s).


Furthermore, each of the systems of the examples may be conveniently implemented using one or more general purpose computer systems, microprocessors, digital signal processors, and micro-controllers, programmed according to the teachings of the examples, as described and illustrated herein, and as will be appreciated by those of ordinary skill in the art.


In addition, two or more computing systems or devices can be substituted for any one of the systems in any embodiment of the examples. Accordingly, principles and advantages of distributed processing, such as redundancy and replication also can be implemented, as desired, to increase the robustness and performance of the devices and systems of the examples. The examples may also be implemented on computer system or systems that extend across any suitable network using any suitable interface mechanisms and communications technologies, including by way of example only telecommunications in any suitable form (e.g., voice and modem), wireless communications media, wireless communications networks, cellular communications networks, G3 communications networks, Public Switched Telephone Networks (PSTNs), Packet Data Networks (PDNs), the Internet, intranets, and combinations thereof.


The examples may also be embodied as a computer readable medium having instructions stored thereon for one or more aspects of the present technology as described and illustrated by way of the examples herein, which, when executed by a processor, cause the processor to carry out the steps necessary to implement the methods of the examples, as described and illustrated herein.


An exemplary method for managing a write operation with a production data set and a backup data set will now be described with reference to FIGS. 1-3. In this illustrative example, initially there are duplicate stored data sets comprising a production data set stored in the production data storage device (PDSD) 16 and a backup data set stored in the backup data storage device (BDSD) 18, although there can be other numbers and types of data sets and storage devices in other configurations.


In step 100, the virtualization management computing device 12 receives a write request from one of the client computing devices or CCDs 14(1)-14(n) with respect to a region in the production data set, although other types and numbers of operations could be requested and from other sources.


In step 102, the data mirror manager (DMM) 306 in the active VMCD node 320 detects the received write request and sends an invalidate notification to the invalid data table (IDT) 314. In this example, the invalidate notification includes sufficient information to describe the exact region of the production data set in the PDSD 16 that is being modified or added to by the write operation. At some time in the future, the IDT 314 in the standby VMCD node 322 will persistently store this invalidate information and then acknowledge the notification from the DMM 306. Each region in the backup data set in the BDSD 18 corresponding to an invalidate notification entry stored in the IDT 314 is considered to be in an invalid state, although other types of designations could be used. By way of example only, the list of change notifications can include different types of change notifications, although other types or amounts of change notifications could be used. By extension, all regions in the backup data set in the BDSD 18 that do not have a corresponding invalidate notification entry in the IDT 314 are considered to be in a valid state, although other types of designations could be used.
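
By way of illustration only, the following self-contained sketch shows one possible form for the invalidate notification of step 102 and its handling on the standby node: the DMM describes the exact region being modified, and the standby persistently records the entry before acknowledging. The JSON message format, the function names, and the simple set and list stand-ins for the IDT 314 and its persistent store are assumptions made for this example.

```python
# Hypothetical sketch of step 102: the message format and the set/list
# stand-ins for the standby IDT and its persistent log are illustrative only.
import json


def build_invalidate_notification(data_set_id: str, offset: int, length: int) -> bytes:
    """Describe the exact region of the production data set being modified."""
    return json.dumps({
        "type": "invalidate",
        "data_set_id": data_set_id,
        "offset": offset,
        "length": length,
    }).encode("utf-8")


def handle_invalidate_on_standby(invalid_regions: set, durable_log: list,
                                 message: bytes) -> bytes:
    """Persistently record the entry, mark the region invalid, then acknowledge."""
    entry = json.loads(message)
    durable_log.append(message)  # persist the invalidate information first
    invalid_regions.add((entry["data_set_id"], entry["offset"], entry["length"]))
    return b"ACK"
```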


In step 104, DMM 306 sends a write request including the new data to the designated region in the production data set stored at the production data storage device 16, although the new data could be written into other types and numbers of storage devices. At some time in the future, the PDSD 16 will acknowledge that this write request has been successfully performed.


In step 106, DMM 306 sends a write request including the new data to the designated region in the backup data set stored at the backup data storage device 18, although the new data could be written into other types and numbers of storage devices. At some time in the future, the BDSD 18 will acknowledge that this write request has been successfully performed.
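
By way of illustration only, the sketch below shows one way the two write requests of steps 104 and 106 could be issued without waiting on each other, with the acknowledgements collected later in step 108; the write_region method and the client objects are hypothetical placeholders for the interfaces to the PDSD 16 and BDSD 18.

```python
# Hypothetical sketch of steps 104 and 106: write_region and the client objects
# are placeholders; only the non-blocking issue-then-wait pattern is shown.
from concurrent.futures import Future, ThreadPoolExecutor


def issue_mirrored_writes(production_client, backup_client, offset: int,
                          data: bytes, executor: ThreadPoolExecutor) -> tuple:
    """Submit the production write and the backup mirror write; each returned
    future completes when the corresponding storage device acknowledges."""
    prod_future: Future = executor.submit(production_client.write_region, offset, data)
    backup_future: Future = executor.submit(backup_client.write_region, offset, data)
    return prod_future, backup_future
```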


In step 108, DMM 306 waits for incoming responses to the various requests it has issued on behalf of the original write request from one of the client computing devices 14(1)-14(n).


In step 110, the virtualization management computing device 12 determines if the requested write in the production data set in the production data storage device 16 was successful. If in step 110, the virtualization management computing device 12 determines the requested write in the production data set in the production data storage device 16 was successful, then the Yes branch is taken to step 112.


In step 112, the virtualization management computing device 12 determines if the requested update for the received write request in the invalid data table 314 in the standby VMCD 322 was successful. If in step 112, the virtualization management computing device 12 determines the requested update for the received write request in the invalid data table 314 in the standby VMCD 322 was successful, then the Yes branch is taken to step 114. In step 114, the virtualization management computing device 12 transmits the fast acknowledgement or FAST ACK, i.e. an acknowledgement before determining whether the corresponding write to the backup data set in the backup data storage device 18 was successful, to the requesting one of the client computing devices 14(1)-14(n) and then this exemplary method can end. In this example, the successful write to the invalid data table 314 would provide an indication that the data in the corresponding region in the backup data set in the backup data storage device 18 was invalid in case of a failover and request for that data.


If in step 112, the virtualization management computing device 12 determines the requested update for the received write request in the invalid data table 314 in the standby VMCD 322 was unsuccessful, then the No branch is taken to step 116. In step 116, the virtualization management computing device 12 transmits an indication that the write to the production data set in the production data storage device 16 was successful, but that the requested update for the received write request in the invalid data table 314 in the standby VMCD 322 was unsuccessful and then this exemplary method can end, although other manners for handling this type of error could be used depending on the particular administrative need.


If back in step 110, the virtualization management computing device 12 determines the requested write in the production data set in the production data storage device 16 was unsuccessful, then an invalid state is set for that region of the production data set in the invalid data table 304 and the No branch is taken to step 118. In step 118, the virtualization management computing device 12 determines if the requested write in the backup data set in the backup data storage device 18 was successful. If in step 118, the virtualization management computing device 12 determines the requested write in the backup data set in the backup data storage device 18 was successful, then the Yes branch is taken to step 120. In step 120, the virtualization management computing device 12 provides an indication to the requesting one of the client computing devices 14(1)-14(n) that the primary write to the production data set in the PDSD 16 failed, but that the backup write to the backup data set in the BDSD 18 was successful and then this exemplary method can end.


If in step 118, the virtualization management computing device 12 determines the requested write in the backup data set in the backup data storage device 18 was unsuccessful, then the No branch is taken to step 122. In step 122, the virtualization management computing device 12 provides an indication to the requesting one of the client computing devices 14(1)-14(n) that the primary write to the production data set in the PDSD 16 and the backup write to the backup data set in the BDSD 18 were both unsuccessful and then this exemplary method can end.
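
By way of illustration only, the outcome handling of steps 110 through 122 can be summarized as the following self-contained decision sketch; the response labels are illustrative assumptions, and the only behavior taken from the description above is that a FAST ACK is returned as soon as the production write and the standby IDT update have both succeeded.

```python
# Hypothetical sketch of steps 110-122; the response labels are illustrative.
def select_write_response(production_ok: bool, idt_update_ok: bool,
                          backup_ok) -> str:
    """Map the acknowledgements gathered in step 108 to a client response.

    backup_ok may still be unknown (None) on the FAST ACK path, because the
    acknowledgement is sent before the backup write is confirmed.
    """
    if production_ok and idt_update_ok:
        return "FAST_ACK"                         # step 114
    if production_ok:
        return "PRODUCTION_OK_IDT_UPDATE_FAILED"  # step 116
    if backup_ok:
        return "PRODUCTION_FAILED_BACKUP_OK"      # step 120
    return "PRODUCTION_AND_BACKUP_FAILED"         # step 122
```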


An exemplary method for managing a read operation will now be described with reference to FIGS. 1-2 and 4. In step 200 the virtualization management computing device 12 receives a read request from one of the client computing devices 14(1)-14(n), although other types and numbers of operations could be received and from other sources.


In step 202, the read error injector (REI) 300 in the active VMCD node 320 in the virtualization management computing device 12 determines if the received read request is for a region of the production data set in the PDSD 16 with a change notification entry in the IDT 304. If in step 202, the REI 300 in the virtualization management computing device 12 determines the read request is not for a region of the production data set with a change notification entry in the IDT 304, then the No branch is taken to step 204. In step 204, the virtualization management computing device 12 processes the read request and sends a response to the requesting one of the client computing devices 14(1)-14(n).


If back in step 202, the REI 300 in the virtualization management computing device 12 determines the read request is for a region of the production data set in the PDSD 16 with a change notification entry in the IDT 304, then the Yes branch is taken to step 206. In step 206, the read error injector (REI) 300 in the active VMCD node 320 in the virtualization management computing device 12 determines if the received read request is for a region of the backup data set in the BDSD 18 with a change notification entry in the IDT 314. If in step 206, the read error injector (REI) 300 in the active VMCD node 320 in the virtualization management computing device 12 determines the received read request is not for a region of the backup data set in the BDSD 18 with a change notification entry in the IDT 314, then the No branch is taken to step 208. For ease of illustration, not all of the connections between the elements are illustrated in FIG. 2; for example, the connection between the REI 300 and the IDT 314 is discussed, but not shown. As noted earlier, in this example the read error injector module (REI) 300, the invalid data table (IDT) 304, the data mirror manager module (DMM) 306, the read error injector module (REI) 310, the invalid data table (IDT) 314, and the data mirror manager module (DMM) 316 are all configured to be in communication with each other, although other types of configurations can be used. In step 208, the virtualization management computing device 12 responds to the read request from one of the client computing devices 14(1)-14(n) with data from the corresponding region of the backup data set along with a primary fail error to indicate the data in the production data set in the PDSD 16 is invalid, although other manners for responding can be used.


If in step 206, the read error injector (REI) 300 in the active VMCD node 320 in the virtualization management computing device 12 determines the received read request is for a region of the backup data set in the BDSD 18 with a change notification entry in the IDT 314, then the Yes branch is taken to step 210. In step 210, the virtualization management computing device 12 can provide an indication to the requesting one of the client computing devices 14(1)-14(n) that the requested data in the production data set in the PDSD 16 and the backup data in the backup data set in the BDSD 18 are both invalid, although other manners for responding can be used and other numbers of optional steps could be executed as discussed below.
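
By way of illustration only, the read path of steps 200 through 210 can be sketched as follows; idt_production and idt_backup stand in for the IDT 304 and IDT 314, and the is_invalid and read_region methods are hypothetical interfaces assumed for this example.

```python
# Hypothetical sketch of steps 200-210; is_invalid and read_region are assumed
# interfaces, with idt_production and idt_backup standing in for IDT 304/314.
def handle_read(region, idt_production, idt_backup, production_dev, backup_dev):
    """Serve a read request for one region of the data set."""
    if not idt_production.is_invalid(region):
        # Step 204: no change notification entry for the production region.
        return production_dev.read_region(region), "OK"
    if not idt_backup.is_invalid(region):
        # Step 208: production region invalid but backup region valid; serve the
        # data from the backup data set along with a primary fail error.
        return backup_dev.read_region(region), "PRIMARY_FAIL"
    # Step 210: both the production and backup copies of the region are invalid.
    return None, "BOTH_INVALID"
```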


In this example, in step 210 the virtualization management computing device 12 optionally determines whether to delay or stall the requested read operation for a first period of time and send an inquiry or other notification to a designated administrator at an administrative computing device with a request for instructions on how to respond to the read operation and error. If in step 210, the virtualization management computing device 12 optionally determines to delay or stall the requested read operation for the first period of time and send an inquiry, then the Yes branch is taken to step 212.


In step 212, the virtualization management computing device 12 sends an inquiry or other notification to a designated administrator at an administrative computing device, identified in the memory 24 or otherwise designated, with a request for instructions on how to respond to the read operation and error. In step 214, the virtualization management computing device 12 obtains a response to the request for instructions on how to respond to the read operation and error from the administrative computing device, processes the read request in accordance with those instructions, and then this exemplary method can end.


If back in step 210, the virtualization management computing device 12 optionally determined not to delay or stall the requested read operation for the first period of time and send an inquiry, then the No branch is taken to step 216. In step 216, the virtualization management computing device 12 optionally can determine whether the error with the requested region of the data from the backup data set in the BDSD 18 is critical based on one or more factors. By way of example only, the virtualization management computing device 12 can determine whether the error indicated a corruption rate in the data below a set threshold for the type of data, or can determine if the error could be compensated for by executing an error correction application, and then provide the data from the backup data set if the error was not critical.
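
By way of illustration only, the optional criticality check of step 216 might look like the following sketch; the corruption_rate attribute, the threshold parameter, and the error-correction hook are hypothetical examples of the one or more factors mentioned above.

```python
# Hypothetical sketch of step 216; the attribute names, threshold, and
# error-correction hook are illustrative examples of possible factors.
def error_is_critical(error, corruption_threshold: float,
                      try_error_correction=None) -> bool:
    """Return True when the error on the backup region should be treated as critical."""
    corruption_rate = getattr(error, "corruption_rate", None)
    if corruption_rate is not None and corruption_rate < corruption_threshold:
        # Corruption is below the configured threshold for this type of data.
        return False
    if try_error_correction is not None and try_error_correction(error):
        # The error can be compensated for by an error correction routine.
        return False
    return True
```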


If in step 216 the virtualization management computing device 12 optionally determines the error with the requested region of data is not critical based on one or more factors, then the No branch is taken to step 218. In step 218, the virtualization management computing device 12 processes the read request to obtain the requested data from the corresponding region in the backup data set in the backup data storage device 18 and provides this data in a response to the requesting one of the client computing devices 14(1)-14(n) and then this exemplary method can end.


If in step 216 the virtualization management computing device 12 optionally determines the error with the requested region of data is critical based on one or more factors, then the Yes branch is taken to step 220. In step 220, the virtualization management computing device 12 sends an error message in a response to the read request to the requesting one of the client computing devices 14(1)-14(n) and then this exemplary method can end.


Accordingly, as illustrated and described herein, this technology provides a hybrid solution that combines the data quality certainty of synchronous mirroring with the low performance impact of asynchronous mirroring. Additionally, this technology can be used to enhance a globally distributed data caching infrastructure.


Having thus described the basic concept of this technology, it will be rather apparent to those skilled in the art that the foregoing detailed disclosure is intended to be presented by way of example only, and is not limiting. Various alterations, improvements, and modifications will occur and are intended to those skilled in the art, though not expressly stated herein. These alterations, improvements, and modifications are intended to be suggested hereby, and are within the spirit and scope of this technology. Additionally, the recited order of processing elements or sequences, or the use of numbers, letters, or other designations therefore, is not intended to limit the claimed processes to any order except as may be specified in the claims. Accordingly, this technology is limited only by the following claims and equivalents thereto.

Claims
  • 1. A method for enhancing management of backup data sets, the method comprising: receiving, by a virtualization management computing device comprising a processor, a request to perform an operation on a region of a production data set;marking, by the virtualization management computing device comprising the processor, a region of a backup data set corresponding to the region of the production data set on which the operation will be performed, to indicate a change state status for the marked region of the backup data set; anddetermining, by the virtualization management computing device comprising the processor, when a critical error has occurred in the marked region of the backup data set based on a data corruption rate for the marked region of the backup data set exceeding a set threshold after performance of the operation is completed on the region of the production data set and mirrored on the marked region of the backup data set.
  • 2. The method as set forth in claim 1, further comprising: executing, by the virtualization management computing device comprising the processor, the received operation on the region of the production data set;mirroring, by the virtualization management computing device comprising the processor the executed operation on the corresponding region of the backup data set; andmarking, by the virtualization management computing device comprising the processor, the corresponding region of the backup data set as a valid state when the received operation is completed on the region of the production data set and mirrored on the corresponding region of the backup data set.
  • 3. The method as set forth in claim 2 wherein the marking by the virtualization management computing device the corresponding region of the backup data set as valid further comprises: executing, by the virtualization management computing device comprising the processor, a write operation by the virtualization management computing device to clear the change state status.
  • 4. The method as set forth in claim 1 further comprising: maintaining, by the virtualization management computing device comprising the processor, one or more tables of change state regions of the production data set and the backup data set.
  • 5. The method as set forth in claim 1 further comprising: maintaining, by the virtualization management computing device comprising the processor, a list of change notifications.
  • 6. The method as set forth in claim 1, further comprising: performing, by the virtualization management computing device comprising the processor, error correction on the marked region of the backup data set when the critical error was determined to have occurred.
  • 7. A method for enhancing management of backup data sets, the method comprising: receiving, by a virtualization management computing device comprising a processor, an operation on a region of a production data set;marking, by the virtualization management computing device comprising the processor, a corresponding region of a backup data set as having a change state status until the received operation is completed on the region of the production data set and mirrored on the corresponding region of the backup data set;receiving, by the virtualization management computing device comprising the processor, a read operation to the corresponding region of the backup data set as a result of a failover;determining, by the virtualization management computing device comprising the processor, an error with respect to the received read operation when the marking indicates the corresponding region of the backup data set has the change state status;receiving, by the virtualization management computing device comprising the processor, another operation to the corresponding region of the backup data set as a result of a failover; andexecuting, by the virtualization management computing device comprising the processor, the received another operation on the corresponding region of the backup data set when the marking indicates the corresponding region of the backup data set is valid.
  • 8. The method as set forth in claim 7 further comprising: returning, by the virtualization management computing device comprising the processor, an error message with respect to the received read operation when the error is determined.
  • 9. The method as set forth in claim 7 further comprising: determining, by the virtualization management computing device comprising the processor, when to attempt a recovery from the production data set when the error is determined; andexecuting, by the virtualization management computing device comprising the processor, the read operation on the production data set when the determining provides an indication to attempt the recovery.
  • 10. The method as set forth in claim 7 further comprising: introducing, by the virtualization management computing device comprising the processor, a delay to a response to the read operation for a first period of time when the error is determined; andproviding, by the virtualization management computing device comprising the processor, a request for instructions to a designated administrator error during the first period of time.
  • 11. The method as set forth in claim 7 further comprising: determining, by the virtualization management computing device comprising the processor, when the error is critical based on one or more factors when the error is determined; andexecuting, by the virtualization management computing device comprising the processor, the read operation on the backup data set when the error is determined not to be critical.
  • 12. A non-transitory computer readable medium having stored thereon instructions for enhancing management of backup data sets comprising machine executable code which when executed by at least one processor, causes the processor to perform steps comprising: receiving a request to perform an operation on a region of a production data set;marking a region of a backup data set corresponding to the region of the production data set on which the operation will be performed, to indicate a change state status for the marked region of the backup data set; anddetermining when a critical error has occurred in the marked region of the backup data set based on a data corruption rate for the marked region of the backup data set exceeding a set threshold after performance of the operation is completed on the region of the production data set and mirrored on the marked region of the backup data set.
  • 13. The medium as set forth in claim 12, further having stored thereon instructions that when executed by the processor cause the processor to perform steps further comprising: executing the received operation on the region of the production data set;mirroring the executed operation on the corresponding region of the backup data set; andmarking the corresponding region of the backup data set as a valid state when the received operation is completed on the region of the production data set and mirrored on the corresponding region of the backup data set.
  • 14. The medium as set forth in claim 13 wherein the marking the corresponding region of the backup data set as valid further comprises: executing a write operation to clear the change state status.
  • 15. The medium as set forth in claim 12 further having stored thereon instructions that when executed by the processor cause the processor to perform steps further comprising: maintaining one or more tables of change state status regions of the production data set and the backup data set.
  • 16. The medium as set forth in claim 12 further having stored thereon instructions that when executed by the processor cause the processor to perform steps further comprising: maintaining a list of change notifications.
  • 17. The medium as set forth in claim 12, further having stored thereon instructions that when executed by the processor cause the processor to perform steps further comprising: performing error correction on the marked region of the backup data set when the critical error was determined to have occurred.
  • 18. A non-transitory computer readable medium having stored thereon instructions for enhancing management of backup data sets comprising machine executable code which when executed by at least one processor, causes the processor to perform steps comprising: receiving an operation on a region of a production data set;marking a corresponding region of a backup data set as having a change state status until the received operation is completed on the region of the production data set and mirrored on the corresponding region of the backup data set;receiving a read operation to the corresponding region of the backup data set as a result of a failover;determining an error with respect to the received read operation when the marking indicates the corresponding region of the backup data set has the change state status;receiving another operation to the corresponding region of the backup data set as a result of a failover; andexecuting the received another operation on the corresponding region of the backup data set when the marking indicates the corresponding region of the backup data set is valid.
  • 19. The medium as set forth in claim 18 further having stored thereon instructions that when executed by the processor cause the processor to perform steps further comprising: returning an error message with respect to the received read operation when the error is determined.
  • 20. The medium as set forth in claim 18 further having stored thereon instructions that when executed by the processor cause the processor to perform steps further comprising: determining when to attempt a recovery from the production data set when the error is determined; andexecuting the read operation on the production data set when the determining provides an indication to attempt the recovery.
  • 21. The medium as set forth in claim 18 further having stored thereon instructions that when executed by the processor cause the processor to perform steps further comprising: introducing a delay to a response to the read operation for a first period of time when the error is determined; andproviding a request for instructions to a designated administrator error during the first period of time.
  • 22. The medium as set forth in claim 18 further having stored thereon instructions that when executed by the processor cause the processor to perform steps further comprising: determining when the error is critical based on one or more factors when the error is determined; andexecuting the read operation on the backup data set when the error is determined not to be critical.
  • 23. A virtualization management computing device comprising: one or more processors;a memory coupled to the one or more processors which are configured to be capable of executing programmed instructions, which comprise the programmed instructions stored in the memory to:receive a request to perform an operation on a region of a production data set;mark a region of a backup data set corresponding to the region of the production data set on which the operation will be performed, to indicate a change state status for the marked region of the backup data set; anddetermine when a critical error has occurred in the marked region of the backup data set based on a data corruption rate for the marked region of the backup data set exceeding a set threshold after performance of the operation is completed on the region of the production data set and mirrored on the marked region of the backup data set.
  • 24. The device as set forth in claim 23, wherein the one or more processors is further configured to be capable of executing programmed instructions, which comprise the programmed instructions stored in the memory to: execute the received operation on the region of the production data set;mirror the executed operation on the corresponding region of the backup data set; andmark the corresponding region of the backup data set as a valid state when the received operation is completed on the region of the production data set and mirrored on the corresponding region of the backup data set.
  • 25. The device as set forth in claim 24 wherein the one or more processors is further configured to be capable of executing programmed instructions, which comprise the programmed instructions stored in the memory to: mark the corresponding region of the backup data set as valid further comprising executing a write operation to clear the change state status.
  • 26. The device as set forth in claim 23 wherein the one or more processors is further configured to be capable of executing programmed instructions comprising and stored in the memory to: maintain one or more tables of change state status regions of the production data set and the backup data set.
  • 27. The device as set forth in claim 23 wherein the one or more processors is further configured to be capable of executing programmed instructions comprising and stored in the memory to: maintain a list of change notifications.
  • 28. The device as set forth in claim 23, wherein the one or more processors is further configured to be capable of executing programmed instructions comprising and stored in the memory to: perform error correction on the marked region of the backup data set when the critical error was determined to have occurred.
  • 29. A virtualization management computing device comprising: one or more processors;a memory coupled to the one or more processors which are configured to be capable of executing programmed instructions comprising and stored in the memory to:receive an operation on a region of a production data set;mark a corresponding region of a backup data set as having a change state status until the received operation is completed on the region of the production data set and mirrored on the corresponding region of the backup data set;receive a read operation to the corresponding region of the backup data set as a result of a failover anddetermine an error with respect to the received read operation when the marking indicates the corresponding region of the backup data set has the change state status;receive another operation to the corresponding region of the backup data set as a result of a failover; andexecute the received another operation on the corresponding region of the backup data set when the marking indicates the corresponding region of the backup data set is valid.
  • 30. The device as set forth in claim 29 wherein the one or more processors is further configured to be capable of executing programmed instructions comprising and stored in the memory to: return an error message with respect to the received read operation when the error is determined.
  • 31. The device as set forth in claim 29 wherein the one or more processors is further configured to be capable of executing programmed instructions comprising and stored in the memory to: determine when to attempt a recovery from the production data set when the error is determined; andexecute the read operation on the production data set when the determining provides an indication to attempt the recovery.
  • 32. The device as set forth in claim 29 wherein the one or more processors is further configured to be capable of executing programmed instructions comprising and stored in the memory to: delay a response to the read operation for a first period of time when the error is determined; andprovide a request for instructions to a designated administrator error during the first period of time.
  • 33. The device as set forth in claim 29 wherein the one or more processors is further configured to be capable of executing programmed instructions comprising and stored in the memory to: determine when the error is critical based on one or more factors when the error is determined; andexecute the read operation on the backup data set when the error is determined not to be critical.
Parent Case Info

This application claims the benefit of U.S. Provisional Patent Application Ser. No. 61/393,194 filed Oct. 14, 2010, which is hereby incorporated by reference in its entirety.

20070226331 Srinivasan et al. Sep 2007 A1
20070239944 Rupanagunta et al. Oct 2007 A1
20070260830 Faibish et al. Nov 2007 A1
20080046432 Anderson et al. Feb 2008 A1
20080070575 Claussen et al. Mar 2008 A1
20080104344 Shimozono May 2008 A1
20080104347 Iwamura et al. May 2008 A1
20080104443 Akutsu et al. May 2008 A1
20080114718 Anderson et al. May 2008 A1
20080177994 Mayer Jul 2008 A1
20080189468 Schmidt et al. Aug 2008 A1
20080200207 Donahue et al. Aug 2008 A1
20080208933 Lyon Aug 2008 A1
20080209073 Tang Aug 2008 A1
20080215836 Sutoh et al. Sep 2008 A1
20080222223 Srinivasan et al. Sep 2008 A1
20080243769 Arbour et al. Oct 2008 A1
20080263401 Stenzel Oct 2008 A1
20080282047 Arakawa et al. Nov 2008 A1
20080294446 Guo et al. Nov 2008 A1
20090007162 Sheehan Jan 2009 A1
20090013138 Sudhakar Jan 2009 A1
20090019535 Mishra et al. Jan 2009 A1
20090037500 Kirshenbaum Feb 2009 A1
20090037975 Ishikawa et al. Feb 2009 A1
20090041230 Williams Feb 2009 A1
20090049260 Upadhyayula Feb 2009 A1
20090055507 Oeda Feb 2009 A1
20090055607 Schack et al. Feb 2009 A1
20090077097 Lacapra et al. Mar 2009 A1
20090077312 Miura Mar 2009 A1
20090089344 Brown et al. Apr 2009 A1
20090094252 Wong et al. Apr 2009 A1
20090106255 Lacapra et al. Apr 2009 A1
20090106263 Khalid et al. Apr 2009 A1
20090132616 Winter et al. May 2009 A1
20090161542 Ho Jun 2009 A1
20090187915 Chew et al. Jul 2009 A1
20090204649 Wong et al. Aug 2009 A1
20090204650 Wong et al. Aug 2009 A1
20090204705 Marinov et al. Aug 2009 A1
20090210431 Marinkovic et al. Aug 2009 A1
20090210875 Bolles et al. Aug 2009 A1
20090240705 Miloushev et al. Sep 2009 A1
20090240899 Akagawa et al. Sep 2009 A1
20090254592 Marinov et al. Oct 2009 A1
20090265396 Ram et al. Oct 2009 A1
20090313503 Atluri et al. Dec 2009 A1
20100017643 Baba et al. Jan 2010 A1
20100030777 Panwar et al. Feb 2010 A1
20100061232 Zhou et al. Mar 2010 A1
20100082542 Feng et al. Apr 2010 A1
20100122248 Robinson et al. May 2010 A1
20100199042 Bates et al. Aug 2010 A1
20100205206 Rabines et al. Aug 2010 A1
20100211547 Kamei et al. Aug 2010 A1
20100325257 Goel et al. Dec 2010 A1
20100325634 Ichikawa et al. Dec 2010 A1
20110083185 Sheleheda et al. Apr 2011 A1
20110087696 Lacapra Apr 2011 A1
20110093471 Brockway et al. Apr 2011 A1
20110099146 McAlister et al. Apr 2011 A1
20110099420 MacDonald McAlister et al. Apr 2011 A1
20110107112 Resch May 2011 A1
20110119234 Schack et al. May 2011 A1
20110255537 Ramasamy et al. Oct 2011 A1
20110296411 Tang et al. Dec 2011 A1
20110320882 Beaty et al. Dec 2011 A1
20120042115 Young Feb 2012 A1
20120078856 Linde Mar 2012 A1
20120144229 Nadolski Jun 2012 A1
20120150699 Trapp et al. Jun 2012 A1
20120246637 Kreeger et al. Sep 2012 A1
20130007239 Agarwal et al. Jan 2013 A1
20130058252 Casado et al. Mar 2013 A1
20130058255 Casado et al. Mar 2013 A1
20140372599 Gutt et al. Dec 2014 A1
Foreign Referenced Citations (21)
Number Date Country
2003300350 Jul 2004 AU
2080530 Apr 1994 CA
2512312 Jul 2004 CA
0605088 Feb 1996 EP
0 738 970 Oct 1996 EP
63010250 Jan 1988 JP
6205006 Jul 1994 JP
06-332782 Dec 1994 JP
8-021924 Mar 1996 JP
08-328760 Dec 1996 JP
08-339355 Dec 1996 JP
9016510 Jan 1997 JP
11282741 Oct 1999 JP
2000-183935 Jun 2000 JP
566291 Dec 2008 NZ
0239696 May 2002 WO
WO 02056181 Jul 2002 WO
WO 2004061605 Jul 2004 WO
2006091040 Aug 2006 WO
WO 2008130983 Oct 2008 WO
WO 2008147973 Dec 2008 WO
Non-Patent Literature Citations (91)
Entry
Debnath et al., "ChunkStash: Speeding Up Inline Storage Deduplication Using Flash Memory," USENIX Annual Technical Conference, 2010, usenix.org, pp. 1-15.
Oracle® Secure Backup Reference, Release 10.1, B14236-01, Mar. 2006, pp. 1-456.
Gupta et al., "Algorithms for Packet Classification", Computer Systems Laboratory, Stanford University, CA, Mar./Apr. 2001, pp. 1-29.
Heinz II G., “Priorities in Stream Transmission Control Protocol (SCTP) Multistreaming”, Thesis submitted to the Faculty of the University of Delaware, Spring 2003, pp. 1-35.
Internet Protocol, "DARPA Internet Program Protocol Specification", (RFC:791), Information Sciences Institute, University of Southern California, Sep. 1981, pp. 1-49.
Ilvesmaki M., et al., “On the capabilities of application level traffic measurements to differentiate and classify Internet traffic”, Presented in SPIE's International Symposium ITcom, Aug. 19-21, 2001, pp. 1-11, Denver, Colorado.
Modiano E., “Scheduling Algorithms for Message Transmission Over a Satellite Broadcast System,” MIT Lincoln Laboratory Advanced Network Group, Nov. 1997, pp. 1-7.
Ott D., et al., “A Mechanism for TCP-Friendly Transport-level Protocol Coordination” USENIX Annual Technical Conference, 2002, University of North Carolina at Chapel Hill, pp. 1-12.
Padmanabhan V., et al., "Using Predictive Prefetching to Improve World Wide Web Latency", SIGCOMM, 1996, pp. 1-15.
Rosen E., et al., "MPLS Label Stack Encoding", (RFC:3032) Network Working Group, Jan. 2001, pp. 1-22, (http://www.ietf.org/rfc/rfc3032.txt).
Wang B., “Priority and Realtime Data Transfer Over the Best-Effort Internet”, Dissertation Abstract, Sep. 2005, ScholarWorks@UMASS.
Woo T.Y.C., "A Modular Approach to Packet Classification: Algorithms and Results", Nineteenth Annual Conference of the IEEE Computer and Communications Societies 3(3):1213-22, Mar. 26-30, 2000, abstract only, (http://ieeexplore.ieee.org/xpl/freeabs_all.jsp?arnumber=832499).
U.S. Appl. No. 13/770,685, filed Feb. 19, 2013, entitled “System and Method for Achieving Hardware Acceleration for Asymmetric Flow Connections,” Inventor H. Cai.
U.S. Appl. No. 12/399,784, filed Mar. 6, 2009, entitled “Method for Managing Storage of a File System and Systems Thereof,” Inventor T. Wong.
U.S. Appl. No. 14/041,838, filed Sep. 30, 2013, entitled “Hardware Assisted Flow Acceleration and L2 SMAC Management in a Heterogeneous Distributed Multi-Tenant Virtual Clustered System,” Inventor H. Cai.
U.S. Appl. No. 14/194,277, filed Feb. 28, 2014, entitled “Device for Topology Hiding of a Visited Network,” Inventor L. Ridel.
Debnath et al., "ChunkStash: Speeding Up Inline Storage Deduplication Using Flash Memory," Microsoft Research, Oct. 2011, pp. 1-15.
Oracle Secure Backup Reference, Release 10.1, B14236-01, Mar. 2006, pp. 1-456.
“The AFS File System in Distributed Computing Environment,” www.transarc.ibm.com/Library/whitepapers/AFS/afsoverview.html, last accessed on Dec. 20, 2002.
Aguilera, Marcos K. et al., “Improving recoverability in multi-tier storage systems,” International Conference on Dependable Systems and Networks (DSN-2007), Jun. 2007, 10 pages, Edinburgh, Scotland.
Anderson, Darrell C. et al., “Interposed Request Routing for Scalable Network Storage,” ACM Transactions on Computer Systems 20(1): (Feb. 2002), pp. 1-24.
Anderson et al., “Serverless Network File System,” in the 15th Symposium on Operating Systems Principles, Dec. 1995, Association for Computing Machinery, Inc.
Anonymous, “How DFS Works: Remote File Systems,” Distributed File System (DFS) Technical Reference, retrieved from the Internet on Feb. 13, 2009: URL<:http://technetmicrosoft.com/en-us/library/cc782417WS.10,printer).aspx> (Mar. 2003).
Apple, Inc., “Mac OS X Tiger Keynote Intro. Part 2,” Jun. 2004, www.youtube.com <http://www.youtube.com/watch?v=zSBJwEmRJbY>, p. 1.
Apple, Inc., “Tiger Developer Overview Series: Working with Spotlight,” Nov. 23, 2004, www.apple.com using www.archive.org <http://web.archive.org/web/20041123005335/developer.apple.com/macosx/tiger/spotlight.html>, pp. 1-6.
“A Storage Architecture Guide,” Second Edition, 2001, Auspex Systems, Inc., www.auspex.com, last accessed on Dec. 30, 2002.
Basney et al., “Credential Wallets: A Classification of Credential Repositories Highlighting MyProxy,” TPRC 2003, Sep. 19-21, 2003, pp. 1-20.
Botzum, Keys, “Single Sign On—A Contrarian View,” Open Group Website, <http://www.opengroup.org/security/topics.htm>, Aug. 6, 2001, pp. 1-8.
Cabrera et al., “Swift: A Storage Architecture for Large Objects,” in Proceedings of the-Eleventh IEEE Symposium on Mass Storage Systems, Oct. 1991, pp. 123-128.
Cabrera et al., “Swift: Using Distributed Disk Striping to Provide High I/O Data Rates,” Fall 1991, pp. 405-436, vol. 4, No. 4, Computing Systems.
Cabrera et al., “Using Data Striping in a Local Area Network,” 1992, technical report No. UCSC-CRL-92-09 of the Computer & Information Sciences Department of University of California at Santa Cruz.
Callaghan et al., "NFS Version 3 Protocol Specification" (RFC 1813), Jun. 1995, The Internet Engineering Task Force (IETF), www.ietf.org, last accessed on Dec. 30, 2002.
Carns et al., “PVFS: A Parallel File System for Linux Clusters,” in Proceedings of the Extreme Linux Track: 4th Annual Linux Showcase and Conference, Oct. 2000, pp. 317-327, Atlanta, Georgia, USENIX Association.
Cavale, M. R., “Introducing Microsoft Cluster Service (MSCS) in the Windows Server 2003”, Microsoft Corporation, Nov. 2002.
“CSA Persistent File System Technology,” A White Paper, Jan. 1, 1999, p. 1-3, http://www.cosoa.com/white—papers/pfs.php, Colorado Software Architecture, Inc.
“Distributed File System: A Logical View of Physical Storage: White Paper,” 1999, Microsoft Corp., www.microsoft.com, <http://www.eu.microsoft.com/TechNet/prodtechnol/windows2000serv/maintain/DFSnt95>, pp. 1-26, last accessed on Dec. 20, 2002.
English Translation of Notification of Reason(s) for Refusal for JP 2002-556371 (Dispatch Date: Jan. 22, 2007).
Fan et al., "Summary Cache: A Scalable Wide-Area Web Cache Sharing Protocol", Computer Communications Review, Association for Computing Machinery, New York, USA, Oct. 1998, vol. 28, No. 4, pp. 254-265.
Farley, M., “Building Storage Networks,” Jan. 2000, McGraw Hill, ISBN 0072120509.
Gibson et al., “File Server Scaling with Network-Attached Secure Disks,” in Proceedings of the ACM International Conference on Measurement and Modeling of Computer Systems (Sigmetrics '97), Association for Computing Machinery, Inc., Jun. 15-18, 1997.
Gibson et al., “NASD Scalable Storage Systems,” Jun. 1999, USENIX99, Extreme Linux Workshop, Monterey, California.
Harrison, C., May 19, 2008 response to Communication pursuant to Article 96(2) EPC dated Nov. 9, 2007 in corresponding European patent application No. 02718824.2.
Hartman, J., “The Zebra Striped Network File System,” 1994, Ph.D. dissertation submitted in the Graduate Division of the University of California at Berkeley.
Haskin et al., “The Tiger Shark File System,” 1996, in proceedings of IEEE, Spring COMPCON, Santa Clara, CA, www.research.ibm.com, last accessed on Dec. 30, 2002.
Hu, J., Final Office action dated Sep. 21, 2007 for related U.S. Appl. No. 10/336,784.
Hu, J., Office action dated Feb. 6, 2007 for related U.S. Appl. No. 10/336,784.
Hwang et al., "Designing SSI Clusters with Hierarchical Checkpointing and Single I/O Space," IEEE Concurrency, Jan.-Mar. 1999, pp. 60-69.
International Search Report for International Patent Application No. PCT/US2008/083117 (Jun. 23, 2009).
International Search Report for International Patent Application No. PCT/US2008/060449 (Apr. 9, 2008).
International Search Report for International Patent Application No. PCT/US2008/064677 (Sep. 6, 2009).
International Search Report for International Patent Application No. PCT/US02/00720, Jul. 8, 2004.
International Search Report from International Application No. PCT/US03/41202, mailed Sep. 15, 2005.
Karamanolis, C. et al., “An Architecture for Scalable and Manageable File Services,” HPL-2001-173, Jul. 26, 2001. p. 1-114.
Katsurashima, W. et al., "NAS Switch: A Novel CIFS Server Virtualization," Proceedings of the 20th IEEE/11th NASA Goddard Conference on Mass Storage Systems and Technologies, 2003 (MSST 2003), Apr. 2003.
Kimball, C.E. et al., "Automated Client-Side Integration of Distributed Application Servers," 13th LISA Conf., 1999, pp. 275-282 of the Proceedings.
Klayman, J., Nov. 13, 2008 e-mail to Japanese associate including instructions for response to office action dated May 26, 2008 in corresponding Japanese patent application No. 2002-556371.
Klayman, J., Response filed by Japanese associate to office action dated Jan. 22, 2007 in corresponding Japanese patent application No. 2002-556371.
Klayman, J., Jul. 18, 2007 e-mail to Japanese associate including instructions for response to office action dated Jan. 22, 2007 in corresponding Japanese patent application No. 2002-556371.
Kohl et al., "The Kerberos Network Authentication Service (V5)," RFC 1510, Sep. 1993. (http://www.ietf.org/rfc/rfc1510.txt?number=1510).
Korkuzas, V., Communication pursuant to Article 96(2) EPC dated Sep. 11, 2007 in corresponding European patent application No. 02718824.2-2201.
Lelil, S., "Storage Technology News: AutoVirt adds tool to help data migration projects," Feb. 25, 2011, last accessed Mar. 17, 2011, <http://searchstorage.techtarget.com/news/article/0,289142,sid5_gci1527986,00.html>.
Long et al., “Swift/RAID: A distributed RAID System”, Computing Systems, Summer 1994, vol. 7, pp. 333-359.
“NERSC Tutorials: I/O on the Cray T3E, ‘Chapter 8, Disk Striping’,” National Energy Research Scientific Computing Center (NERSC), http://hpcfnersc.gov, last accessed on Dec. 27, 2002.
Noghani et al., "A Novel Approach to Reduce Latency on the Internet: 'Component-Based Download'," Proceedings of the Int'l Conf. on Internet Computing, Las Vegas, NV, Jun. 2000, pp. 1-6.
Norton et al., “CIFS Protocol Version CIFS-Spec 0.9,” 2001, Storage Networking Industry Association (SNIA), www.snia.org, last accessed on Mar. 26, 2001.
Novotny et al., “An Online Credential Repository for the Grid: MyProxy,” 2001, pp. 1-8.
Pashalidis et al., “A Taxonomy of Single Sign-On Systems,” 2003, pp. 1-16, Royal Holloway, University of London, Egham Surray, TW20, 0EX, United Kingdom.
Pashalidis et al., “Impostor: A Single Sign-On System for Use from Untrusted Devices,” Global Telecommunications Conference, 2004, GLOBECOM '04, IEEE, Issue Date: Nov. 29-Dec. 3, 2004.Royal Holloway, University of London.
Patterson et al., "A case for redundant arrays of inexpensive disks (RAID)", Chicago, Illinois, Jun. 1-3, 1988, in Proceedings of ACM SIGMOD conference on the Management of Data, pp. 109-116, Association for Computing Machinery, Inc., www.acm.org, last accessed on Dec. 20, 2002.
Pearson, P.K., “Fast Hashing of Variable-Length Text Strings,” Comm. of the ACM, Jun. 1990, pp. 1-4, vol. 33, No. 6.
Peterson, M., “Introducing Storage Area Networks,” Feb. 1998, InfoStor, www.infostor.com, last accessed on Dec. 20, 2002.
Preslan et al., “Scalability and Failure Recovery in a Linux Cluster File System,” in Proceedings of the 4th Annual Linux Showcase & Conference, Atlanta, Georgia, Oct. 10-14, 2000, pp. 169-180 of the Proceedings, www.usenix.org, last accessed on Dec. 20, 2002.
Response filed Jul. 6, 2007 to Office action dated Feb. 6, 2007 for related U.S. Appl. No. 10/336,784.
Response filed Mar. 20, 2008 to Final Office action dated Sep. 21, 2007 for related U.S. Appl. No. 10/336,784.
Rodriguez et al., “Parallel-access for mirror sites in the Internet,” InfoCom 2000. Nineteenth Annual Joint Conference of the IEEE Computer and Communications Societies. Proceedings. IEEE Tel Aviv, Israel Mar. 26-30, 2000, Piscataway, NJ, USA, IEEE, US, Mar. 26, 2000, pp. 864-873, XP010376176 ISBN: 0-7803-5880-5 p. 867, col. 2, last paragraph—p. 868, col. 1, paragraph 1.
RSYNC, “Welcome to the RSYNC Web Pages,” Retrieved from the Internet URL: http://samba.anu.edu.ut.rsync/. (Retrieved on Dec. 18, 2009).
Savage, et al., “AFRAID—A Frequently Redundant Array of Independent Disks,” Jan. 22-26, 1996, pp. 1-13, USENIX Technical Conference, San Diego, California.
“Scaling Next Generation Web Infrastructure with Content-Intelligent Switching: White Paper,” Apr. 2000, p. 1-9 Alteon Web Systems, Inc.
Soltis et al., “The Design and Performance of a Shared Disk File System for IRIX,” Mar. 23-26, 1998, pp. 1-17, Sixth NASA Goddard Space Flight Center Conference on Mass Storage and Technologies in cooperation with the Fifteenth IEEE Symposium on Mass Storage Systems, University of Minnesota.
Soltis et al., “The Global File System,” Sep. 17-19, 1996, in Proceedings of the Fifth NASA Goddard Space Flight Center Conference on Mass Storage Systems and Technologies, College Park, Maryland.
Sorenson, K.M., "Installation and Administration: Kimberlite Cluster Version 1.1.0, Rev. Dec. 2000," Mission Critical Linux, http://oss.missioncriticallinux.com/kimberlite/kimberlite.pdf.
Stakutis, C., “Benefits of SAN-based file system sharing,” Jul. 2000, pp. 1-4, InfoStor, www.infostor.com, last accessed on Dec. 30, 2002.
Thekkath et al., “Frangipani: A Scalable Distributed File System,” in Proceedings of the 16th ACM Symposium on Operating Systems Principles, Oct. 1997, pp. 1-14, Association for Computing Machinery, Inc.
Tulloch, Mitch, “Microsoft Encyclopedia of Security,” 2003, pp. 218, 300-301, Microsoft Press, Redmond, Washington.
Uesugi, H., Nov. 26, 2008 amendment filed by Japanese associate in response to office action dated May 26, 2008 in corresponding Japanese patent application No. 2002-556371.
Uesugi, H., English translation of office action dated May 26, 2008 in corresponding Japanese patent application No. 2002-556371.
Uesugi, H., Jul. 15, 2008 letter from Japanese associate reporting office action dated May 26, 2008 in corresponding Japanese patent application No. 2002-556371.
“VERITAS SANPoint Foundation Suite(tm) and SANPoint Foundation Suite(tm) HA: New VERITAS Volume Management and File System Technology for Cluster Environments,” Sep. 2001, VERITAS Software Corp.
Wilkes, J., et al., “The HP AutoRAID Hierarchical Storage System,” Feb. 1996, vol. 14, No. 1, ACM Transactions on Computer Systems.
“Windows Clustering Technologies—An Overview,” Nov. 2001, Microsoft Corp., www.microsoft.com, last accessed on Dec. 30, 2002.
Zayas, E., “AFS-3 Programmer's Reference: Architectural Overview,” Transarc Corp., version 1.0 of Sep. 2, 1991, doc. No. FS-00-D160.
Provisional Applications (1)
Number Date Country
61393194 Oct 2010 US