This application relates to U.S. patent application Ser. No. 12/895,275, entitled “SYSTEMS AND METHODS FOR RESTORING A FILE,” filed on Sep. 30, 2010, which is hereby incorporated by reference herein in its entirety, including all references cited therein.
The present invention relates generally to systems and methods for maintaining a virtual failover volume of a target computing system, and more specifically, but not by way of limitation, to systems and methods for maintaining a virtual failover volume of a target computing system that may be utilized by a virtual machine to create a virtual failover computing system that approximates the configuration of the target computing system, upon the occurrence of a failover event.
Generally speaking, the systems and methods provided herein may be adapted to maintain a “ready to execute” virtual failover volume of a target computing system. The virtual failover volume may be executed by a virtual machine to assume the functionality of the target computing system upon the occurrence of a failover event.
The systems and methods may maintain the virtual failover volume in a “ready to execute” state by periodically revising a mirror of the target computing system and storing the periodically revised mirror in the virtual failover volume. The ability of the systems and methods to periodically revise the mirror of the target computing system ensures that upon the occurrence of a failover event, a virtual machine may execute the periodically revised mirror to create a virtual failover computing system that may assume the configuration of the target computing system without substantial delay.
According to exemplary embodiments, the present invention provides for a method for maintaining a virtual failover volume of a target computing system that includes: (a) periodically revising a mirror of the target computing system, according to a predetermined backup schedule, the mirror being stored on the virtual failover volume resident on an appliance that is operatively associated with the target computing system, by: (i) periodically comparing the mirror to a configuration of the target computing system to determine changed data blocks relative to the mirror; (ii) storing the changed data blocks as one or more differential files in the virtual failover volume, the one or more differential files being stored separately from the mirror; and (iii) incorporating the changed data blocks into the mirror; (b) upon the occurrence of a failover event, creating a bootable image file from at least one of the mirror and one or more differential files; and (c) booting the bootable image file via a virtual machine on the appliance to create a virtual failover computing system that substantially corresponds to the target computing system at an arbitrary point in time.
According to other embodiments, systems for maintaining a virtual failover volume of a target computing system may include: (a) a memory for storing computer readable instructions for maintaining a virtual failover volume of a file structure of a target computing system; and (b) a processor configured to execute the instructions stored in the memory to: (i) periodically revise a mirror of the target computing system, according to a predetermined backup schedule, the mirror being stored on the virtual failover volume resident on an appliance that is operatively associated with the target computing system, by: periodically comparing the mirror to a configuration of the target computing system to determine changed data blocks relative to the mirror; storing the changed data blocks as one or more differential files in the virtual failover volume, the one or more differential files being stored separately from the mirror; and incorporating the changed data blocks into the mirror; (ii) upon the occurrence of a failover event, create a bootable image file from at least one of the mirror and one or more differential files; and (iii) boot the bootable image file via a virtual machine on the appliance to create a virtual failover computing system that substantially corresponds to the target computing system at an arbitrary point in time.
In some embodiments, the present technology may be directed to non-transitory computer readable storage mediums. Each storage medium may have a computer program embodied thereon, the computer program executable by a processor in a computing system to perform a method for maintaining a virtual failover volume of a target computing system that includes: (a) periodically revising a mirror of the target computing system, according to a predetermined backup schedule, the mirror being stored on the virtual failover volume resident on an appliance that is operatively associated with the target computing system, by: (i) periodically comparing the mirror to a configuration of the target computing system to determine changed data blocks relative to the mirror; (ii) storing the changed data blocks as one or more differential files in the virtual failover volume, the one or more differential files being stored separately from the mirror; and (iii) incorporating the changed data blocks into the mirror; (b) upon the occurrence of a failover event, creating a bootable image file from at least one of the mirror and one or more differential files; and (c) booting the bootable image file via a virtual machine on the appliance to create a virtual failover computing system that substantially corresponds to the target computing system at an arbitrary point in time.
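The compare/store/incorporate cycle recited in steps (a)(i)-(iii) above may be sketched in simplified form as follows. This is an illustrative Python sketch only, not the claimed implementation; the dict-based block maps and the function name are hypothetical stand-ins for block-level volume data.

```python
# Illustrative sketch (not the patented implementation): one revision cycle
# of a block-level mirror. Block indices and contents are hypothetical.

def revise_mirror(mirror, current):
    """Compare `current` block map to `mirror`, return a differential file
    (changed blocks only), and fold those changes into the mirror."""
    # (i) determine changed data blocks relative to the mirror
    differential = {i: blk for i, blk in current.items()
                    if mirror.get(i) != blk}
    # (iii) incorporate the changed data blocks into the mirror
    mirror.update(differential)
    # (ii) the differential is stored separately from the mirror
    return differential

mirror = {0: b"boot", 1: b"data-v1", 2: b"conf"}
current = {0: b"boot", 1: b"data-v2", 2: b"conf", 3: b"new"}
diff = revise_mirror(mirror, current)
# `diff` now holds only the changed blocks; `mirror` matches `current`
```

In this sketch the differential captures only blocks 1 and 3, illustrating why differential files stay small relative to a full mirror.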
While this invention is susceptible of embodiment in many different forms, there is shown in the drawings and will herein be described in detail several specific embodiments with the understanding that the present disclosure is to be considered as an exemplification of the principles of the invention and is not intended to limit the invention to the embodiments illustrated.
Virtual failover volumes may often be utilized as redundancy mechanisms for backing up one or more target computing systems in case of failover events (e.g., minor or major failures or abnormal terminations of the target computing systems). The virtual failover volume may include an approximate copy of a configuration of the target computing system. In some embodiments, the configuration of the target computing system may include files stored on one or more hard drives, along with configuration information of the target computing system such as Internet protocol (IP) addresses, media access control (MAC) addresses, and the like. The configuration of the target computing system may additionally include other types of data that may be utilized by a virtual machine to create a virtual failover computing system that closely approximates the configuration of the target computing system.
When backing up the target computing system, the configuration of the target computing system may be transferred to a virtual failover volume according to a backup schedule.
According to some embodiments, methods for backing up the target computing system may include capturing a mirror (also known as a snapshot) of the target computing system. To save space on the virtual failover volume, rather than capturing subsequent mirrors, the systems and methods may capture differential files indicative of changes to the target computing system since the creation of the snapshot, or since the creation of a previous differential file. The differential files may be utilized to update or “revise” the mirror.
It will be understood that because of the relatively small size of differential files relative to the mirror, significant space may be saved on the virtual failover volume relative to capturing multiple mirrors. It is noteworthy that differential files may also be known as incremental files, delta files, delta increments, differential delta increments, reverse delta increments, and other permutations of the same.
It will be understood that exemplary methods for creating mirrors and differential files of target computing systems are provided in greater detail with regard to U.S. patent application Ser. No. 12/895,275, entitled “SYSTEMS AND METHODS FOR RESTORING A FILE,” filed on Sep. 30, 2010, which is hereby incorporated by reference herein in its entirety, including all references cited therein.
The systems and methods may capture the mirror of the target computing system and store the data blocks of the mirror in a virtual failover volume as a bootable image file by creating a substantially identical copy of the file structure of the target computing system at a given point in time.
As stated above, rather than capturing additional mirrors of the target storage volume, the systems and methods may capture one or more differential files indicative of changes to the target computing system at one or more points in time after the rendering of the mirror. These changes may be stored in files separate from the mirror and may be retained on the virtual failover volume for a predetermined period of time. The systems and methods may be able to utilize these differential files to walk backwards in time to recreate a virtual failover computing system indicative of the configuration of the target computing system at an arbitrary point in time in the past. It will be understood that the further back in time the systems and methods must go to recreate a virtual failover computing system of the target computing system, the longer the process becomes to launch the virtual failover computing system.
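The point-in-time recreation described above may be sketched as follows, assuming forward differentials (each differential file mapping block indices to their new contents, layered over the base mirror). The dict-based block maps, timestamps, and function name are illustrative only.

```python
# Hedged sketch: approximate the target volume's state at time `t` by
# overlaying the differential files captured up to `t` onto the base mirror.
# Differentials are assumed to be forward deltas kept in capture order.

def state_at(mirror, differentials, t):
    """differentials: list of (timestamp, {block: data}), oldest first."""
    state = dict(mirror)
    for ts, diff in differentials:
        if ts > t:
            break
        state.update(diff)   # later differentials override earlier blocks
    return state

mirror = {0: b"a", 1: b"b"}
diffs = [(1, {1: b"b1"}), (2, {0: b"a2", 2: b"c"})]
snapshot_t1 = state_at(mirror, diffs, 1)   # volume as of time 1
snapshot_t2 = state_at(mirror, diffs, 2)   # volume as of time 2
```

The sketch also illustrates the trade-off noted above: the further the chosen point is from the mirror, the more differential files must be applied before the virtual failover computing system can launch.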
If a failover event occurs before the systems and methods have updated the mirror utilizing one or more differential files, the systems and methods may boot the bootable image file and additionally read changed blocks from one or more differential files on the fly by way of a copy on write functionality to create a virtual failover computing system (e.g., rendering a mirror of the target computing system) that approximates the configuration of the target computing system.
Additionally, because the systems and methods of the present technology may utilize a virtual failover volume formatted with the New Technology File System (NTFS), the systems and methods may be adapted to modify the allocation strategy utilized by the NTFS file structure to more efficiently utilize the virtual storage volume.
Referring now to the drawings,
According to some embodiments, each appliance 110 may be associated with a remote storage medium 125 that facilitates long-term storage of at least a portion of the data (e.g., differential files) from the appliances 110 in one or more virtual failover volumes 130.
Generally speaking, the appliance 110 provides local backup services for maintaining a virtual failover volume of the target computing system 105 associated therewith. That is, the appliance 110 may capture a mirror indicative of the target computing system 105 (e.g., storage mediums, configuration information, etc.) and periodically capture differential files indicative of changes to the target computing system 105 relative to the mirror. Upon the occurrence of a failover event (e.g., full or partial failure or malfunction of the target computing system), the appliance 110 may boot the virtual failover volume in a virtual machine as a virtual failover computing system that approximates the target computing system 105 at an arbitrary point in time.
The appliance 110 may include computer readable instructions that, when executed by a processor of the appliance 110, are adapted to maintain a virtual failover volume of the target computing system 105 associated therewith.
According to some exemplary embodiments, both the target computing system 105 and the appliance 110 may be generally referred to as “a computing system” such as a computing system 500 as disclosed with respect to
Referring now to
According to some embodiments, the application 200 may generally include a disk maintenance module 205, an obtain mirror module 210, an analysis module 215, a revise mirror module 220, a render mirror module 225, a resparsification module 230, and a virtual machine 235. It is noteworthy that the application 200 may be composed of more or fewer modules and engines (or combinations of the same) and still fall within the scope of the present technology.
The disk maintenance module 205 may be adapted to create a virtual failover volume 130 on the appliance 110. According to some embodiments, the disk maintenance module 205 may allocate two terabytes of space for the virtual failover volume 130 for each drive associated with the target computing system 105. In some applications, the disk maintenance module 205 may be adapted to mount the virtual failover volume 130 and format the virtual failover volume 130 utilizing a new technology file system (NTFS) file system. While the disk maintenance module 205 has been disclosed as allocating and formatting a two terabyte virtual failover volume 130 utilizing a NTFS file system, other sizes and formatting procedures that would be known to one of ordinary skill in the art may likewise be utilized in accordance with the present technology.
In some embodiments, the virtual failover volume 130 may include a sparse file. Generally speaking, a sparse file may include a sparse file structure that is adapted to hold, for example, two terabytes worth of data. In practice, while two terabytes worth of space has been allocated, only a portion of the virtual failover volume 130 may actually be filled with data blocks. The rest of the data blocks of the virtual failover volume 130 may be empty or “free,” in that they include no actual data other than metadata that may inform the NTFS file system that the blocks are available for writing. When read by the NTFS file system, the NTFS file system may transparently convert metadata representing empty blocks into free blocks filled with zero bytes at runtime.
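The sparse-allocation behavior described above may be demonstrated with a small sketch. The 1 MiB size here is an illustrative stand-in for the two-terabyte volume, and the physical-allocation result depends on the host file system supporting sparse files; none of this is the appliance's actual code.

```python
# Sketch: a file extended past its written data occupies its logical size
# in metadata only; the "hole" consumes little or no physical space on a
# sparse-capable file system.
import os
import tempfile

LOGICAL_SIZE = 1 << 20   # 1 MiB for the sketch (the text describes 2 TB)

fd, path = tempfile.mkstemp()
try:
    os.truncate(fd, LOGICAL_SIZE)            # extend without writing data
    logical = os.fstat(fd).st_size           # reported (logical) size
    physical = os.fstat(fd).st_blocks * 512  # bytes actually allocated
finally:
    os.close(fd)
    os.unlink(path)
# `logical` equals 1 MiB; `physical` is typically far smaller, because the
# free blocks exist only as metadata until real data is written into them.
```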
Referring now to
In addition to the backing store 135, the virtual failover volume 130 may include additional storage space for one or more differential files in a differential block store 140. For example, the differential block store 140 may include differential files 140b, 140d, and 140f that are indicative of changes to one or more files of the target computing system 105 relative to the backing store 135.
It will be understood that the differential block store 140 may be stored separately from the backing store 135 on the virtual failover volume 130, along with sufficient working space to accommodate a copy of the set of differential files created during subsequent backups of the target computing system 105. Moreover, the virtual failover volume 130 may also include additional operating space (not shown) for the virtual machine 235 to operate at a reasonable level (e.g., files created or modified by the virtual failover computing system) for a given period of time, which in some cases is approximately one month.
It will be understood that because direct modification of the backing store 135 via the virtual machine 235 may lead to corruption of the backing store 135, the differential files may be stored separately from the backing store 135 in the differential block store 140. Therefore, the analysis module 215 may be adapted to utilize a copy on write functionality to store differential files separately from the backing store 135. An exemplary “write” operation 145 illustrates a differential file 140f being written into the differential block store 140.
In some applications, changed data blocks included in the one or more differential files may be incorporated into the backing store 135 via the revise mirror module 220, as will be discussed in greater detail below. However, it will be understood that once the virtual machine 235 has booted the bootable image file of the virtual failover volume 130, the application 200 may read (rather than directly open) data blocks from the backing store 135 and the one or more differential files independently from one another, utilizing a copy on write functionality. Utilization of the copy on write functionality may prevent changes to the backing store 135 that may occur if the backing store 135 is opened by the NTFS file system. It is noteworthy that directly opening the backing store 135 may modify the backing store 135 and compromise the integrity of the backing store 135.
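The copy on write read path described above may be sketched as follows: a block is served from the newest differential layer that contains it, falling back to the read-only backing store, while any blocks written by the running virtual machine land in a separate overlay so the backing store is never opened for modification. Class and attribute names are hypothetical.

```python
# Hedged sketch of copy-on-write reads over a mirror plus differential files.

class CowVolume:
    def __init__(self, backing, differentials):
        self.backing = backing              # the mirror; never modified
        self.layers = list(differentials)   # differential files, oldest first
        self.overlay = {}                   # blocks written by the VM

    def read(self, block):
        if block in self.overlay:           # VM's own writes win first
            return self.overlay[block]
        for layer in reversed(self.layers): # newest differential wins next
            if block in layer:
                return layer[block]
        return self.backing.get(block)      # fall back to the backing store

    def write(self, block, data):
        self.overlay[block] = data          # copy-on-write: backing untouched

vol = CowVolume({0: b"base0", 1: b"base1"}, [{1: b"diff1"}])
vol.write(0, b"vm0")
```

Because reads merge the layers without ever writing to the backing store, the integrity concern noted above (direct opening modifying the backing store) does not arise in this scheme.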
Upon an initial preparation of the virtual failover volume 130 by the disk maintenance module 205, each of the blocks of the backing store 135 is a “free” or sparse block such that the obtain mirror module 210 may move or “copy” the blocks of data from the target computing system 105 to the sparse blocks of the backing store 135. Exemplary empty or “free” blocks of the backing store 135 are shown as free blocks 150. Moreover, the backing store 135 may include occupied blocks such as 135a and 135e indicative of data blocks copied from the target computing system 105.
As stated above, the obtain mirror module 210 may be executed to copy data blocks from the target computing system 105 into the backing store 135 to occupy at least a portion of the free blocks 150 to create a mirror or “snapshot” of the target computing system 105. It will be understood that the backing store 135 may be stored as a bootable image file, such as a Windows® root file system, that may be executed by the virtual machine 235. In some embodiments, the virtual machine 235 may utilize a corresponding Windows® operating system to boot the bootable image file.
The analysis module 215 may be executed periodically (typically according to a backup schedule) to determine the changed data blocks of the target computing system 105 relative to the data blocks of the backing store 135. The determined changed data blocks may be stored in the differential block store 140 as one or more differential files. In some embodiments, each execution of the analysis module 215 that determines changed blocks results in the creation of a separate differential file.
Changed blocks stored in the differential block store 140 that are obtained by the analysis module 215 may be utilized by the revise mirror module 220 to revise the mirror (e.g., backing store 135) of the target computing system 105. It will be understood that the process of revising the mirror may occur according to a predetermined backup schedule.
Upon the occurrence of a failover event, the render mirror module 225 may render a bootable image file from the mirror alone, or from the mirror together with one or more differential files, to create a virtual failover computing system that approximates the configuration of the target computing system 105 at an arbitrary point in time. In contrast to backup methods that store data blocks to a backup storage medium in an unorganized manner (e.g., not substantially corresponding to a root file system of the target computing system), the backup methods utilized by the appliance 110 (e.g., storing the mirror and differential files in a virtual failover volume 130) allow for the quick and efficient rendering of bootable disk images.
It will be understood that these bootable disk images may be utilized by the virtual machine 235 to launch a virtual failover computing system that approximates the configuration of the target computing system 105 at an arbitrary point in time without substantial delay caused by copying all of (or even a substantial portion) the backed-up data blocks from an unorganized state to a bootable image file that approximates the root file system of the target computing system upon the occurrence of the failover event.
According to some embodiments, to facilitate rapid failover to the virtual machine 235, the application 200 may be adapted to utilize a revisable differential file. As such, the analysis module 215 may be adapted to periodically update a revisable differential file. In some embodiments, the analysis module 215 may update the revisable differential file by comparing the revisable differential file to the current configuration of the target computing system to determine changed data blocks relative to the revisable differential file. Next, the analysis module 215 may combine the determined changed data blocks into the revisable differential file to create an updated differential file that takes the place of the revisable differential file. Moreover, rather than discarding the revisable differential file, it may be stored in a differential file archive located on at least one of the remote storage medium 125 or the virtual failover volume 130.
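The revisable differential file scheme above may be reduced to a minimal sketch: newly changed blocks are folded into the single revisable differential, and the superseded copy is archived rather than discarded. The dict and list structures are hypothetical stand-ins for the stored files.

```python
# Hedged sketch of maintaining a single revisable differential file.

def update_revisable(revisable, changed_blocks, archive):
    """Archive the prior differential, then fold in the new changed blocks."""
    archive.append(dict(revisable))    # retain the superseded differential
    revisable.update(changed_blocks)   # the updated file takes its place
    return revisable

archive = []
revisable = {1: b"x"}
update_revisable(revisable, {2: b"y"}, archive)    # first revision
update_revisable(revisable, {1: b"x2"}, archive)   # second revision
```

Keeping a single cumulative differential means failover needs only the mirror plus one file, while the archive preserves the history needed for earlier points in time.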
As such, the virtual failover volume 130 may be kept in a “ready to execute” format such that upon the occurrence of a failover event, the render mirror module 225 may be executed to render the mirror and the revisable differential file to create a bootable image file that is utilized by the virtual machine 235 to establish a virtual failover computing system that substantially corresponds to the configuration of the target computing system 105 as it existed right before the occurrence of the failover event.
During operation of the virtual machine 235, if the virtual machine 235 reads a file from the virtual failover volume 130, the virtual machine 235 may utilize data blocks from the differential block store 140, in addition to data blocks from the backing store 135. The virtual machine 235 may utilize copy on write functionalities to obtain data blocks from the backing store 135 along with data blocks from the differential block store 140 that are situated temporally between the mirror and an arbitrary point in time. The combination of the data blocks allows the virtual machine 235 to recreate the file approximately as it appeared on the target computing system 105 at the arbitrary point in time.
With particular emphasis on
In addition to launching the virtual machine 235 to create a virtual failover computing system that approximates the configuration of the target computing system 105, the virtual failover computing system may utilize additional configuration details of the target computing system 105, such as a media access control (MAC) address, an Internet protocol (IP) address, or other suitable information indicative of the location or identification of the target computing system 105. The virtual machine 235 may also update registry entries or perform any other necessary startup operations such that the virtual failover computing system may function substantially similarly to the target computing system 105.
During operation, the virtual machine 235 may also create, delete, and modify files just as the target computing system 105 would, although changed data blocks indicative of the modified files may be stored in the additional operating space created in the virtual failover volume 130. Moreover, data blocks may be deleted from the virtual failover volume 130.
Because the virtual failover volume 130 may utilize the NTFS file system, allocation strategies may cause the virtual machine 235 to overlook deleted blocks that have not been converted to free blocks by the NTFS file system. For example, modifications to the backing store 135 by the revise mirror module 220 and routine deletion of differential files from the differential block store 140 may result in deleted blocks. It will be understood that a deleted block is a data block that has been marked for deletion by the NTFS file system but that still retains at least a portion of the deleted data.
Allocation strategies of the NTFS file system may cause data blocks that are being written into the virtual failover volume 130 to be written into the next available free block(s), leading to desparsification. To counteract this desparsification, the resparsification module 230 may be adapted to resparsify the virtual failover volume 130. In some embodiments, the NTFS file system may notify the underlying XFS file system of the appliance 110 (which holds the backing store 135), to resparsify the one or more deleted blocks, returning them to the sparse state.
The resparsification module 230 may be adapted to perform a resparsification operation 305b on the backing store 300. For example, resparsification module 230 may be adapted to cause the NTFS file system to notify the underlying XFS file system of the appliance 110 (which holds the backing store 135), to resparsify the deleted blocks 310b, 310d, and 310e. As such, data may be written to the resparsified blocks 310b, 310d, 310e, desparsifying only one data block 310f.
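The resparsification pass above may be sketched in memory as follows. Blocks that the guest file system has marked deleted are returned to the sparse (hole) state so later writes reuse them; a real appliance would instead punch holes in the underlying XFS file (e.g., via fallocate with FALLOC_FL_PUNCH_HOLE). The list-based block map and indices are illustrative only.

```python
# In-memory sketch of resparsifying deleted blocks.

HOLE = None  # a sparse block holds no data, only metadata

def resparsify(blocks, deleted):
    """Return each deleted block to the sparse state; report free count."""
    for i in deleted:
        blocks[i] = HOLE
    return sum(1 for b in blocks if b is HOLE)

store = [b"a", b"old", b"b", b"old", b"old", b"c"]
free_count = resparsify(store, deleted=[1, 3, 4])
# subsequent writes can now reuse blocks 1, 3, and 4 instead of
# desparsifying additional fresh blocks
```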
While the resparsification module 230 has been disclosed within the context of maintaining virtual failover volumes of target computing systems, the applicability of resparsifying a virtual volume may extend to other systems and methods that would benefit from the ability of one or more virtual machines to efficiently store data on a virtual volume.
In additional embodiments, the application 200 may be adapted to optimize the virtual failover volume 130 by limiting metadata updates to the virtual failover volume 130 by the NTFS file system.
The backing store 135 may be utilized by the virtual machine 235 as a file (e.g., a bootable disk image) that may be stored in an XFS file system on the appliance 110. Whenever data blocks are written to the appliance 110, the XFS file system may commit a metadata change to record an “mtime” (e.g., modification time) for a data block. Moreover, because some virtual machines 235 may utilize a semi-synchronous NTFS file system within the backing store 135, a very high quantity of 512-byte clusters may be written, each invoking a metadata update to the virtual machine XFS file system. These metadata updates cause the virtual machine XFS file system to serialize inputs and outputs behind transactional journal commits, degrading the performance of the virtual machine 235.
To alleviate these “mtime” updates, the virtual machine 235 may be adapted to open files (comprised of data blocks or differential data blocks) using a virtual machine XFS internal kernel call that may open a file by way of a handle. The virtual machine 235 may use this method for both backup and restore functionality. A file opened utilizing this method may allow the virtual machine 235 to omit “mtime” updates, thereby reducing journal commits and significantly improving write performance.
Moreover, the virtual machine 235 may utilize memory-efficient data block locking functionalities for asynchronous input and output actions. The default locking functionality otherwise utilized by the virtual machine 235 may be inefficient in its memory utilization, especially when locking data blocks during asynchronous input and/or output actions.
Therefore, the virtual machines 235 utilized in accordance with the present technology may be adapted to utilize alternate lock management systems that create locks as needed and store the locked data blocks in a “splay tree” while active. The splay tree is a standard data structure that provides the virtual machine 235 with rapid lookup of a node while moving recently accessed data blocks near the root node. By storing only the needed data locks in a small, fast splay tree, the memory footprint of the appliance 110 may be reduced without an associated compromise of lookup speed. It will be understood that large virtual failover volumes may be accessed using this method.
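The splay-tree lock manager described above may be sketched as follows (a textbook splay tree, not the appliance's actual code). Locks are created on demand, and each access rotates the touched lock toward the root so repeated accesses to nearby blocks stay fast. All class and function names are hypothetical.

```python
# Illustrative splay-tree lock manager sketch.

class Node:
    def __init__(self, key):
        self.key = key
        self.left = self.right = None

def _rotate_right(x):
    y = x.left
    x.left, y.right = y.right, x
    return y

def _rotate_left(x):
    y = x.right
    x.right, y.left = y.left, x
    return y

def splay(root, key):
    """Move the node with `key` (or the last node touched) to the root."""
    if root is None or root.key == key:
        return root
    if key < root.key:
        if root.left is None:
            return root
        if key < root.left.key:                      # zig-zig
            root.left.left = splay(root.left.left, key)
            root = _rotate_right(root)
        elif key > root.left.key:                    # zig-zag
            root.left.right = splay(root.left.right, key)
            if root.left.right is not None:
                root.left = _rotate_left(root.left)
        return root if root.left is None else _rotate_right(root)
    else:
        if root.right is None:
            return root
        if key > root.right.key:                     # zag-zag
            root.right.right = splay(root.right.right, key)
            root = _rotate_left(root)
        elif key < root.right.key:                   # zag-zig
            root.right.left = splay(root.right.left, key)
            if root.right.left is not None:
                root.right = _rotate_right(root.right)
        return root if root.right is None else _rotate_left(root)

class LockTree:
    """Create block locks as needed; recently used locks sit near the root."""
    def __init__(self):
        self.root = None

    def acquire(self, block_id):
        if self.root is None:
            self.root = Node(block_id)
            return self.root
        self.root = splay(self.root, block_id)
        if self.root.key == block_id:                # lock already exists
            return self.root
        node = Node(block_id)                        # create lock on demand
        if block_id < self.root.key:
            node.right, node.left = self.root, self.root.left
            self.root.left = None
        else:
            node.left, node.right = self.root, self.root.right
            self.root.right = None
        self.root = node
        return node
```

Because only the locks actually in use are kept in the tree, the structure stays small even for very large volumes, which is the memory-footprint benefit described above.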
According to some embodiments, upon repair of the target computing system 105, also known as a “bare metal restore,” the virtual machine 235 may be paused or “locked.” Pausing the virtual machine 235 preserves the state of the virtual failover volume 130. Moreover, the paused virtual failover volume 130 may be copied directly to the repaired target computing system, allowing for a virtual-to-physical conversion of the virtual failover volume 130 to the target storage medium 120 of the repaired target computing system.
Upon the occurrence of the virtual to physical operation, the bootable image file created from the virtual failover volume 130 may be discarded and the virtual failover volume 130 may be returned to a data state that approximates the data state of the virtual failover volume 130 before the bootable image file was created by the obtain mirror module 210.
The virtual failover volume 130 may then be reutilized with the repaired target computing system.
Referring now to
In some embodiments, the method 400 may include the step 415 of periodically obtaining a mirror of a target computing system on a virtual failover volume as a bootable image file. The step 415 may include periodically comparing the mirror to a configuration of the target computing system to determine changed data blocks relative to the mirror, and storing the changed data blocks as one or more differential files in the virtual failover volume, the one or more differential files being stored separately from the mirror. It will be understood that the periodically obtained mirrors may be stored on a virtual volume as a Windows® root file system.
Next, the method 400 may include the step 420 of receiving information indicative of a failover event (e.g., failure of the target computing system). Upon information indicative of a failover event, the method 400 may include the step 425 of rendering a bootable image file from the mirror that has been periodically revised.
Next, the method 400 may include the step 430 of booting the bootable image file via a virtual machine to create a virtual failover computing system. It will be understood that the configuration of the virtual failover computing system may closely approximate the configuration of the target computing system at the failover event.
In some embodiments, the method 400 may include an optional step 435 of rendering a bootable image file that approximates the configuration of the target computing system at an arbitrary point in time utilizing one or more mirrors and one or more differential files, rather than only utilizing the mirror. The step 435 may include walking the mirror back in time utilizing the one or more differential files to recreate the configuration of the target computing system as it was at the arbitrary point in time.
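The "walking back in time" of optional step 435 may be sketched as follows, assuming reverse differentials: each retained differential records the prior contents of the blocks that a revision overwrote (with None marking a block that did not yet exist). Applying them newest-first rolls the revised mirror back to the arbitrary point. The structures and timestamps are illustrative only.

```python
# Hedged sketch: undo revisions made after time `t` using reverse deltas.

def walk_back(mirror, reverse_diffs, t):
    """reverse_diffs: list of (timestamp, {block: prior_data_or_None}),
    oldest first; each entry records what a revision overwrote."""
    state = dict(mirror)
    for ts, prior in reversed(reverse_diffs):   # newest revision first
        if ts <= t:
            break                               # reached the target time
        for block, data in prior.items():
            if data is None:
                state.pop(block, None)          # block did not exist at t
            else:
                state[block] = data             # restore prior contents
    return state

mirror = {0: b"a2", 1: b"b", 2: b"c"}           # current (revised) mirror
rdiffs = [(1, {1: b"b0"}), (2, {0: b"a1", 2: None})]
earlier = walk_back(mirror, rdiffs, t=1)        # undo only the t=2 revision
```

The sketch also shows why launching becomes slower the further back the arbitrary point lies: more differential files must be replayed before the bootable image is ready.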
The components shown in
Mass storage device 530, which may be implemented with a magnetic disk drive or an optical disk drive, is a non-volatile storage device for storing data and instructions for use by processor unit 510. Mass storage device 530 may store the system software for implementing embodiments of the present invention for purposes of loading that software into main memory 520.
Portable storage device 540 operates in conjunction with a portable non-volatile storage medium, such as a floppy disk, compact disk, digital video disc, or USB storage device, to input and output data and code to and from the computer system 500 of
Input devices 560 provide a portion of a user interface. Input devices 560 may include an alphanumeric keypad, such as a keyboard, for inputting alpha-numeric and other information, or a pointing device, such as a mouse, a trackball, stylus, or cursor direction keys. Additionally, the system 500 as shown in
Display system 570 may include a liquid crystal display (LCD) or other suitable display device. Display system 570 receives textual and graphical information, and processes the information for output to the display device.
Peripherals 580 may include any type of computer support device to add additional functionality to the computer system. Peripheral device(s) 580 may include a modem or a router.
The components provided in the computer system 500 of
It is noteworthy that any hardware platform suitable for performing the processing described herein is suitable for use with the technology. Computer-readable storage media refer to any medium or media that participate in providing instructions to a central processing unit (CPU), a processor, a microcontroller, or the like. Such media may take forms including, but not limited to, non-volatile and volatile media such as optical or magnetic disks and dynamic memory, respectively. Common forms of computer-readable storage media include a floppy disk, a flexible disk, a hard disk, magnetic tape, any other magnetic storage medium, a CD-ROM disk, digital video disk (DVD), any other optical storage medium, RAM, PROM, EPROM, a FLASHEPROM, any other memory chip or cartridge.
While various embodiments have been described above, it should be understood that they have been presented by way of example only, and not limitation. The descriptions are not intended to limit the scope of the technology to the particular forms set forth herein. Thus, the breadth and scope of a preferred embodiment should not be limited by any of the above-described exemplary embodiments. It should be understood that the above description is illustrative and not restrictive. To the contrary, the present descriptions are intended to cover such alternatives, modifications, and equivalents as may be included within the spirit and scope of the technology as defined by the appended claims and otherwise appreciated by one of ordinary skill in the art. The scope of the technology should, therefore, be determined not with reference to the above description, but instead should be determined with reference to the appended claims along with their full scope of equivalents.
Number | Name | Date | Kind |
---|---|---|---|
5574905 | deCarmo | Nov 1996 | A |
6122629 | Walker et al. | Sep 2000 | A |
6205527 | Goshey | Mar 2001 | B1 |
6233589 | Balcha et al. | May 2001 | B1 |
6272492 | Kay | Aug 2001 | B1 |
6411985 | Fujita et al. | Jun 2002 | B1 |
6604236 | Draper et al. | Aug 2003 | B1 |
6629110 | Cane et al. | Sep 2003 | B2 |
6651075 | Kusters et al. | Nov 2003 | B1 |
6971018 | Witt et al. | Nov 2005 | B1 |
7024581 | Wang | Apr 2006 | B1 |
7085904 | Mizuno et al. | Aug 2006 | B2 |
7266655 | Escabi, II et al. | Sep 2007 | B1 |
7401192 | Stakutis et al. | Jul 2008 | B2 |
7406488 | Stager et al. | Jul 2008 | B2 |
7546323 | Timmins et al. | Jun 2009 | B1 |
7620765 | Ohr et al. | Nov 2009 | B1 |
7647338 | Lazier et al. | Jan 2010 | B2 |
7676763 | Rummel | Mar 2010 | B2 |
7730425 | de los Reyes et al. | Jun 2010 | B2 |
7743038 | Goldick | Jun 2010 | B1 |
7752487 | Feeser et al. | Jul 2010 | B1 |
7769731 | O'Brien | Aug 2010 | B2 |
7797582 | Stager et al. | Sep 2010 | B1 |
7809688 | Cisler et al. | Oct 2010 | B2 |
7832008 | Kraemer | Nov 2010 | B1 |
7844850 | Yasuzato | Nov 2010 | B2 |
7873601 | Kushwah | Jan 2011 | B1 |
7930275 | Chen et al. | Apr 2011 | B2 |
7966293 | Owara et al. | Jun 2011 | B1 |
8037345 | Iyer et al. | Oct 2011 | B1 |
8046632 | Miwa et al. | Oct 2011 | B2 |
8060476 | Afonso et al. | Nov 2011 | B1 |
8099391 | Monckton | Jan 2012 | B1 |
8099572 | Arora | Jan 2012 | B1 |
8117163 | Brown et al. | Feb 2012 | B2 |
8200926 | Stringham | Jun 2012 | B1 |
8224935 | Bandopadhyay et al. | Jul 2012 | B1 |
8244914 | Nagarkar | Aug 2012 | B1 |
8245156 | Mouilleseaux et al. | Aug 2012 | B2 |
8260742 | Cognigni et al. | Sep 2012 | B2 |
8279174 | Jee et al. | Oct 2012 | B2 |
8296410 | Myhill et al. | Oct 2012 | B1 |
8321688 | Auradkar et al. | Nov 2012 | B2 |
8332442 | Greene | Dec 2012 | B1 |
8352717 | Campbell et al. | Jan 2013 | B2 |
8381133 | Iwema et al. | Feb 2013 | B2 |
8402087 | O'Shea et al. | Mar 2013 | B2 |
8407190 | Prahlad et al. | Mar 2013 | B2 |
8412680 | Gokhale et al. | Apr 2013 | B1 |
8504785 | Clifford et al. | Aug 2013 | B1 |
8549432 | Warner | Oct 2013 | B2 |
8572337 | Gokhale et al. | Oct 2013 | B1 |
8589350 | Lalonde et al. | Nov 2013 | B1 |
8589913 | Jelvis et al. | Nov 2013 | B2 |
8600947 | Freiheit et al. | Dec 2013 | B1 |
8601389 | Schulz et al. | Dec 2013 | B2 |
8606752 | Beatty et al. | Dec 2013 | B1 |
8639917 | Ben-Shaul et al. | Jan 2014 | B1 |
8676273 | Fujisaki | Mar 2014 | B1 |
8886611 | Caputo | Nov 2014 | B2 |
8924360 | Caputo | Dec 2014 | B1 |
8954544 | Edwards | Feb 2015 | B2 |
9104621 | Caputo | Aug 2015 | B1 |
20010034737 | Cane | Oct 2001 | A1 |
20010056503 | Hibbard | Dec 2001 | A1 |
20020169740 | Korn | Nov 2002 | A1 |
20030011638 | Chung | Jan 2003 | A1 |
20030158873 | Sawdon et al. | Aug 2003 | A1 |
20030208492 | Winiger et al. | Nov 2003 | A1 |
20040044707 | Richard | Mar 2004 | A1 |
20040073560 | Edwards | Apr 2004 | A1 |
20040093474 | Lin et al. | May 2004 | A1 |
20040233924 | Bilak et al. | Nov 2004 | A1 |
20040260973 | Michelman | Dec 2004 | A1 |
20050010835 | Childs | Jan 2005 | A1 |
20050027748 | Kisley | Feb 2005 | A1 |
20050154937 | Achiwa | Jul 2005 | A1 |
20050171979 | Stager et al. | Aug 2005 | A1 |
20050223043 | Randal et al. | Oct 2005 | A1 |
20050278583 | Lennert et al. | Dec 2005 | A1 |
20050278647 | Leavitt et al. | Dec 2005 | A1 |
20060013462 | Sadikali | Jan 2006 | A1 |
20060047720 | Kulkarni et al. | Mar 2006 | A1 |
20060064416 | Sim-Tang | Mar 2006 | A1 |
20060224636 | Kathuria et al. | Oct 2006 | A1 |
20070033301 | Aloni et al. | Feb 2007 | A1 |
20070112895 | Ahrens et al. | May 2007 | A1 |
20070176898 | Suh | Aug 2007 | A1 |
20070180207 | Garfinkle | Aug 2007 | A1 |
20070204166 | Tome et al. | Aug 2007 | A1 |
20070208918 | Harbin et al. | Sep 2007 | A1 |
20070220029 | Jones et al. | Sep 2007 | A1 |
20070226400 | Tsukazaki | Sep 2007 | A1 |
20070233699 | Taniguchi et al. | Oct 2007 | A1 |
20070250302 | Xu | Oct 2007 | A1 |
20070260842 | Faibish et al. | Nov 2007 | A1 |
20070276916 | McLoughlin | Nov 2007 | A1 |
20070283017 | Anand et al. | Dec 2007 | A1 |
20070283343 | Aridor et al. | Dec 2007 | A1 |
20070288525 | Stakutis et al. | Dec 2007 | A1 |
20070288533 | Srivastava et al. | Dec 2007 | A1 |
20070294321 | Midgley et al. | Dec 2007 | A1 |
20080005468 | Faibish et al. | Jan 2008 | A1 |
20080010422 | Suzuki | Jan 2008 | A1 |
20080027998 | Hara | Jan 2008 | A1 |
20080036743 | Westerman et al. | Feb 2008 | A1 |
20080082310 | Sandorfi et al. | Apr 2008 | A1 |
20080141018 | Tanaka et al. | Jun 2008 | A1 |
20080162590 | Kundu et al. | Jul 2008 | A1 |
20080162607 | Torii et al. | Jul 2008 | A1 |
20080201315 | Lazier et al. | Aug 2008 | A1 |
20080229050 | Tillgren | Sep 2008 | A1 |
20080307345 | Hart et al. | Dec 2008 | A1 |
20080307527 | Kaczmarski et al. | Dec 2008 | A1 |
20090164527 | Spektor et al. | Jun 2009 | A1 |
20090185500 | Mower et al. | Jul 2009 | A1 |
20090216973 | Nakajima et al. | Aug 2009 | A1 |
20090309849 | Iwema et al. | Dec 2009 | A1 |
20090319653 | Lorenz et al. | Dec 2009 | A1 |
20090327964 | Mouilleseaux et al. | Dec 2009 | A1 |
20100077165 | Lu | Mar 2010 | A1 |
20100095077 | Lockwood | Apr 2010 | A1 |
20100104105 | Schmidt et al. | Apr 2010 | A1 |
20100107155 | Banerjee et al. | Apr 2010 | A1 |
20100114832 | Lillibridge et al. | May 2010 | A1 |
20100165947 | Taniuchi et al. | Jul 2010 | A1 |
20100179973 | Carruzzo | Jul 2010 | A1 |
20100192103 | Cragun et al. | Jul 2010 | A1 |
20100205152 | Ansari et al. | Aug 2010 | A1 |
20100228999 | Maheshwari et al. | Sep 2010 | A1 |
20100235831 | Dittmer | Sep 2010 | A1 |
20100262637 | Akagawa | Oct 2010 | A1 |
20100268689 | Gates et al. | Oct 2010 | A1 |
20100318748 | Ko et al. | Dec 2010 | A1 |
20100325377 | Lango et al. | Dec 2010 | A1 |
20100332454 | Prahlad | Dec 2010 | A1 |
20110041004 | Miwa et al. | Feb 2011 | A1 |
20110047405 | Marowsky-Bree et al. | Feb 2011 | A1 |
20110055399 | Tung et al. | Mar 2011 | A1 |
20110055471 | Thatcher et al. | Mar 2011 | A1 |
20110055500 | Sasson et al. | Mar 2011 | A1 |
20110082998 | Boldy et al. | Apr 2011 | A1 |
20110106768 | Khanzode et al. | May 2011 | A1 |
20110154268 | Trent, Jr. et al. | Jun 2011 | A1 |
20110218966 | Barnes | Sep 2011 | A1 |
20110238937 | Murotani et al. | Sep 2011 | A1 |
20110264785 | Newman et al. | Oct 2011 | A1 |
20110265143 | Grube et al. | Oct 2011 | A1 |
20110302502 | Hart et al. | Dec 2011 | A1 |
20120013540 | Hogan | Jan 2012 | A1 |
20120065802 | Seeber et al. | Mar 2012 | A1 |
20120084501 | Watanabe et al. | Apr 2012 | A1 |
20120124307 | Ashutosh et al. | May 2012 | A1 |
20120130956 | Caputo | May 2012 | A1 |
20120131235 | Nageshappa et al. | May 2012 | A1 |
20120179655 | Beatty et al. | Jul 2012 | A1 |
20120204060 | Swift et al. | Aug 2012 | A1 |
20120210398 | Triantafillos et al. | Aug 2012 | A1 |
20130018946 | Brown et al. | Jan 2013 | A1 |
20130024426 | Flowers et al. | Jan 2013 | A1 |
20130036095 | Titchener et al. | Feb 2013 | A1 |
20130091183 | Edwards et al. | Apr 2013 | A1 |
20130091471 | Gutt et al. | Apr 2013 | A1 |
20130166511 | Ghatty et al. | Jun 2013 | A1 |
20130238752 | Park et al. | Sep 2013 | A1 |
20130318046 | Clifford et al. | Nov 2013 | A1 |
20140006858 | Helfman et al. | Jan 2014 | A1 |
20140032498 | Lalonde et al. | Jan 2014 | A1 |
20140047081 | Edwards | Feb 2014 | A1 |
20140053022 | Forgette et al. | Feb 2014 | A1 |
20140089619 | Khanna et al. | Mar 2014 | A1 |
20140149358 | Aphale et al. | May 2014 | A1 |
20140189680 | Kripalani | Jul 2014 | A1 |
20140303961 | Leydon et al. | Oct 2014 | A1 |
20150046404 | Caputo | Feb 2015 | A1 |
20150095691 | Edwards | Apr 2015 | A1 |
Entry |
---|
Caputo, “Systems and Methods for Restoring a File”, U.S. Appl. No. 12/895,275, filed Sep. 30, 2010. |
Notice of Allowance, mailed Sep. 12, 2013, U.S. Appl. No. 13/437,738, filed Apr. 2, 2012. |
Office Action, mailed Apr. 10, 2014, U.S. Appl. No. 13/570,161, filed Aug. 8, 2012. |
Notice of Allowance, mailed Sep. 26, 2014, U.S. Appl. No. 12/895,275, filed Sep. 30, 2010. |
Notice of Allowance, mailed Sep. 15, 2014, U.S. Appl. No. 13/363,234, filed Jan. 31, 2012. |
Notice of Allowance, mailed Oct. 20, 2014, U.S. Appl. No. 13/570,161, filed Aug. 8, 2012. |
Non-Final Office Action, mailed Jul. 28, 2014, U.S. Appl. No. 13/671,498, filed Nov. 7, 2012. |
Final Office Action, mailed May 20, 2014, U.S. Appl. No. 13/633,695, filed Oct. 2, 2012. |
Corrected Notice of Allowability, mailed Nov. 3, 2014, U.S. Appl. No. 13/570,161, filed Aug. 8, 2012. |
Corrected Notice of Allowability, mailed Dec. 30, 2014, U.S. Appl. No. 13/570,161, filed Aug. 8, 2012. |
Non-Final Office Action, mailed Nov. 5, 2014, U.S. Appl. No. 13/789,578, filed Mar. 7, 2013. |
Non-Final Office Action, mailed Nov. 12, 2014, U.S. Appl. No. 14/037,231, filed Sep. 25, 2013. |
Final Office Action, mailed Feb. 24, 2015, U.S. Appl. No. 13/671,498, filed Nov. 7, 2012. |
Non-Final Office Action, mailed Feb. 10, 2015, U.S. Appl. No. 13/789,565, filed Mar. 7, 2013. |
Final Office Action, mailed Apr. 1, 2015, U.S. Appl. No. 14/037,231, filed Sep. 25, 2013. |
Li et al., “Efficient File Replication,” U.S. Appl. No. 13/671,498, filed Nov. 7, 2012. |
Notice of Allowance, mailed Sep. 8, 2015, U.S. Appl. No. 14/037,231, filed Sep. 25, 2013. |
Notice of Allowance, mailed Oct. 21, 2015, U.S. Appl. No. 13/789,578, filed Mar. 7, 2013. |
Non-Final Office Action, mailed Oct. 30, 2015, U.S. Appl. No. 13/789,565, filed Mar. 7, 2013. |