Forming a protection domain in a storage architecture

Information

  • Patent Grant
  • Patent Number
    9,678,680
  • Date Filed
    Monday, March 30, 2015
  • Date Issued
    Tuesday, June 13, 2017
Abstract
In one aspect, a method includes generating a plurality of protection domains of software-defined storage, generating a volume in each protection domain and exposing the volumes as devices in a storage architecture which generates a RAID protection over the exposed volumes. In another aspect, an apparatus includes electronic hardware circuitry configured to generate a plurality of protection domains of software-defined storage, generate a volume in each protection domain and expose the volumes as devices in a storage architecture which generates a RAID protection over the exposed volumes. In a further aspect, an article includes a non-transitory computer-readable medium that stores computer-executable instructions. The instructions cause a machine to generate a plurality of protection domains of software-defined storage, generate a volume in each protection domain and expose the volumes as devices in a storage architecture which generates a RAID protection over the exposed volumes.
Description
BACKGROUND

As usage of computers and computer-related services increases, storage requirements for enterprises and Internet-related infrastructure companies are exploding at an unprecedented rate. Enterprise applications, both at the corporate and departmental levels, are driving this huge growth in storage requirements. Recent user surveys indicate that the average enterprise has been experiencing a 52% growth rate per year in storage. In addition, over 25% of enterprises experienced more than 50% growth per year in storage needs, with some enterprises registering as much as 500% growth in storage requirements.


Today, several approaches exist for networked storage, including hardware-based systems. These architectures work well but are generally expensive to acquire, maintain, and manage, thus limiting their use to larger businesses. Small and mid-sized businesses might not have the resources, including money and expertise, to utilize the available scalable storage solutions.


SUMMARY

In one aspect, a method includes generating a plurality of protection domains of software-defined storage, generating a volume in each protection domain and exposing the volumes as devices in a storage architecture which generates a RAID protection over the exposed volumes. In another aspect, an apparatus includes electronic hardware circuitry configured to generate a plurality of protection domains of software-defined storage, generate a volume in each protection domain and expose the volumes as devices in a storage architecture which generates a RAID protection over the exposed volumes. In a further aspect, an article includes a non-transitory computer-readable medium that stores computer-executable instructions. The instructions cause a machine to generate a plurality of protection domains of software-defined storage, generate a volume in each protection domain and expose the volumes as devices in a storage architecture which generates a RAID protection over the exposed volumes.





BRIEF DESCRIPTION OF THE DRAWINGS


FIG. 1A is a block diagram of an example of a system to manage a distributed storage space.



FIG. 1B is a block diagram of a relationship between a logical unit and data servers.



FIG. 1C is a block diagram of a system with a host and storage network.



FIG. 2 is a block diagram of an example of a virtual data domain using a distributed storage system.



FIG. 3 is a block diagram of an example of first configuration of the virtual data domain.



FIG. 4 is a block diagram of an example of the first configuration of the virtual data domain after failure of a director.



FIG. 5 is a block diagram of an example of the first configuration of the virtual data domain with additional disks added.



FIGS. 6A and 6B are a block diagram of an example of a second configuration of the virtual data domain.



FIG. 7 is a flowchart of an example of a process to form a scale out architecture.



FIG. 8 is a computer on which any of the processes of FIG. 7 may be implemented.





DETAILED DESCRIPTION

Described herein are techniques to form a protection domain in a storage architecture.


Referring to FIG. 1A, a system 100 to manage a distributed storage space includes a host 102. The host 102 includes a data client 108, a data server 110, application(s) 122, a file system 124, a volume manager 126, a block device driver 128, direct-attached storage (DAS) 130, host bus adapters (HBAs) 132 and a network interface card (NIC) 134. Communications between the application(s) 122 and the file system 124 use file-system semantics.


Communications between the file system 124, the volume manager 126, the block device driver 128, the DAS 130 and the HBAs 132 use block semantics. The data client 108 is a block device driver that exposes shared block volumes to the application 122. The data client 108 serves the I/O requests of the resident host applications 122. The data server 110 is a daemon/service that owns local storage (e.g., DAS 130) that contributes to the storage pool. The data server 110 serves the I/O requests of various data clients 108.


Referring to FIG. 1B, a software-defined storage layer can expose logical units (LUs) or devices, where each device is spread across all the storage devices in all the storage servers in the relevant protection domain. For example, each data server 110a-110d is responsible for handling a portion of a logical unit 180: a portion A 182a of the logical unit 180 is handled by the data server 110a, a portion B 182b is handled by the data server 110b, a portion C 182c is handled by the data server 110c and a portion D 182d is handled by the data server 110d. A portion of the logical unit includes one or more data blocks. In one example, a data block may be 4 KB or 8 KB. In another example, a data block is any size designated by a user. Each data server 110a-110d is responsible for writing data in its respective portion 182a-182d of the logical unit 180 to its respective block storage device.
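The portion-to-server mapping above can be sketched as follows. This is a minimal illustrative model, not the patented implementation: the round-robin placement function and the 4 KB block size are assumptions chosen for clarity, since the patent does not specify how blocks are distributed.

```python
# Hypothetical sketch: which data server owns the block containing a
# given byte offset of a logical unit. Round-robin placement is an
# assumption for illustration only.

BLOCK_SIZE = 4 * 1024  # 4 KB data blocks (the text also mentions 8 KB)

def owning_server(offset: int, servers: list, block_size: int = BLOCK_SIZE) -> str:
    """Return the data server responsible for the block containing `offset`."""
    block_index = offset // block_size
    return servers[block_index % len(servers)]

servers = ["110a", "110b", "110c", "110d"]
# The first four blocks land on successive servers:
assert [owning_server(i * BLOCK_SIZE, servers) for i in range(4)] == servers
```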


Referring to FIG. 1C, a system 100′ includes a host 102′, connected to an external storage subsystem 160 of disks 162 by a fabric 140. The fabric 140 is connected to the external storage subsystem 160 through host bus adapters (HBAs) 150. The fabric 140 includes switches (e.g., switches 142a-142c). The host 102′ includes application(s) 122, a file system 124, a volume manager 126, block device driver 128, and host bus adapters (HBAs) 132 to communicate to the fabric 140.


As will be further described herein, the systems 100, 100′ represent storage architectures that may be used with protection domains.


Referring to FIG. 2, an example of a scale out architecture is a scale out architecture 200. The architecture 200 includes a scale out storage system with protection domains (e.g., EMC® SCALEIO®) with a data domain virtual appliance installed over it. A protection domain is a virtual storage array (volumes) formed on a set of storage devices. Each protection domain has its own failure model, and failure of one protection domain will not cause failure in another protection domain. In this embodiment, the protection domains do not mirror the I/Os, so a failure of one node or one disk will cause the loss of a complete protection domain. (Typically, in software-defined storage, all devices are mirrored so that a failure of a single device does not imply loss of access to the storage; in this case, the devices are not mirrored.)
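The unmirrored failure model described above can be sketched as a small model. This is an illustrative assumption-laden sketch, not the patented mechanism: the `ProtectionDomain` class and its names are hypothetical, and it encodes only the two stated properties (domains fail independently; without mirroring, one lost device loses the whole domain).

```python
# Illustrative model: each protection domain fails independently, and
# because devices inside a domain are not mirrored here, losing any one
# device takes down the whole domain's volume.

class ProtectionDomain:
    def __init__(self, name: str, devices: list):
        self.name = name
        self.devices = set(devices)
        self.failed_devices = set()

    def fail_device(self, device: str) -> None:
        self.failed_devices.add(device)

    @property
    def available(self) -> bool:
        # Without mirroring, a single device failure loses the domain.
        return not self.failed_devices

domains = [ProtectionDomain(f"250{c}", [f"disk{c}{i}" for i in range(3)])
           for c in "abcdefgh"]
domains[0].fail_device("diska0")
assert not domains[0].available               # domain 250a is lost...
assert all(d.available for d in domains[1:])  # ...the others are unaffected
```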


In the example in FIG. 2, there are eight protection domains (250a-250h) formed over storage devices (e.g., a set of disks of the storage subsystem 160, such as the disk 162). A volume from each protection domain is configured. For example, the protection domain 250a exposes a volume 1, the protection domain 250b exposes a volume 2, the protection domain 250c exposes a volume 3, . . . , and the protection domain 250h exposes a volume 8. A data domain virtual appliance is configured to consume the eight virtual volumes and treat each volume as a separate disk drive (the volumes are also striped across multiple devices).


The data domain instance uses RAID 6 over the volumes. In the example in FIG. 2, RAID 6 is formed over eight volumes (6+2). Thus, in such a deployment there is double protection (due to the RAID 6) using less storage; i.e., with no mirroring, availability is achieved by the RAID at an upper storage layer.


If a regular deployment of the scale out architecture (e.g., EMC® SCALEIO® version) is used (i.e., each protection domain also has mirroring between its volumes), the system will protect against up to five failures. The configuration in FIG. 2 is deployable in a hyper-converged infrastructure, where the number of nodes and devices is relatively large.


Multiple data domain instances can be deployed on the same set of protection domains, thus providing multi-tenancy and a scale out architecture. If a single-namespace file system is implemented in the data domain, then this architecture can be used for a single, huge-scale data domain system.


Referring to FIG. 3, a first configuration 300 of the scale out architecture 200 includes a director 202a, a director 202b and storage disks (e.g., storage disks 220a-220h, 222a-222h, 224a-224h), which are dual ported (i.e., both directors 202a, 202b can access the storage disks). The director 202a includes a data domain instance 204 (e.g., using RAID 6 (6+2)) over volumes 206a-206h, a data client 208 and data servers 210a-210d. The director 202b includes data servers 210e-210h. In this configuration, a protection domain 250a is formed for the volume 206a and includes the data server 210a and the devices 220a, 222a, 224a; a protection domain 250b is formed for the volume 206b and includes the data server 210b and the devices 220b, 222b, 224b; . . . ; and a protection domain 250h is formed for the volume 206h and includes the data server 210h and the devices 220h, 222h, 224h.


Referring to FIG. 4, in the configuration 300, if one of the directors fails, the data servers immediately start running on the second director; since the disks are dual ported, access to the disks is not lost. For example, as shown in FIG. 4, the director 202a has failed and the data servers 210a-210d start running on the director 202b, and thus the virtual data domain can continue to run on the second director.
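The failover step can be sketched as a reassignment over a server-to-director map. This is a minimal sketch under stated assumptions: the dict-based registry and the `failover` function are hypothetical illustrations, not the patented mechanism; the dual-ported disks are what make the reassignment sufficient.

```python
# Sketch of dual-director failover: every data server that ran on the
# failed director is reassigned to the surviving one. Because the disks
# are dual ported, the survivor reaches the same storage, so the moved
# servers keep serving their protection domains.

def failover(assignments: dict, failed: str, survivor: str) -> dict:
    """Return a new server-to-director map with `failed`'s servers moved."""
    return {server: (survivor if director == failed else director)
            for server, director in assignments.items()}

assignments = {f"210{c}": "202a" for c in "abcd"}
assignments.update({f"210{c}": "202b" for c in "efgh"})
after = failover(assignments, failed="202a", survivor="202b")
assert all(director == "202b" for director in after.values())
```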


Referring to FIG. 5, in the configuration 300, more devices may be added by adding a disk in each protection domain; data is then automatically re-spread over all the devices. The re-spreading of the data is done by the scale out architecture (e.g., EMC® SCALEIO®) software-defined storage, and there is no awareness of the process at the layer of the data domain. For example, each of the disks 226a-226h is added to a respective domain 250a-250h: the disk 226a is added to the protection domain 250a, the disk 226b is added to the protection domain 250b, . . . , and the disk 226h is added to the protection domain 250h.
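The re-spread over an enlarged disk set can be illustrated as below. The even round-robin layout is an assumption for the sketch; the real rebalancing is performed transparently by the software-defined storage layer and its policy is not described here.

```python
# Illustrative re-spread after adding one disk to a protection domain:
# the same blocks are redistributed evenly across the enlarged disk set.

def respread(blocks: list, disks: list) -> dict:
    """Assign blocks round-robin across the (possibly enlarged) disk set."""
    layout = {disk: [] for disk in disks}
    for i, block in enumerate(blocks):
        layout[disks[i % len(disks)]].append(block)
    return layout

blocks = list(range(12))
before = respread(blocks, ["220a", "222a", "224a"])          # 4 blocks/disk
after = respread(blocks, ["220a", "222a", "224a", "226a"])   # 3 blocks/disk
assert all(len(b) == 4 for b in before.values())
assert all(len(b) == 3 for b in after.values())
```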


Referring to FIGS. 6A and 6B, the architecture 300 can also be scaled out by adding more directors. For example, in an architecture 400, directors 202a′, 202b′ are added. The director 202a′ is similar to the director 202a and the director 202b′ is similar to the director 202b. That is, the director 202a′ includes data servers 210a-210d for the protection domains 250a-250d but over new devices 222a′-222d′, 224a′-224d′, 226a′-226d′, and the director 202b′ includes data servers 210e-210h for the protection domains 250e-250h but over new devices 222e′-222h′, 224e′-224h′, 226e′-226h′. The data is automatically spread by the software-defined storage layer across the new devices added to each protection domain; the data domain layer is not aware of this process. Multiple instances of the data domain can run; for example, one or more instances of a data domain can run on each director. Each instance may use different LUs or volumes exposed by the same eight protection domains 250a-250h.


Referring to FIG. 7, a process 700 is an example of a process to form a scale out architecture, for example, as shown in FIGS. 2 to 6B. Process 700 forms a data domain over a plurality of volumes using RAID protection (704) and forms a protection domain for each volume (708). Each protection domain includes a data server and a plurality of disks, and there is an equal number of disks in each protection domain.
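Process 700 can be sketched end to end. This is a minimal sketch assuming simple dictionary containers; the names mirror the figures but the whole API is hypothetical, and the block numbers (704, 708) are referenced in the comments.

```python
# Hypothetical sketch of process 700: build a protection domain (data
# server + an equal number of disks) per volume, then form a data domain
# with RAID protection over the exposed volumes.

def form_scale_out_architecture(num_domains: int, disks_per_domain: int):
    # Block 708: form a protection domain for each volume, each with one
    # data server and an equal number of disks.
    domains = [{"data_server": f"server-{i}",
                "disks": [f"disk-{i}-{j}" for j in range(disks_per_domain)],
                "volume": f"volume-{i}"}
               for i in range(num_domains)]
    # Block 704: form a data domain over the volumes using RAID protection.
    data_domain = {"raid": "RAID 6 (6+2)",
                   "volumes": [d["volume"] for d in domains]}
    return data_domain, domains

data_domain, domains = form_scale_out_architecture(8, 3)
assert len(data_domain["volumes"]) == 8
assert all(len(d["disks"]) == 3 for d in domains)  # equal disk counts
```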


Referring to FIG. 8, in one example, a computer 800 includes a processor 802, a volatile memory 804, a non-volatile memory 806 (e.g., hard disk) and a user interface (UI) 808 (e.g., a graphical user interface, a mouse, a keyboard, a display, a touch screen and so forth). The non-volatile memory 806 stores computer instructions 812, an operating system 816 and data 818. In one example, the computer instructions 812 are executed by the processor 802 out of the volatile memory 804 to perform all or part of the processes described herein (e.g., the process 700).


The processes described herein (e.g., process 700) are not limited to use with the hardware and software of FIG. 8; they may find applicability in any computing or processing environment and with any type of machine or set of machines that is capable of running a computer program. The processes described herein may be implemented in hardware, software, or a combination of the two. The processes described herein may be implemented in computer programs executed on programmable computers/machines that each includes a processor, a non-transitory machine-readable medium or other article of manufacture that is readable by the processor (including volatile and non-volatile memory and/or storage elements), at least one input device, and one or more output devices. Program code may be applied to data entered using an input device to perform any of the processes described herein and to generate output information.


The system may be implemented, at least in part, via a computer program product (e.g., in a non-transitory machine-readable storage medium such as, for example, a non-transitory computer-readable medium), for execution by, or to control the operation of, data processing apparatus (e.g., a programmable processor, a computer, or multiple computers). Each such program may be implemented in a high-level procedural or object-oriented programming language to communicate with a computer system. However, the programs may be implemented in assembly or machine language. The language may be a compiled or an interpreted language and it may be deployed in any form, including as a stand-alone program or as a module, component, subroutine, or other unit suitable for use in a computing environment. A computer program may be deployed to be executed on one computer or on multiple computers at one site, or distributed across multiple sites and interconnected by a communication network. A computer program may be stored on a non-transitory machine-readable medium that is readable by a general or special purpose programmable computer for configuring and operating the computer when the non-transitory machine-readable medium is read by the computer to perform the processes described herein. For example, the processes described herein may also be implemented as a non-transitory machine-readable storage medium, configured with a computer program, where upon execution, instructions in the computer program cause the computer to operate in accordance with the processes. A non-transitory machine-readable medium may include but is not limited to a hard drive, compact disc, flash memory, non-volatile memory, volatile memory, magnetic diskette and so forth but does not include a transitory signal per se.


The processes described herein are not limited to the specific examples described. For example, the process 700 is not limited to the specific processing order of FIG. 7. Rather, any of the processing blocks of FIG. 7 may be re-ordered, combined or removed, performed in parallel or in serial, as necessary, to achieve the results set forth above.


The processing blocks (for example, in the process 700) associated with implementing the system may be performed by one or more programmable processors executing one or more computer programs to perform the functions of the system. All or part of the system may be implemented as special purpose logic circuitry (e.g., an FPGA (field-programmable gate array) and/or an ASIC (application-specific integrated circuit)). All or part of the system may be implemented using electronic hardware circuitry that includes electronic devices such as, for example, at least one of a processor, a memory, a programmable logic device or a logic gate.


Elements of different embodiments described herein may be combined to form other embodiments not specifically set forth above. Various elements, which are described in the context of a single embodiment, may also be provided separately or in any suitable subcombination. Other embodiments not specifically described herein are also within the scope of the following claims.

Claims
  • 1. A method comprising: generating a plurality of protection domains of software-defined storage, each protection domain comprising a plurality of storage disks having a first port and a second port, each protection domain having an equal number of storage disks; connecting a first director to the first port of each storage disk; connecting a second director to the second port of each storage disk; running a first plurality of data servers on the first director; running a second plurality of data servers on the second director, each one of the first and second plurality of data servers assigned to a respective one protection domain; for each protection domain, generating, on the first director, a virtual volume assigned to the respective protection domain; and exposing each virtual volume as a separate disk drive in a storage architecture which generates, on the first director, a RAID protection over the exposed virtual volumes as a data domain instance, wherein the exposed virtual volumes are protected together under the same RAID protection level; running the first plurality of data servers on the second director in response to a failure of the first director; and running the data domain instance on the second director in response to the failure.
  • 2. The method of claim 1, further comprising adding storage disks by adding an equal number of storage disks to each data protection domain, wherein data is spread over the storage disks.
  • 3. The method of claim 1, further comprising removing storage disks by removing an equal number of storage disks from each data protection domain, wherein data is spread over the storage disks.
  • 4. The method of claim 1, wherein the software-defined storage is mirroring data.
  • 5. The method of claim 1, wherein the software-defined storage is not mirroring data and availability is achieved by the RAID at an upper storage layer.
  • 6. An apparatus, comprising: electronic hardware circuitry configured to: generate a plurality of protection domains of software-defined storage, each protection domain comprising a plurality of storage disks having a first port and a second port, each protection domain having an equal number of storage disks; connect a first director to the first port of each storage disk; connect a second director to the second port of each storage disk; run a first plurality of data servers on the first director; run a second plurality of data servers on the second director, each one of the first and second plurality of data servers assigned to a respective one protection domain; for each protection domain, generate, on the first director, a virtual volume assigned to the respective protection domain; and expose each virtual volume as a separate disk drive in a storage architecture which generates, on the first director, a RAID protection over the exposed virtual volumes as a data domain instance, wherein the exposed virtual volumes are protected together under the same RAID protection level; run the first plurality of data servers on the second director in response to a failure of the first director; and run the data domain instance on the second director in response to the failure.
  • 7. The apparatus of claim 6, wherein the circuitry comprises at least one of a processor, a memory, a programmable logic device or a logic gate.
  • 8. The apparatus of claim 7, further comprising circuitry configured to add an equal number of storage disks to each data protection domain, wherein data is spread over the storage disks.
  • 9. The apparatus of claim 7, further comprising circuitry configured to remove storage disks by removing an equal number of storage disks from each data protection domain, wherein data is spread over the storage disks.
  • 10. The apparatus of claim 6, wherein the software-defined storage is mirroring data.
  • 11. The apparatus of claim 6, wherein the software-defined storage is not mirroring data and availability is achieved by the RAID at an upper storage layer.
  • 12. An article comprising: a non-transitory computer-readable medium that stores computer-executable instructions, the instructions causing a machine to: generate a plurality of protection domains of software-defined storage, each protection domain comprising a plurality of storage disks having a first port and a second port, each protection domain having an equal number of storage disks; connect a first director to the first port of each storage disk; connect a second director to the second port of each storage disk; run a first plurality of data servers on the first director; run a second plurality of data servers on the second director, each one of the first and second plurality of data servers assigned to a respective one protection domain; for each protection domain, generate, on the first director, a virtual volume assigned to the respective protection domain; and expose each virtual volume as a separate disk drive in a storage architecture which generates, on the first director, a RAID protection over the exposed virtual volumes as a data domain instance, wherein the exposed virtual volumes are protected together under the same RAID protection level; run the first plurality of data servers on the second director in response to a failure of the first director; and run the data domain instance on the second director in response to the failure.
  • 13. The article of claim 12, further comprising instructions causing the machine to add an equal number of storage disks to each data protection domain, wherein data is spread over the storage disks.
  • 14. The article of claim 12, further comprising instructions causing the machine to remove storage disks by removing an equal number of storage disks from each data protection domain, wherein data is spread over the storage disks.
  • 15. The article of claim 12, wherein the software-defined storage is mirroring data.
  • 16. The article of claim 12, wherein the software-defined storage is not mirroring data and availability is achieved by the RAID at an upper storage layer.
  • 17. The method of claim 1, wherein the exposed volumes are striped across multiple storage devices within a respective domain.
  • 18. The apparatus of claim 6, wherein the exposed volumes are striped across multiple storage devices within a respective domain.
  • 19. The article of claim 12, wherein the exposed volumes are striped across multiple storage devices within a respective domain.