1. Field of the Invention
The present invention relates generally to storage systems, and in particular to storage systems that use data deduplication to reduce storage utilization.
2. Description of the Related Art
A common goal of most storage systems is to reduce the storage of duplicate data. One technique used to attain this goal is referred to as “deduplication”. Deduplication is a process whereby redundant copies of the same file or file segments within a storage system are deleted. In this manner, a single instance of a given file segment (or other portion of data) is maintained. Such an approach is often referred to as single instance storage.
The advantage of deduplication is simple: it reduces storage consumption by storing only unique data. In a typical storage system, it is common to find duplicate occurrences of individual blocks of data in separate locations. Duplication of data blocks may occur when, for example, two or more files share common data or where a given set of data occurs at multiple places within an individual file. With the use of deduplication, however, only a single copy of a given file segment is written to the storage medium, thereby reducing storage consumption.
The process of data deduplication is often utilized in backup storage applications. Backup applications generally benefit the most from deduplication due to the requirement for recurrent backups of an existing file system. Typically, most of the files within the file system will not change between consecutive backups, and therefore do not need to be stored again.
When a backup data stream is received by a storage system, the data stream is generally partitioned into segments. After partitioning, a fingerprint or other unique identifier is generated from each data segment. The fingerprint of each new data segment is compared to an index of fingerprints created from previously stored data segments. If a match between fingerprints is found, then the newly received data segment may be identical to one already stored in the storage system (i.e., represents redundant data). Therefore, rather than storing the new data segment, this data segment is discarded and a reference pointer is inserted in its place which identifies the location of the identical data segment in the backup data storage system. On the other hand, if the fingerprint does not have a match in the index, then the new data segment is not already stored in the storage system. Therefore, the new fingerprint is added to the index, and the new data segment is stored in the backup storage system.
In a typical deduplication based storage system, the input data stream is partitioned into fixed size segments following the exact sequence of the contiguous data stream. One drawback of this approach is that it fails to eliminate many redundant segments if the alignment between consecutive backup data streams is slightly different. For example, as noted above, when a single machine performs a backup of a given storage system snapshot, most of the data being sent to the backup storage medium will be unchanged from the previous snapshot. However, any individual file modification or deletion within the snapshot image may shift segment boundaries and result in the creation of a totally different set of segments. Consequently, many segments for a given file will not be identical to previous segments for the file—even though most of the data for the file remains unchanged.
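The alignment problem can be demonstrated directly: inserting a single byte near the front of a stream shifts every subsequent fixed-size segment, so essentially none of the fingerprints match between the two versions even though the data is nearly identical. The data contents, sizes, and hash choice below are illustrative:

```python
import hashlib


def fingerprints(data, size):
    """Fingerprints of fixed-size segments taken in stream order."""
    return [hashlib.sha256(data[i:i + size]).hexdigest()
            for i in range(0, len(data), size)]


original = bytes(range(256)) * 4   # a 1 KB "backup image"
modified = b"X" + original         # one byte inserted near the front

a = fingerprints(original, 64)
b = fingerprints(modified, 64)

# Every segment after the insertion point is shifted by one byte, so the
# two fingerprint sets share nothing despite the near-identical data.
shared = set(a) & set(b)
```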
Another drawback with current approaches to deduplication is the strain put on system resources from managing a large number of stored segments and managing the deduplication process. If maximizing the deduplication ratio were the only goal, then choosing a smaller segment size to partition the backup data stream may achieve this goal. However, with a smaller segment size, the number of segments and the fingerprint index may grow too large to be easily managed. For example, as the size of the fingerprint index grows, eventually the index may exceed the size of the available physical memory. When this happens, portions of the index must be stored on disk, resulting in a slowdown in reads and writes to the index, and causing overall sluggish performance of the deduplication process. Additionally, when portions of the index are stored on disk, the task of searching for a fingerprint match will often be the bottleneck delaying the deduplication process. Ideally, the entire fingerprint index is stored in physical memory, and to accomplish this, additional techniques are needed to keep the size of the index relatively small while still achieving a high deduplication ratio. Also, generating fingerprints utilizes valuable processing resources. Thus, reducing the number of fingerprints generated may also decrease the burden on the processing and memory resources of the deduplication storage system.
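A back-of-envelope calculation illustrates the memory tradeoff described above; the 64-byte index entry size (fingerprint plus location plus overhead) is an assumption:

```python
def index_bytes(data_bytes, segment_bytes, entry_bytes=64):
    """Approximate fingerprint-index size: one entry per stored segment.
    entry_bytes is an assumed per-entry cost, not a measured figure."""
    return (data_bytes // segment_bytes) * entry_bytes


TB = 1 << 40
GB = 1 << 30

# Halving the segment size doubles the number of fingerprints, and with it
# the physical memory needed to keep the entire index resident.
small = index_bytes(100 * TB, 8 * 1024)     # 8 KB segments
large = index_bytes(100 * TB, 128 * 1024)   # 128 KB segments
```

Under these assumptions, 100 TB of data indexed at 8 KB granularity needs 800 GB of index, versus 50 GB at 128 KB granularity, which is why segment size alone cannot be minimized without regard to index size.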
In view of the above, improved methods and mechanisms for managing deduplication of data are desired.
Various embodiments of methods and mechanisms for managing deduplication of data are contemplated. In one embodiment, a data stream manager of a deduplication system is coupled to receive a data stream for storage in a storage system. In addition to the data stream, metadata corresponding to the data stream is also received. The data stream manager analyzes the metadata and decides, based upon the metadata, how the data stream is to be partitioned. In various embodiments, the data stream manager partitions the backup data stream into variable sized segments; smaller segments may be used when there is a higher probability of deduplication, and larger segments may be used when there is a lower probability of deduplication.
Also contemplated is a data stream manager coupled to receive a backup data stream and corresponding metadata from a client. In one embodiment, the metadata describes attributes of the data contained within the backup data stream. Such attributes may include an indication as to a type of data included within the data stream. As the data stream manager processes the data stream, it partitions the data stream into segments of various sizes. The choice of segment size may be based at least in part on the type of data included within the data stream.
Also contemplated are embodiments wherein the metadata contains an extent mapping of the data stream. The data stream manager may use this extent mapping to locate file boundaries within the data stream. Other embodiments may utilize other data to identify file boundaries. The data stream manager may then partition the data stream into segments aligned with the file boundaries. Segments of variable sizes may be created in order to align subsequent segments with a file boundary.
These and other features and advantages will become apparent to those of ordinary skill in the art in view of the following detailed descriptions of the embodiments of the approaches presented herein.
The above and further advantages of the methods and mechanisms may be better understood by referring to the following description in conjunction with the accompanying drawings.
In the following description, numerous specific details are set forth to provide a thorough understanding of the methods and mechanisms presented herein. However, one having ordinary skill in the art should recognize that the various embodiments may be practiced without these specific details. In some instances, well-known structures, signals, computer program instructions, and techniques have not been shown in detail to avoid obscuring the approaches described herein.
One or more of the clients on the primary network 130 may also function as a server to a network of other clients. The approaches described herein can be utilized in a variety of networks, including combinations of local area networks (LANs), such as Ethernet networks or Wi-Fi networks, and wide area networks (WANs), such as the Internet, cellular data networks, and other data communication networks. The networks served by the approaches described herein may also have a plurality of backup storage media, depending on the unique storage and backup requirements of each specific network. Storage media may be implemented in accordance with a variety of storage architectures including, but not limited to, a network-attached storage environment, a storage area network (SAN), and a disk assembly directly attached to a client or host computer.
It is also noted that while the following discussion will generally refer to backups and/or backup data, embodiments of the methods and mechanisms described herein may be utilized in association with data that is not part of a backup. For example, the approaches described herein may be used in conjunction with a live/working data store or otherwise.
The media server 120 may be used as part of an in-line deduplication system. An in-line deduplication system may also include clients 105 and 110 that are representative of any number of mobile or stationary clients. The media server 120 may receive the backup data streams from the clients 105 and 110. The data stream manager 150 on the media server 120 may partition the backup data streams into segments. In one embodiment, to avoid sending redundant data segments over the network 130, the data stream manager 150 may communicate with the deduplication engine 160 to identify the unique data segments. After querying the deduplication engine 160, the identified unique data segments may be sent to the deduplication engine 160 running on the backup server 125. From the backup server 125, the data segments may be sent to the backup storage 165.
The backup server 125 may also be used as part of a target deduplication system. A target deduplication system may also include clients 105 and 110. The backup server 125 may receive the backup data streams from the clients 105 and 110, and the data stream manager 155 may partition the backup data stream into segments. The data segments may then be processed by the deduplication engine 160. The deduplication engine 160 may remove redundant data segments before or after the data segments are sent to the backup storage 165.
Client 115 illustrates an embodiment of a client deduplication system. In this embodiment, both the data stream manager 145 and deduplication engine 140 may be present on the client 115. As shown, the client 115 is connected directly to a local backup storage 135. The data stream manager 145 may partition the backup data stream, generated by the client 115, into segments. In one embodiment, to avoid sending redundant data segments to the deduplication engine 140, the data stream manager 145 may communicate with the deduplication engine 140 to identify the unique data segments. After querying the deduplication engine 140, the identified unique data segments may then be sent to the deduplication engine 140 running on the client 115. From the client 115, the data segments may be sent to the backup storage 135.
In one embodiment, metadata associated with a data stream may indicate a type of data included in the corresponding data stream. In such an embodiment, the data stream manager 285 may determine the file type of the data in the data stream by analyzing the metadata associated with the data stream. In one embodiment, the metadata corresponding to a given data stream may be prepended to the data stream. For example, the front (first received portion) of the backup data stream may contain a file system map, and the metadata may be contained inside the map. In another embodiment, the metadata may be interspersed throughout the backup data stream. In another embodiment, the metadata may be sent after the backup data stream. In yet another embodiment, the data stream manager may process the backup data stream to generate the metadata itself. Alternatively, only some of the necessary metadata may be present, in which case the data stream manager may process the backup data stream to generate the additional metadata needed to determine how to partition the data stream. In a further embodiment, a data stream generator may generate the metadata, extent mapping, and data stream, prepend the metadata and extent mapping to the data stream, and then convey the entirety to the data stream manager. The data stream generator may be configured to generate, or otherwise obtain, the specific metadata attributes and extent mapping information required by the data stream manager to partition the data stream into segments.
The metadata may include information about the data within the backup data stream such as file type, format, name, size, extent mapping, extent group information, file permissions (e.g., Read/Write, Read-Only), modification time, access control lists, creation time, or otherwise.
In various embodiments, the same segment size may be used for all data from a specific type of file, with the specific type of file determined from the file type metadata. If the file type metadata is unavailable, the file type may be ascertained by looking at the file name. For example, a file with the name “Readme.doc” may be classified as a word processing document by examining the “.doc” ending to the file name. There may be other metadata attributes that can be used to determine missing metadata attributes through similar semantic hints.
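Falling back from file-type metadata to the file-name extension might be sketched as follows; the extension-to-type mapping, the type names, and the metadata field names are assumptions for illustration:

```python
import os

# Illustrative mapping from file-name extension to file type; the
# extensions and type names here are assumptions, not a fixed standard.
EXTENSION_TYPES = {
    ".doc": "document", ".docx": "document",
    ".xls": "spreadsheet",
    ".mp3": "mp3",
}


def infer_file_type(metadata):
    """Prefer an explicit file-type attribute; fall back to the extension
    of the file name when file-type metadata is unavailable."""
    if metadata.get("file_type"):
        return metadata["file_type"]
    _, ext = os.path.splitext(metadata.get("name", ""))
    return EXTENSION_TYPES.get(ext.lower(), "unknown")
```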
Using the same segment size to partition a specific file type may increase the likelihood of finding fingerprint matches among data segments of the same file type. For example, if data from a word processing document is always partitioned into 128 kilobyte (KB) segment sizes, then finding a match is more likely since each segment is the same size. However, if such documents are broken into a variety of segment sizes, like 32, 64, 128, and 256 KB segments, then the probability of finding a matching segment decreases. For example, if a new data segment is 128 KB, and there exists a matching 128 KB chunk of data in storage that is identical to the new segment except that it is part of a 256 KB data segment, then when the deduplication engine compares fingerprints generated from the two segments, it will conclude they are not identical.
Another possible embodiment of the approaches described herein is to use a large, variable sized segment for partitioning, based on the metadata attributes of file type and file size. For example, files of type “MP3” may not deduplicate as well as other file types; that is, segments from MP3 files rarely find a match in the fingerprint index. Therefore, it may be beneficial to create a large segment size that encompasses all of the data in the MP3 file. With a single segment, only one fingerprint is generated, and the fingerprint index increases by just one entry. The alternative would be to use multiple segments to store the MP3 file, in which case the segment count and the fingerprint index would grow by more than one without a significant increase in the probability of finding redundant data. Rather than a single segment, a relatively small number of segments may instead be used for such data; for MP3 files, however, encompassing the entire file in one large segment may generally be desired. In this case, the metadata used for choosing segment size may include the file type and file size. For example, if the size of the MP3 file is 3.8 megabytes (MB), then a segment size of 3.8 MB may be used. If the size of the MP3 file is 7.4 MB, then a segment size of 7.4 MB may be chosen, and so on.
In a further embodiment, an end-user of the approaches described herein may modify the segment size for a specific type of file. For example, the user may determine that a smaller segment size would allow for more deduplication for word processing documents stored on their system. The user could then reduce the segment size used by the data stream manager to partition word processing documents. In addition, the user may define additional file types for their system that have not been defined by the data stream manager, and may choose the segment size used for partitioning each additional file type. The lookup table 300 may be updated to reflect these user-defined file types and segment sizes.
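A per-file-type lookup table with user overrides and user-defined types might be sketched as follows; the default type names, sizes, and fallback value are illustrative assumptions:

```python
# Illustrative default segment sizes per file type (all values are assumptions).
DEFAULT_SEGMENT_SIZES = {
    "document":    128 * 1024,   # word processing documents
    "spreadsheet": 128 * 1024,
    "database":     32 * 1024,
    "mp3":          None,        # None: whole file in one large segment
}


class SegmentSizeTable:
    """Lookup table mapping file type to segment size, with user overrides."""

    def __init__(self, defaults=DEFAULT_SEGMENT_SIZES):
        self.sizes = dict(defaults)

    def set_size(self, file_type, size):
        # A user may tune an existing type or define an entirely new one.
        self.sizes[file_type] = size

    def size_for(self, file_type, fallback=64 * 1024):
        return self.sizes.get(file_type, fallback)
```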
As is shown in Method 2, 455, the backup data stream is broken into four segments of 16 KB size and one segment of 8 KB size. Segment 460 contains the first 16 KB of file A, segment 465 contains the next 16 KB of file A, segment 470 contains the last 8 KB of file A, segment 475 contains the first 16 KB of file B, and segment 480 contains the last 16 KB of file B.
Consider in this example how the segments would be deduplicated if the first 32 KB of file A 410 are equivalent to the 32 KB of file B 415. For Method 1, 425, no matching segments will be generated because the file boundary between file A and file B occurs in the middle of segment 440; consequently, no deduplication will take place for this method. For Method 2, 455, however, two of the five segments can be discarded, because segment 460 is identical to segment 475, and segment 465 is identical to segment 480. Method 2, 455, recognizes the file boundary between A and B and stores the last 8 KB of file A in an 8 KB segment 470 so that it can store the first 16 KB of file B in its own segment 475. In this example, the recognition of the file boundary within the backup data stream allows two segments to be deduplicated and thus reduces storage utilization.
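The boundary-aligned partitioning of Method 2 can be sketched as follows, using the file sizes from this example; the helper itself and the file contents are illustrative:

```python
def partition_aligned(files, segment_size):
    """Partition a data stream file-by-file so that no segment spans a
    file boundary; the last segment of each file may be short."""
    segments = []
    for data in files:
        for i in range(0, len(data), segment_size):
            segments.append(data[i:i + segment_size])
    return segments


KB = 1024
file_a = b"A" * 16 * KB + b"B" * 16 * KB + b"C" * 8 * KB   # 40 KB file A
file_b = b"A" * 16 * KB + b"B" * 16 * KB                   # 32 KB file B,
                                                           # equal to A's first 32 KB
segments = partition_aligned([file_a, file_b], 16 * KB)
# Five segments of 16, 16, 8, 16, 16 KB; the two file-B segments duplicate
# the first two file-A segments, so two of the five can be deduplicated.
```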
In some embodiments, the backup data streams are collections of contiguous data and may contain one or more files of data. The file or files of data stored in the data stream may correspond to data stored in an extent based file system, and the extents may or may not be in the same order in the data stream as in the original files. Using file extent mapping information to determine the partitioning of input data streams into segments may provide for improved deduplication storage methods. For additional details regarding partitioning based on file extent mapping, see U.S. patent application Ser. No. 12/338,563, filed Dec. 18, 2008, entitled “Method and Apparatus for Optimizing a De-duplication Rate for Backup Streams”, which is hereby incorporated by reference in its entirety.
In one embodiment, the data stream manager may reorder the file data and/or extents prior to partitioning, with the decision to reorder based on the metadata attributes. For example, if one file is broken into non-contiguous extents within the data stream, the extents may be reordered into their original configuration so that the file can form one data segment. This reordering of extents may be done for certain types of files, with the type of file discovered by looking at the metadata. Also, the decision to reorder extents within the backup data stream prior to partitioning may be based on other metadata attributes besides just file type.
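Reassembling non-contiguous extents into original file order before partitioning might be sketched as follows; the `(file_offset, data)` extent representation is an assumption made for illustration:

```python
def reorder_extents(extents):
    """Reassemble one file's extents into original file order before
    partitioning. Each extent is a (file_offset, data) pair; sorting by
    offset restores the file's original byte order."""
    ordered = sorted(extents, key=lambda extent: extent[0])
    return b"".join(data for _, data in ordered)
```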
The deduplication engine 610 may receive the data segments 675 and store them in its memory 645. The deduplication engine 610 may also have a processor 640, and instructions that are executed by the processor 640 may be stored in memory 645. The deduplication engine 610 may use the fingerprint generator 660 to generate fingerprints from the received data segments 675. The deduplication engine 610 may then use the search engine 655 to search for a match to the newly generated fingerprint within the fingerprint index 665. If the search engine 655 finds a match in the fingerprint index 665, then the data segment corresponding to the generated fingerprint may be discarded. If the search engine 655 fails to find a match in the fingerprint index 665, then the corresponding data segment may be sent to the backup storage 680.
The deduplication engine 610 may be a hardware and/or software-based deduplication solution. Deduplication processing may take place before or after the data segments 675 are stored in the backup storage medium 680. In either case, the deduplication engine 610 may attempt to identify previously stored data segments containing identical data. If a match is found, reference pointers to the previously stored data blocks may be stored in a tracking database. In one embodiment, the tracking database may be used to maintain a link between the discarded segment and the identical original segment already in storage. For the deduplication post-processing method, all of the data segments may be written to the backup storage medium before the search for redundant segments is performed. In this method, when a match is found between segments, the redundant segment may be deleted from the backup storage medium.
The links between the deleted data segments and the matching identical segments in storage may be handled in a variety of ways. For example, a tracking database may be used to keep track of all the stored data segments and also track the shared segments that correspond to the deleted redundant segments. In this way, the tracking database may help recreate the data during the restoration process.
After receiving the data segments 675 from the data stream manager 605, the deduplication engine 610 may generate a fingerprint for each data segment. The fingerprint may be generated using a variety of methods, including using hash functions such as MD5, SHA-1, SHA-256, narrow hash, wide hash, weak hash, strong hash, and others. In one approach, a weak hash function may generate a small fingerprint from a data segment, and if a match is found with this small fingerprint, then a strong hash function may generate a larger fingerprint. Then, this larger fingerprint may be compared to the corresponding fingerprint from the likely match. More than one comparison between fingerprints may be required to establish a match because there is a small, nonzero probability that two segments with matching fingerprints are not identical. The fingerprints for all stored segments may be stored in a fingerprint index 665, maintained and managed by the deduplication engine 610.
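The two-tier weak/strong fingerprint check described above might be sketched as follows. The use of Adler-32 as the weak hash, SHA-256 as the strong hash, and the index layout are illustrative choices, not details of the approaches described herein:

```python
import hashlib
import zlib


def weak_fp(segment):
    """Cheap 32-bit fingerprint used only to rule out most non-matches."""
    return zlib.adler32(segment)


def strong_fp(segment):
    """Expensive cryptographic fingerprint used to confirm a likely match."""
    return hashlib.sha256(segment).digest()


def is_duplicate(segment, weak_index):
    """weak_index maps weak fingerprints to lists of (strong_fp, location).
    The strong hash is computed only when the weak hash already matched,
    saving processing resources on the common no-match path."""
    candidates = weak_index.get(weak_fp(segment))
    if not candidates:
        return None                  # no weak match: certainly new data
    strong = strong_fp(segment)
    for fp, location in candidates:
        if fp == strong:
            return location          # strong fingerprints agree: treat as duplicate
    return None                      # weak-hash collision, not a real match
```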
In one embodiment, the deduplication engine and data stream manager may be different processes running on separate computers. In another embodiment, the deduplication engine and data stream manager may run on the same computer. In a further embodiment, the deduplication engine and data stream manager may be combined into a single software process. Also, some or all of the functions typically reserved for the data stream manager may be performed by the deduplication engine, and likewise, some or all of the functions reserved for the deduplication engine may be performed by the data stream manager. For example, in one embodiment, the data stream manager may generate fingerprints for the data segments.
In various embodiments, a deduplication engine may generate fingerprints for data segments created by the data stream manager. Also, the data stream manager may send metadata information associated with the data segments to the deduplication engine. The deduplication engine may then use the metadata to determine whether or not to create a fingerprint for each specific data segment, or it may use the metadata to decide which of a plurality of fingerprint methods to use when generating a fingerprint for each data segment. For example, the deduplication engine may decide not to generate fingerprints for spreadsheet files. The deduplication engine also may maintain separate fingerprint indices, with the fingerprints categorized into indices based on specific metadata associated with the data segments from which the fingerprints were generated. Separate fingerprint indices may allow for more efficient searching for matches.
The method 700 begins in block 705. In block 710, a client or data stream generator generates a data stream and corresponding metadata and conveys the data stream and metadata to a data stream manager. The data stream manager may generally correspond to the data stream manager 285 described above.
The method 800 begins in block 805, and then the data stream manager processes a portion of data from the data stream (block 810). Next, the data stream manager determines the file type by examining the metadata associated with the portion of data (block 815). If the file is of type MP3 (conditional block 820), then the data stream manager determines the file size by examining the metadata associated with the portion of data (block 825). If the file is not of type MP3 (conditional block 820), then the data stream manager selects a segment size to be used for partitioning based on the file type (block 855), for example by using a lookup table such as the lookup table 300 described above.
After the data stream manager determines the file size of the MP3 file (block 825), it compares the size of the file to 16 MB. If the file size is not greater than 16 MB (conditional block 830), then the data stream manager places the file in a segment the same size as the file itself (block 835). If the file size is greater than 16 MB (conditional block 830), then the data stream manager compares the size of the file to 32 MB (conditional block 840). If the file size is not greater than 32 MB (conditional block 840), then the data stream manager places the file in two segments: the first a segment of size 16 MB, and the second storing the remainder of the file (block 845). If the file size is greater than 32 MB, then the data stream manager continues the comparison and places the MP3 file in as many 16 MB segments as are needed to store the entire file, with any remainder in a segment size which may be less than 16 MB (block 850). After partitioning the file into segments in block 835, 845, 850, or 855, the data stream manager checks to see if it has reached the end of the data stream (conditional block 860). If the end of the data stream has been reached, then the method ends in block 865. If there is still data remaining in the data stream, then the data stream manager returns to block 810 to process the next portion of data from the data stream.
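The MP3 segmentation rules walked through above can be expressed compactly; this sketch assumes file and segment sizes are given in bytes:

```python
MB = 1 << 20


def mp3_segment_sizes(file_size):
    """Segment sizes for an MP3 file under the rules described above:
    up to 16 MB, one whole-file segment; up to 32 MB, one 16 MB segment
    plus the remainder; larger files, as many 16 MB segments as needed
    plus any remainder in a final, possibly smaller segment."""
    if file_size <= 16 * MB:
        return [file_size]
    sizes = []
    remaining = file_size
    while remaining > 16 * MB:
        sizes.append(16 * MB)
        remaining -= 16 * MB
    if remaining:
        sizes.append(remaining)
    return sizes
```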
In other illustrative embodiments, a computer readable storage medium storing program instructions is provided. The program instructions, when executed by a computing device, cause the computing device to perform various combinations of the operations outlined above with regard to the illustrated embodiments. In various embodiments, one or more portions of the methods and mechanisms described herein may form part of a cloud computing environment. In such embodiments, resources may be provided over the Internet as services according to one or more various models. Such models may include Infrastructure as a Service (IaaS), Platform as a Service (PaaS), and Software as a Service (SaaS). In IaaS, computer infrastructure is delivered as a service. In such a case, the computing equipment is generally owned and operated by the service provider. In the PaaS model, software tools and underlying equipment used by developers to develop software solutions may be provided as a service and hosted by the service provider. SaaS typically includes a service provider licensing software as a service on demand. The service provider may host the software, or may deploy the software to a customer for a given period of time. Numerous combinations of the above models are possible and are contemplated.
Although several embodiments of approaches have been shown and described, it will be apparent to those of ordinary skill in the art that a number of changes, modifications, or alterations to the approaches as described may be made. Such changes, modifications, and alterations should therefore be seen as within the scope of the methods and mechanisms described herein. It should also be emphasized that the above-described embodiments are only non-limiting examples of implementations.