The present invention relates to snapshots of file systems in data storage systems.
Files exist to store information on storage devices (e.g., magnetic disks) and allow the information to be retrieved later. A file system is a collection of files and directories plus operations on them. To keep track of files, file systems have directories. A directory entry provides the information needed to find the blocks associated with a given file. Many file systems today are organized in a general hierarchy (i.e., a tree of directories) because it gives users the ability to organize their files by creating subdirectories. Each file may be specified by giving the absolute path name from the root directory to the file. Every file system also maintains file attributes, such as each file's owner and creation time, which must be stored somewhere, such as in a directory entry.
A snapshot of a file system captures the content (i.e., files and directories) at an instant in time. A snapshot results in two data images: (1) the active data that an application can read and write as soon as the snapshot is created and (2) the snapshot data. Snapshots can be taken periodically (e.g., hourly, daily, or weekly) or on user demand. They are useful for a variety of applications, including recovery of earlier versions of a file following an unintended deletion or modification, backup, data mining, and testing of software.
The need for high data availability often requires frequent snapshots that consume resources such as memory, internal memory bandwidth, storage device capacity, and storage device bandwidth. Some important issues for snapshots of file systems are how to manage the allocation of space in the storage devices, how to keep track of the blocks of a given file, and how to make snapshots of file systems work efficiently and reliably.
The invention provides methods and systems for management of snapshots of a file system. In a first aspect of the invention, a snapshot management system performs a method for managing multiple snapshots and an active file system by (a) maintaining an index table that contains an entry for each snapshot and the active file system; and (b) maintaining a space map block entry (b, e), wherein b and e represent index table entries, b indicates a first snapshot that uses a first block, and e indicates a last snapshot that uses the first block.
In a second aspect of the invention, a snapshot management system, including a processor, for maintaining multiple snapshot versions and an active file system, comprises: (a) an index table that contains an entry for each snapshot and the active file system; (b) a space map block including a space map block entry (b, e), wherein b and e represent index table entries, b indicates a first snapshot that uses a first block, and e indicates a last snapshot that uses the first block; and (c) a usable space for storing the snapshot versions and the active file system.
In another aspect of the invention, a method of snapshot management maintains multiple snapshot versions and an active file system, comprising: (a) maintaining a space map block entry (b, e), wherein b and e represent index table entries, b indicates a first snapshot that uses a first block, and e indicates a last snapshot that uses the first block; and (b) maintaining a snapspace matrix that counts the occurrences of (b, e) for every space map block entry.
In another aspect of the invention, a snapshot management system, including a processor, for maintaining multiple snapshot versions and an active file system, comprises an index table that contains an entry for each snapshot and the active file system; a space map block entry (b, e), wherein b and e represent index table entries, b indicates a first snapshot that uses a first block, and e indicates a last snapshot that uses the first block; and a usable space for storing the snapshot versions and the active file system.
In another aspect of the invention, a method determines if a block was modified in a file system by comparing the space map block entry (b, e) of the block with the versions of a base snapshot and a delta snapshot.
In another aspect of the invention, a method searches for modified blocks in a tree structured file system.
a illustrates a diagram of an active file system with a request to revert to an earlier snapshot.
b illustrates a diagram of an active file system on hold to obsolete snapshots after the earlier snapshot.
c illustrates a diagram of the cleaning of the obsolete snapshots.
d illustrates a diagram of the file system after reversion to the earlier snapshot.
a-22g illustrate block modifications with respect to a base snapshot and a delta snapshot.
The following description includes the best mode of carrying out the invention. The detailed description is made for the purpose of illustrating the general principles of the invention and should not be taken in a limiting sense. The scope of the invention is determined by reference to the claims. Each part is assigned its own part number throughout the specification and drawings.
In an embodiment, the first host includes a motherboard with a CPU-memory bus 14 that communicates with dual processors 12 and 41. The processor used is not essential to the invention and could be any suitable processor such as the Intel Pentium 4 processor. A processor could be any suitable general purpose processor running software, an ASIC dedicated to perform the operations described herein or a field programmable gate array (FPGA). Also, one could implement the invention using a single processor in each host or more than two processors to meet more stringent performance requirements. The arrangement of the processors is not essential to the invention.
The first host cache memory 20 includes a cache manager 13, a cache directory 15, and cache lines 16. The cache memory 20 is nonvolatile memory or volatile memory or a combination of both. Nonvolatile memory protects data in the event of a power interruption or a host failure. Data is defined as including user data, instructions, and metadata. Nonvolatile memory may be implemented with inherently nonvolatile semiconductor memory or with DRAM backed by a battery that supplies power when a conventional external power interrupt circuit detects a power interruption.
Each host includes a bus adapter 22 between the CPU-memory bus 14 and an interface bus 24. Each host runs an operating system such as Linux, UNIX, a Windows OS, or another suitable operating system. Tanenbaum, Modern Operating Systems (2001) describes operating systems in detail and is hereby incorporated by reference. The first host is representative of the other hosts, but this feature is not essential to the invention.
The first host can communicate with the second host through an interconnect 40, shown connected to the interface bus 24 through an adapter 25. The PCI bus is one suitable interface bus, and the interconnect 40 may be any suitable known bus, SAN, LAN, or WAN technology. In an embodiment, the interconnect 40 is a dedicated Fibre Channel (FC) point-to-point link that connects to FC-PCI bus adapter 25 to provide fast point-to-point communication between the hosts.
In an alternative embodiment, the interconnect network 30 such as a FC fabric provides extra bandwidth for host-to-host communications. In this embodiment, links 28, 38 connect to the interconnect network 30 and the hosts use link 28 and link 38 when available. FC standard software can set priority levels to ensure high priority peer-to-peer requests, but there will still be some arbitration overhead and latency in claiming ownership of the links. For example, if links 28 and 38 are busy transferring data when a write request arrives, that operation must complete before either link is free for arbitration.
If the interconnect 40 ever fails, communication between hosts can be handled using the interconnect network 30. The interconnect network 30 can be implemented by interconnects used in data storage systems such as Fibre Channel, SCSI, InfiniBand, or Ethernet, and the type of interconnect is not essential to the invention. In either embodiment, redundant communication between hosts ensures the data storage system has high availability. Clark, IP SANs: A Guide to iSCSI, iFCP, and FCIP Protocols for Storage Area Networks (2002) and Clark, Designing Storage Area Networks (1999) are incorporated herein by reference.
In an embodiment, the data storage subsystems shown in
As shown in
Each index table includes an index value of the active file system 17 permitting fast location of the active file system. The index table includes a known algorithm to verify the data integrity such as a checksum 18, a cyclic redundancy check, or a digital signature. The index table provides an index to the snapshots and the active file system. Each entry in the index table represents a snapshot or the active file system. As illustrated, the index range is 1-255, but this range is not essential to the invention. In various embodiments, each snapshot and the active file system has one or more associated attributes such as a version number 19, timestamp 23, and/or image name 29 to identify the snapshot or active file system, an image state 21, and a root block pointer 27, as described below.
When the data storage system takes a snapshot of the file system it assigns the snapshot a unique version number such as a 32-bit unsigned integer that increases monotonically for each subsequent snapshot. The version number is not reused even as snapshots are deleted or made obsolete to the file system.
The image state can be one of the following states:
In an embodiment, when the data storage system takes a snapshot of the file system, the host provides a timestamp (e.g., time and date) when the snapshot or active data image was created. The root block pointer provides the address of the root block in the hierarchical structure of the snapshot and the image name is a character string used to easily identify the snapshot to users.
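For illustration only, the attributes just described might be modeled as in the sketch below. The field names, types, and the particular image states listed are assumptions made for this sketch, since the actual set of image states is enumerated elsewhere in the specification.

```python
from dataclasses import dataclass
from enum import Enum, auto


class ImageState(Enum):
    # Placeholder state names for this sketch; the specification defines the
    # actual set of image states.
    ACTIVE = auto()
    IN_USE_SNAPSHOT = auto()
    DELETED_SNAPSHOT = auto()
    OBSOLETE_SNAPSHOT = auto()
    FREE = auto()


@dataclass
class IndexTableEntry:
    """One index table entry: a snapshot or the active file system."""
    version: int        # 32-bit unsigned, monotonically increasing, never reused
    image_state: ImageState
    timestamp: float    # time and date the image was created
    root_block: int     # root block pointer into the image's hierarchical structure
    image_name: str     # character string identifying the image to users


# As illustrated, the index range is 1-255 plus the active file system,
# although this range is not essential to the invention.
index_table: list[IndexTableEntry | None] = [None] * 256
```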
Referring to
In an alternative embodiment, each space map block entry contains a pair of version numbers (e.g., 32-bit) that represent snapshots or the active file system. Thus, each version pair (b, e) in the space map block would be used to track the usage of an associated block in the usable space.
Referring to step 105 of
At step 44, if the space map block entry of the block associated with the received data indicates that an in-use snapshot uses the block, that is, the space map block entry is (b, 0), the host allocates a free-to-use block for the received data at step 60. At step 62, the host adds the received data to the newly allocated block. At step 63, the host changes the space map block entry of the newly allocated block from (0, 0)→(a, 0), indicating the new block is used by the active file system only. At step 64, the host updates the file system block pointers to point to the new data. At step 66, the host determines if there are other in-use snapshots pointing to the same old block. If the index b is associated with the latest snapshot version number, there are no other in-use snapshots pointing to the same old block. Therefore, at step 67, the host updates the old space map block entry from (b, 0)→(b, b), indicating snapshot b is the only snapshot pointing to the associated old block and that the old data has been modified since snapshot b was created. If the index b is not associated with the latest snapshot version number, there is another in-use snapshot pointing to the same old block. Therefore, at step 68, the host updates the old space map block entry from (b, 0)→(b, e) to indicate that snapshot b is the beginning snapshot and snapshot e is the ending snapshot (i.e., the current in-use snapshot with the latest snapshot version number) pointing to the associated old block. In this case, there may be other snapshots with version numbers less than snapshot e and greater than snapshot b pointing to the same old block. In either case, the block management routine returns to normal system operation at step 58.
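A rough sketch of this write path follows; it is not the patent's implementation, and the helper names (space_map, blocks, update_pointers, alloc_free_block) are illustrative. For brevity, the sketch compares the index b directly with the index of the latest in-use snapshot rather than looking up version numbers through the index table, and it includes the overwrite-in-place case handled by earlier steps of the flow chart for context.

```python
def alloc_free_block(space_map):
    """Return a block whose space map entry is (0, 0), i.e. free-to-use."""
    for blk, entry in space_map.items():
        if entry == (0, 0):
            return blk
    raise RuntimeError("no free-to-use blocks")


def write_block(old_block, data, active_idx, latest_snap_idx,
                space_map, blocks, update_pointers):
    """Copy-on-write update of old_block; returns the block that now holds data."""
    b, e = space_map[old_block]

    if (b, e) == (active_idx, 0):
        # Only the active file system uses the block: overwrite in place
        # (handled by earlier steps of the flow chart, shown here for context).
        blocks[old_block] = data
        return old_block

    # Steps 44/60: an in-use snapshot uses the block, i.e. the entry is (b, 0),
    # so allocate a free-to-use block for the received data.
    new_block = alloc_free_block(space_map)
    blocks[new_block] = data                    # step 62
    space_map[new_block] = (active_idx, 0)      # step 63: (0, 0) -> (a, 0)
    update_pointers(old_block, new_block)       # step 64

    if b == latest_snap_idx:
        # Step 67: snapshot b is the only snapshot pointing to the old block.
        space_map[old_block] = (b, b)
    else:
        # Step 68: snapshots b through the latest in-use snapshot still point to it.
        space_map[old_block] = (b, latest_snap_idx)
    return new_block
```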
a illustrates a flow chart for a method to delete a snapshot. At step 75, after receiving a request to delete a snapshot (see also
b illustrates a high level flow chart for cleaning deleted and obsolete snapshots from the space map blocks and index table of the file system. At step 79, the host determines if any obsolete snapshots exist. If yes, the host goes to reference A in
If so, step 314 tests if the beginning index of the space map block entry indicates a snapshot later than the reverted-to snapshot p and the ending index indicates an obsolete snapshot earlier than the copy snapshot c. If so, step 316 sets the space map block entry to (0, 0) to indicate that the entry is free-to-use since no snapshot any longer references it.
If neither of the conditions tested by steps 310 and 314 is true, then step 318 leaves the space map block entry unchanged.
After executing step 312, 316, or 318, step 306 tests if we have processed the last space map block entry in the file system. If we have processed the last entry, processing continues at Reference J on
After completing the processing of all obsolete snapshots in the space map blocks, processing continues at Reference J on
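A hedged sketch of the entry-level decision just described follows. The condition tested at steps 310/312 is not reproduced in this excerpt and appears only as a placeholder, and the version_of and is_obsolete helpers are assumptions.

```python
def clean_after_reversion(entry, version_of, is_obsolete, p_version, c_version):
    """Return the updated space map block entry (b, e) after reverting to snapshot p.

    version_of(idx) gives the version of the snapshot at index table index idx;
    is_obsolete(idx) reports whether that index refers to an obsolete snapshot;
    p_version and c_version are the versions of the reverted-to snapshot p and
    the copy snapshot c.
    """
    b, e = entry

    # ... the test at step 310 and the update at step 312 are not shown here ...

    # Step 314: the entry begins after the reverted-to snapshot p and ends at an
    # obsolete snapshot earlier than the copy snapshot c.
    if (e != 0 and version_of(b) > p_version
            and is_obsolete(e) and version_of(e) < c_version):
        return (0, 0)    # Step 316: no snapshot references the block any longer.

    return (b, e)        # Step 318: leave the entry unchanged.
```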
Returning to
Step 640 similarly tests the ending index of the space map block entry to see if it references a deleted snapshot. If so, step 650 tests if there is a snapshot with version less than the current ending index and later than or equal to the version of the beginning index. If not, step 680 sets the space map block entry to (0, 0) to indicate that the block is free-to-use. Otherwise, step 660 sets the ending index to the latest in-use snapshot before the current ending index.
After completion of either step 660 or 680, step 670 tests for another space map block entry. If there are more space map block entries to process, control returns to step 610. After all space map block entries have been processed, control resumes at Reference K on
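A sketch of the ending-index handling at steps 640 through 680 follows; the beginning-index handling earlier in the loop is omitted, and the is_deleted, version_of, and in_use_snapshots helpers are assumptions for this sketch.

```python
def clean_deleted_ending_index(entry, is_deleted, version_of, in_use_snapshots):
    """Return the updated space map entry (b, e) when e may reference a deleted snapshot.

    in_use_snapshots is an iterable of (index, version) pairs for snapshots that
    remain in use; is_deleted and version_of are lookups into the index table.
    """
    b, e = entry
    if e == 0 or not is_deleted(e):       # Step 640: ending index is not deleted.
        return entry

    # Step 650: is there an in-use snapshot with a version less than that of the
    # current ending index and at least that of the beginning index?
    candidates = [(idx, ver) for idx, ver in in_use_snapshots
                  if version_of(b) <= ver < version_of(e)]
    if not candidates:
        return (0, 0)                     # Step 680: the block is free-to-use.

    # Step 660: set the ending index to the latest in-use snapshot before e.
    latest_idx, _ = max(candidates, key=lambda pair: pair[1])
    return (b, latest_idx)
```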
At times, a user may want to free storage space in the file system. Because some data may not be deleted without prior consent, a user administering a data storage system may seek a quicker way to get more storage space. For example, the user may want to know how much space will be freed if he deletes older snapshots. However, since the present invention provides snapshots that share blocks, and different snapshots share varying amounts of space with each other and with the active file system, it may not be apparent how much space will be freed by deleting a given snapshot.
The invention enables a user to determine in advance how much freeable space will be acquired by deleting a given snapshot.
Referring to
If the user decides instead he wants to start all over again in selecting snapshots for deletion without leaving the user interface for snapshot management, he can interface with another graphical element (e.g., clear selections) and all snapshot selections will be cleared (e.g., marks in the checkbox erased).
After all selections are made as illustrated by the two snapshots in
To present this information in the user interface, the file system maintains the snapshot space statistics in the following manner. The file system will scan all the space map blocks at time intervals and count the number of each type of space map block entry in the space map blocks. Because space map block entries serve as an index to a block in user data space, the blocks can be related to each snapshot. In an embodiment, the invention stores the free space information after a scan (e.g., a scan to free blocks from deleted or obsolete snapshots) and keeps the free space information up to date during operation and with creation and deletion of snapshots.
To keep track of the blocks associated with each snapshot, the file system provides a data structure referred to as snapspace matrix or simply snapspace.
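As a minimal sketch of the counting described above (assuming the illustrated 256-entry index range), the snapspace matrix could be rebuilt from a full scan of the space map block entries as follows:

```python
def build_snapspace(space_map_entries, size=256):
    """Count occurrences of each (b, e) pair across all space map block entries.

    space_map_entries is any iterable of (b, e) pairs read from the space map
    blocks; the result is indexed as snapspace[b][e].
    """
    snapspace = [[0] * size for _ in range(size)]
    for b, e in space_map_entries:
        snapspace[b][e] += 1
    return snapspace
```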
As shown in
Operations that scan and update the space map blocks to remove deleted and obsolete snapshots update the snapspace matrix as described earlier for normal operations. As shown in
File system utilities can use the snapspace matrix to determine the number of blocks a user will free by deleting a snapshot. In one case, snapspace [s,s] indicates the number of blocks that deleting snapshot s will free. As the user considers the deletion of more snapshots, the file system takes into account the cumulative effect of deleting a set of snapshots. An embodiment can simply copy the snapspace matrix and update the copy accordingly as the user considers deleting various snapshots.
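One way such a utility might estimate the cumulative effect is sketched below. The snaps_in_range helper, which lists the in-use snapshots whose versions fall between those of indices b and e, is an assumption; a block is counted as freeable only when every snapshot still referencing it is selected for deletion.

```python
def freeable_blocks(snapspace, delete_set, snaps_in_range, size=256):
    """Estimate how many blocks deleting the snapshots in delete_set would free."""
    freed = 0
    for b in range(1, size):
        for e in range(1, size):   # e == 0 means the active file system still uses the block
            count = snapspace[b][e]
            if count and all(s in delete_set for s in snaps_in_range(b, e)):
                freed += count
    return freed


# For a single snapshot s whose blocks are shared with no other snapshot,
# this reduces to snapspace[s][s], as noted above.
```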
In another aspect, the invention provides a snapspace matrix that reduces the required memory needed to hold the elements of the snapspace matrix updated during normal operations. During normal operation with active index a and the most recent snapshot having index r, the file system changes the space map block entries to (b, r) and allocates new space with entries of the form (a, 0). If we arrange snapspace by columns and put snapspace[b, e] adjacent to snapspace[b+1, e], then only the two columns for e = 0 and e = r are updated during normal operation, so we need to keep in memory only 2×256×8 bytes, or 4,096 bytes.
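A minimal sketch of keeping only those two columns resident follows; the class and method names are illustrative, and flushing the columns back into the full matrix is left out.

```python
class HotSnapspaceColumns:
    """The two snapspace columns updated during normal operation.

    Column e = 0 counts entries of the form (b, 0) (blocks still used by the
    active file system) and column e = r counts entries of the form (b, r),
    where r is the index of the most recent snapshot: 2 x 256 eight-byte
    counters, or 4,096 bytes.
    """

    def __init__(self, active_idx, recent_snap_idx, size=256):
        self.a = active_idx
        self.r = recent_snap_idx
        self.col_0 = [0] * size     # counts for entries (b, 0)
        self.col_r = [0] * size     # counts for entries (b, r)

    def note_allocation(self):
        """A free block changes from (0, 0) to (a, 0)."""
        self.col_0[0] -= 1
        self.col_0[self.a] += 1

    def note_copy_on_write(self, b):
        """An entry changes from (b, 0) to (b, r) when its block is superseded."""
        self.col_0[b] -= 1
        self.col_r[b] += 1
```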
In an embodiment, an array is prepared in advance that contains the timestamps of undeleted snapshots sorted in ascending order. The search for undeleted snapshots with a timestamp between TSB and TSE at step 714 is performed by a binary search of the array of timestamps for any timestamp at least as large as TSB and no larger than TSE.
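A minimal sketch of that lookup using Python's bisect module, where timestamps is the pre-built ascending array and tsb, tse correspond to TSB and TSE:

```python
import bisect


def undeleted_snapshots_between(timestamps, tsb, tse):
    """Return the timestamps of undeleted snapshots in the inclusive window [TSB, TSE]."""
    lo = bisect.bisect_left(timestamps, tsb)    # first timestamp >= TSB
    hi = bisect.bisect_right(timestamps, tse)   # one past the last timestamp <= TSE
    return timestamps[lo:hi]
```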
While the method of
An enterprise may want to protect data contained in its file system by storing a remote copy of the file system off-site in case the primary data storage system fails or a local disaster occurs. Data replication can provide this protection by transmitting the primary file system over a network to a secondary data storage system.
The primary data storage system's file system is actively modified. The primary data storage system maintains a base snapshot of the active file system that represents the contents of the file system of the secondary data storage system. To bring the secondary file system up-to-date after modifications to the blocks of the primary file system, the primary data storage system will periodically (e.g., hourly, daily, or weekly) take a delta snapshot, examine the space map block entries of the file system to identify the modified blocks between the base snapshot and the delta snapshot, and transmit the modified blocks from the primary data storage system to the secondary data storage system.
An enterprise may also protect data in its file system by only backing up the blocks that have been modified since the last back up. The invention provides an efficient way to find the modified blocks.
22a through 22g show the relationship between a block that has an associated space map block entry (b, e) and a base snapshot and a delta snapshot. These relationships explain whether the block has been modified after the base snapshot and is still in use in the delta snapshot and therefore contains new or modified information associated with the delta snapshot.
In
In
In
In
In
In
In
If the snapshot version corresponding to the entry e is greater than or equal to the delta snapshot version at step 825, the method tests if the snapshot version corresponding to the entry b is less than or equal to the delta snapshot version at step 826. If not, the method determines that the block was modified after the delta snapshot (see
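Consolidating the relationships described above into a single predicate, a hedged sketch follows: a block with entry (b, e) holds new or modified data for the delta snapshot relative to the base snapshot when it came into use after the base snapshot and is still in use at the delta snapshot. The version_of helper and the treatment of e = 0 as "still in use by the active file system" are assumptions, and the individual flow-chart steps are not reproduced one for one.

```python
def modified_between(b, e, version_of, base_version, delta_version):
    """Return True if the block with space map entry (b, e) holds data that is
    new or modified in the delta snapshot relative to the base snapshot.

    version_of(idx) maps an index table index to its snapshot version; e == 0
    means the block is still in use by the active file system.
    """
    if b == 0:
        return False                                # (0, 0): free-to-use block
    first_use = version_of(b)
    last_use = float("inf") if e == 0 else version_of(e)

    in_use_at_delta = first_use <= delta_version <= last_use
    new_since_base = first_use > base_version       # the base snapshot never used this block
    return in_use_at_delta and new_since_base
```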
If the block is in the file system, the method tests if the block number is a space map block at step 706. If yes, at step 713, the method reads the space map block version. At step 722, the method tests if the version of the space map block is greater than the version of the base snapshot. If yes, the method proceeds to step 712 and outputs the block number of the modified block. If not, the method increments the block number at step 714 and resumes at step 704.
If step 706 determines that the block number is not a space map block, the method proceeds to step 710 that determines if the block was modified after the base snapshot and before the delta snapshot (
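A sketch of this scan, reusing modified_between from the sketch above; the in_file_system, is_space_map_block, space_map_block_version, and entry_of helpers are assumptions standing in for the corresponding file system metadata lookups.

```python
def find_modified_blocks(num_blocks, in_file_system, is_space_map_block,
                         space_map_block_version, entry_of, version_of,
                         base_version, delta_version):
    """Yield block numbers holding data modified between the base and delta snapshots."""
    for block in range(num_blocks):                       # steps 704/714: walk the block numbers
        if not in_file_system(block):
            continue
        if is_space_map_block(block):                     # step 706
            # Steps 713/722: report a space map block whose own version is
            # newer than the base snapshot.
            if space_map_block_version(block) > base_version:
                yield block                               # step 712
        else:                                             # step 710
            b, e = entry_of(block)
            if modified_between(b, e, version_of, base_version, delta_version):
                yield block                               # step 712
```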
At step 733, the method reads the base snapshot and the delta snapshot versions. At step 734, the method reads (b, e) from the space map block entry that corresponds to the root block of the tree data structure.
At step 736, the method determines if the root block was modified between the base snapshot and the delta snapshot using the method of
Next, the method proceeds to step 740 and determines if the root block is a leaf block (i.e., has no descendants). If so, the method terminates at step 744. If not, the method proceeds to step 742 where the method performs steps 734, 736, 740, 742, and 746 on the direct children of the root block.
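A recursive sketch of this tree search, again reusing modified_between; the entry_of, children_of, and is_leaf helpers are assumptions. The action taken when a block is found unmodified is not spelled out in this excerpt, so the sketch simply reports modified blocks and descends into every non-leaf block.

```python
def search_tree(block, entry_of, children_of, is_leaf, version_of,
                base_version, delta_version, report):
    """Recursively search a tree-structured file system for modified blocks."""
    b, e = entry_of(block)                                               # step 734
    if modified_between(b, e, version_of, base_version, delta_version):  # step 736
        report(block)
    if is_leaf(block):                                                   # step 740
        return                                                           # step 744
    for child in children_of(block):                                     # steps 742/746
        search_tree(child, entry_of, children_of, is_leaf, version_of,
                    base_version, delta_version, report)
```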
This application is a continuation-in-part of U.S. application Ser. No. 11/407,491, Management of File System Snapshots, filed on Apr. 19, 2006, now U.S. Pat. No. 7,379,954 B2, which is a continuation-in-part of U.S. application Ser. No. 11/147,739, Methods of Snapshot and Block Management in Data Storage Systems, filed on Jun. 7, 2005, now U.S. Pat. No. 7,257,606 B2, which is a continuation of U.S. application Ser. No. 10/616,128, Snapshots of File Systems in Data Storage Systems, filed on Jul. 8, 2003, now U.S. Pat. No. 6,959,313 B2, which are all incorporated by reference herein. This application also incorporates by reference herein as follows: U.S. application Ser. No. 10/264,603, Systems and Methods of Multiple Access Paths to Single Ported Storage Devices, filed on Oct. 3, 2002, now abandoned; U.S. application Ser. No. 10/354,797, Methods and Systems of Host Caching, filed on Jan. 29, 2003, now U.S. Pat. No. 6,965,979 B2; U.S. application Ser. No. 10/397,610, Methods and Systems for Management of System Metadata, filed on Mar. 26, 2003, now U.S. Pat. No. 7,216,253 B2; U.S. application Ser. No. 10/440,347, Methods and Systems of Cache Memory Management and Snapshot Operations, filed on May 16, 2003, now U.S. Pat. No. 7,124,243 B2; U.S. application Ser. No. 10/600,417, Systems and Methods of Data Migration in Snapshot Operations, filed on Jun. 19, 2003, now U.S. Pat. No. 7,136,974 B2; U.S. application Ser. No. 10/677,560, Systems and Methods of Multiple Access Paths to Single Ported Storage Devices, filed on Oct. 1, 2003, now abandoned; U.S. application Ser. No. 10/696,327, Data Replication in Data Storage Systems, filed on Oct. 28, 2003, now U.S. Pat. No. 7,143,122 B2; U.S. application Ser. No. 10/837,322, Guided Configuration of Data Storage Systems, filed on Apr. 30, 2004, now U.S. Pat. No. 7,216,192 B2; U.S. application Ser. No. 10/975,290, Staggered Writing for Data Storage Systems, filed on Oct. 27, 2004, now U.S. Pat. No. 7,380,157 B2; U.S. application Ser. No. 10/976,430, Management of I/O Operations in Data Storage Systems, filed on Oct. 29, 2004, now U.S. Pat. No. 7,222,223 B2; U.S. application Ser. No. 11/122,495, Quality of Service for Data Storage Volumes, filed on May 4, 2005 now U.S. Pat. No. 7,418,531 B2; U.S. application Ser. No. 11/245,718, A Multiple Quality of Service File System, filed on Oct. 8, 2005 , now abandoned; and U.S. application Ser. No. 11/408,209, Methods and Systems of Cache Memory Management and Snapshot Operations, filed on Apr. 19, 2006, now U.S. Pat. No. 7,380,059 B2.
Number | Name | Date | Kind |
---|---|---|---|
5317731 | Dias et al. | May 1994 | A |
5664186 | Bennett et al. | Sep 1997 | A |
5819292 | Hitz et al. | Oct 1998 | A |
6038639 | O'Brien et al. | Mar 2000 | A |
6085298 | Ohran | Jul 2000 | A |
6205450 | Kanome | Mar 2001 | B1 |
6247099 | Skazinski et al. | Jun 2001 | B1 |
6289356 | Hitz | Sep 2001 | B1 |
6311193 | Seikdo | Oct 2001 | B1 |
6484186 | Rungta | Nov 2002 | B1 |
6490659 | McKean et al. | Dec 2002 | B1 |
6636878 | Rudoff | Oct 2003 | B1 |
6636879 | Doucette et al. | Oct 2003 | B1 |
6732125 | Autrey et al. | May 2004 | B1 |
6883074 | Lee et al. | Apr 2005 | B2 |
6938134 | Madany | Aug 2005 | B2 |
6959313 | Kapoor et al. | Oct 2005 | B2 |
6978353 | Lee et al. | Dec 2005 | B2 |
7072916 | Lewis et al. | Jul 2006 | B1 |
7111021 | Lewis et al. | Sep 2006 | B1 |
7237080 | Green et al. | Jun 2007 | B2 |
7257606 | Kapoor et al. | Aug 2007 | B2 |
7454445 | Lewis et al. | Nov 2008 | B2 |
7603391 | Federwisch et al. | Oct 2009 | B1 |
7631018 | Lee et al. | Dec 2009 | B2 |
20020049718 | Kleiman et al. | Apr 2002 | A1 |
20020083037 | Lewis et al. | Jun 2002 | A1 |
20020091670 | Hitz et al. | Jul 2002 | A1 |
20020133735 | McKean et al. | Sep 2002 | A1 |
20030018878 | Dorward et al. | Jan 2003 | A1 |
20040133602 | Kusters et al. | Jul 2004 | A1 |
Number | Date | Country |
---|---|---|
WO 0229573 | Apr 2002 | WO |
Number | Date | Country | |
---|---|---|---|
20090006496 A1 | Jan 2009 | US |
Number | Date | Country | |
---|---|---|---|
Parent | 10616128 | Jul 2003 | US |
Child | 11147739 | US |
Number | Date | Country | |
---|---|---|---|
Parent | 11407491 | Apr 2006 | US |
Child | 12154494 | US | |
Parent | 11147739 | Jun 2005 | US |
Child | 11407491 | US |