At least one embodiment of the present invention pertains to storage systems, and more particularly, to a method and apparatus for using application-level context information to detect corrupted data in a storage system.
A storage server is a special-purpose processing system used to store and retrieve data on behalf of one or more client processing systems (“clients”). A storage server can be used for many different purposes, such as to provide multiple users with access to shared data or to back up mission-critical data.
A file server is an example of a storage server. A file server operates on behalf of one or more clients to store and manage shared files in a set of mass storage devices, such as magnetic or optical storage-based disks or tapes. The mass storage devices may be organized into one or more volumes of Redundant Array of Inexpensive Disks (RAID). Another example of a storage server is a device which provides clients with block-level access to stored data, rather than file-level access, or a device which provides clients with both file-level access and block-level access.
In a large scale storage system, it is inevitable that data will become corrupted from time to time. Consequently, virtually all modern storage servers implement various techniques for detecting and correcting errors in data. RAID schemes, for example, include built-in techniques to detect and, in some cases, to correct corrupted data. Error detection and correction is often performed by using a combination of checksums and parity. Error correction can also be performed at a lower level, such as at the disk level.
In file servers and other storage systems, occasionally a write operation executed by the server may fail to be committed to the physical storage media, without any error being detected. The write is essentially “lost” somewhere between the server and the storage media. This type of fault is typically caused by faulty hardware, such as a disk drive or disk drive adapter that silently drops the write without reporting any error. It is desirable for a storage server to be able to detect and correct such “lost writes” any time data is read.
While modern storage servers employ various error detection and correction techniques, these approaches are inadequate for purposes of detecting this type of error. For example, in one well-known class of file server, files sent to the file server for storage are first broken up into 4 KByte blocks, which are then formed into groups that are stored in a “stripe” spread across multiple disks in a RAID array. Just before each block is stored to disk, a checksum is computed for that block, which can be used when that block is subsequently read to determine if there is an error in the block. In one known implementation, the checksum is included in a 64 Byte metadata field that is appended to the end of the block when the block is stored. The metadata field also contains: a volume block number (VBN), which identifies the logical block number where the data is stored (since RAID aggregates multiple physical drives as one logical drive); a disk block number (DBN), which identifies the physical block number within the disk in which the block is stored; and an embedded checksum for the metadata field itself. This error detection technique is referred to as “block-appended checksum” to facilitate discussion.
Block-appended checksum can detect corruption due to bit flips, partial writes, sector shifts and block shifts. However, it cannot detect corruption due to a lost block write, because all of the information included in the metadata field will appear to be valid even in the case of a lost write.
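By way of illustration only, the following Python sketch models a block-appended checksum field and shows why it cannot catch a lost write. The field layout, sizes, and function names are assumptions chosen for illustration, not the actual on-disk format of any particular implementation.

```python
import struct
import zlib

BLOCK_SIZE = 4096   # 4 KByte data block
META_SIZE = 64      # block-appended metadata field

def append_metadata(block: bytes, vbn: int, dbn: int) -> bytes:
    """Append a metadata field holding the block checksum, VBN, DBN,
    and an embedded checksum over the metadata itself."""
    assert len(block) == BLOCK_SIZE
    fields = struct.pack("<IQQ", zlib.crc32(block), vbn, dbn)
    fields += struct.pack("<I", zlib.crc32(fields))  # embedded checksum
    return block + fields.ljust(META_SIZE, b"\x00")

def verify_block(stored: bytes, expected_vbn: int, expected_dbn: int) -> bool:
    """Catches bit flips, partial writes, and shifted blocks -- but a
    lost write leaves the OLD block intact on disk, and that block's
    checksum, VBN, and DBN are all still self-consistent, so this
    check passes even though the data is stale."""
    block, meta = stored[:BLOCK_SIZE], stored[BLOCK_SIZE:]
    cksum, vbn, dbn = struct.unpack_from("<IQQ", meta)
    (embedded,) = struct.unpack_from("<I", meta, 20)
    return (zlib.crc32(meta[:20]) == embedded
            and zlib.crc32(block) == cksum
            and (vbn, dbn) == (expected_vbn, expected_dbn))
```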
Parity in single parity schemes such as RAID-4 or RAID-5 can be used to determine whether there is a corrupted block in a stripe due to a lost write. This can be done by comparing the stored and computed values of parity; if they do not match, the data may be corrupt. However, in the case of single parity schemes, while a single bad block can be reconstructed from the parity and the remaining data blocks, there is not enough information to determine which disk contains the corrupted block in the stripe. Consequently, the corrupted data block cannot be recovered using parity.
With RAID Double Parity (RAID-DP), a technique invented by Network Appliance Inc. of Sunnyvale, California, a single bad block in a stripe can be detected and corrected, or two bad blocks can be detected without correction. It is desirable to be able to detect and correct an error in any block anytime there is a read of that block. However, checking parity in both RAID-4 and RAID-DP is “expensive” in terms of computing resources, and therefore is normally done only when operating in a “degraded mode”, i.e., when an error has been detected, or when scrubbing parity (normally, the parity information is simply updated when a write is done). Hence, using parity to detect a bad block on file system reads is not a practical solution, because it can cause potentially severe performance degradation.
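The following sketch illustrates the single-parity limitation described above; the stripe contents and helper names are hypothetical.

```python
from functools import reduce

def xor_parity(blocks):
    """RAID-4/RAID-5 style parity: byte-wise XOR across the stripe."""
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*blocks))

# Hypothetical 3-data-disk stripe of tiny two-byte "blocks":
stripe = [b"\x01\x02", b"\x10\x20", b"\x0a\x0b"]
parity = xor_parity(stripe)

# A lost write leaves stale data on one disk (disk 1 here):
stale = [b"\x01\x02", b"\xff\xff", b"\x0a\x0b"]

# Recomputed parity no longer matches, so the stripe is known corrupt...
assert xor_parity(stale) != parity
# ...but single parity gives no way to tell WHICH disk is stale, so the
# bad block cannot be identified and reconstructed from parity alone.
```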
Read-after-write is another known mechanism to detect data corruption. In that approach, a data block is read back immediately after writing it and is compared to the data that was written. If the data read back is not the same as the data that was written, then the write did not make it to the storage block. Read-after-write can reliably detect corrupted blocks due to lost writes; however, it also has a severe performance impact, because every write operation is followed by a read operation.
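A minimal sketch of the read-after-write approach follows; the dev object and its write/flush/read methods are hypothetical placeholders for a storage device interface, not an actual API.

```python
def write_with_verify(dev, offset: int, block: bytes) -> None:
    """Read-after-write verification.  `dev` is a hypothetical device
    object; any real interface would differ."""
    dev.write(offset, block)
    dev.flush()  # push the write past any volatile cache
    readback = dev.read(offset, len(block))
    if readback != block:
        # The write was lost or corrupted between server and media.
        raise IOError("lost write detected at offset %d" % offset)
```

The reliability comes at the cost sketched above: every write incurs a matching read, roughly doubling the I/O for write-heavy workloads.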
What is needed, therefore, is a technique for detecting lost writes in a storage system, which overcomes the shortcomings of the above-mentioned approaches.
The present invention includes a method which includes, in response to a request to perform a write operation that affects a data block, writing to a storage device the data block together with context information which uniquely identifies the write operation with respect to the data block. The method further includes reading the data block and the context information together from the storage device, and using the context information that was read with the data block to determine whether the data block is valid.
The invention further includes a system and apparatus that can perform such a method.
Other aspects of the invention will be apparent from the accompanying figures and from the detailed description which follows.
One or more embodiments of the present invention are illustrated by way of example and not limitation in the figures of the accompanying drawings, in which like references indicate similar elements and in which:
A method and apparatus for efficiently detecting lost writes and other similar errors in a storage system are described. As described in greater detail below, in certain embodiments of the invention the method includes using file system context information about stored data to detect lost writes. More specifically, file system context information about a data block is stored in a metadata entry appended to the data block when the data block is written. Later, when the data block is read from storage, the context information stored in the metadata entry is compared with the corresponding context information from the file system for the data block. Any mismatch between the two indicates that the data in the storage block has not been updated due to a lost write, and is therefore invalid, in which case the data can be reconstructed using parity and the data on the remaining disks. One advantage of this technique is that it allows detection of a lost write anytime the affected data block is read.
This technique can be implemented with a file system that does not allow the same physical storage location to be overwritten when a data block is modified, such as the WAFL file system made by Network Appliance, Inc. In such a system, the technique introduced herein has no adverse performance impact, because the context information must be read anyway by the file system from each indirect block associated with a data block on every write of that data block. Therefore, simply writing this context information with the data block does not degrade performance.
As noted, the error detection technique introduced herein can be implemented in a file server.
The file server 2 in
The file server 2 may have a distributed architecture; for example, it may include a separate N-(“network”) blade and D-(disk) blade (not shown). In such an embodiment, the N-blade is used to communicate with clients 1, while the D-blade includes the file system functionality and is used to communicate with the storage subsystem 4. The N-blade and D-blade communicate with each other using an internal protocol. Alternatively, the file server 2 may have an integrated architecture, where the network and data components are all contained in a single box. The file server 2 further may be coupled through a switching fabric to other similar file servers (not shown) which have their own local storage subsystems. In this way, all of the storage subsystems can form a single storage pool, to which any client of any of the file servers has access.
The processors 21 are the central processing units (CPUs) of the file server 2 and, thus, control the overall operation of the file server 2. In certain embodiments, the processors 21 accomplish this by executing software stored in memory 22. A processor 21 may be, or may include, one or more programmable general-purpose or special-purpose microprocessors, digital signal processors (DSPs), programmable controllers, application specific integrated circuits (ASICs), programmable logic devices (PLDs), or the like, or a combination of such devices.
Memory 22 is or includes the main memory of the file server 2. Memory 22 represents any form of random access memory (RAM), read-only memory (ROM), flash memory, or the like, or a combination of such devices. Memory 22 stores, among other things, the operating system 24 of the file server 2, in which the error detection techniques introduced above can be implemented.
Also connected to the processors 21 through the bus system 23 are one or more internal mass storage devices 25, a storage adapter 26 and a network adapter 27. Internal mass storage devices 25 may be or include any conventional medium for storing large volumes of data in a non-volatile manner, such as one or more magnetic or optical based disks. The storage adapter 26 allows the file server 2 to access the storage subsystem 4 and may be, for example, a Fibre Channel adapter or a SCSI adapter. The network adapter 27 provides the file server 2 with the ability to communicate with remote devices, such as the clients 1, over a network and may be, for example, an Ethernet adapter.
Also logically under the file system 31, the operating system 24 includes a storage access layer 34 and an associated storage driver layer 35, to allow the file server 2 to communicate with the storage subsystem 4. The storage access layer 34 implements a higher-level disk storage protocol, such as RAID, while the storage driver layer 35 implements a lower-level storage device access protocol, such as Fibre Channel Protocol (FCP) or SCSI. To facilitate description, it is henceforth assumed herein that the storage access layer 34 implements a RAID protocol, such as RAID-4, and therefore may alternatively be referred to as RAID layer 34.
Also shown in
As shown in
The error detection technique introduced herein will now be described in greater detail with reference to
The technique introduced herein, according to certain embodiments, builds upon the “block-appended checksum” technique. Just before each block is stored to disk, a checksum is computed for the block, which can be used during a subsequent read to determine if there is an error in the block. The checksum is included in a metadata field that is appended to the end of the block just before the block is stored to disk. In certain embodiments, the metadata field appended to each 4 KByte block is 64 bytes long. The metadata field also contains a volume block number (VBN), which identifies the logical block number where the data is stored, a disk block number (DBN), which identifies the physical block number within the disk in which the block is stored, and an embedded checksum for the metadata field itself.
In accordance with the invention, context information from the file system is also included in the metadata field. The context information is information which describes the context of the data block. In particular, the context information uniquely identifies a specific write operation relative to the block being stored, i.e., information which can be used to distinguish the write of that block from a prior write of that block.
In certain embodiments, the file server 2 uses inodes to keep track of stored data. For purposes of this description, the term “inode” is used here in essentially the same manner as in a UNIX-based system. More specifically, an inode is a data structure, stored in an inode file, that keeps track of which logical blocks of data in the storage subsystem 4 are used to store each file. Normally, each stored file is represented by a corresponding inode. A data block can be referenced directly by an inode. More commonly, however, as shown in
According to certain embodiments, the context information generated by the file system 31 is stored in the block-appended metadata associated with each stored block. In other embodiments, however, the context information may be incorporated into the data block itself. To facilitate discussion, the remainder of this description assumes that the context information is stored in the block-appended metadata field.
In certain embodiments, the context information includes the file block number (FBN) of the data block and the inode number of the data block. The FBN is the offset of the data block within the file to which the data block belongs. The FBN and the inode number may both be 4-byte words, for example. The context information may also include a generation number for the data block, as explained below.
The context information for a data block should uniquely identify a particular write to the data block. In certain embodiments, the file system 31 does not allow the same physical storage location to be overwritten when a data block is modified; instead, the data block is written to a different physical location each time it is modified. In such embodiments, the FBN and inode number are sufficient to uniquely identify the data block and, moreover, to uniquely identify a particular write of that data block.
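As an illustration of this embodiment, the sketch below models the (inode number, FBN) context pair and the mismatch that reveals a lost write; the class and field names are illustrative assumptions, not taken from any actual implementation.

```python
from typing import NamedTuple

class BlockContext(NamedTuple):
    """Illustrative context record: 4-byte inode number plus 4-byte FBN."""
    inode_number: int
    fbn: int

# Physical block last successfully written for file block 7 of inode 96:
on_disk = BlockContext(inode_number=96, fbn=7)

# The file system later writes the same physical block for inode 200,
# FBN 3, but the hardware silently drops the write.  On the next read,
# the expected context no longer matches what the disk returns:
expected = BlockContext(inode_number=200, fbn=3)
assert on_disk != expected  # mismatch => lost write detected
```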
Note that in a real storage system the number of blocks is finite, which means that sooner or later the storage system will have to reuse blocks that it used in the past but freed when the modified data was written to a different disk block. In such systems (the WAFL file system made by Network Appliance Inc. is one example), the probability of a block being reused for the exact same context can be small enough for the technique described here to be useful.
If the implementation permits data blocks to be overwritten in place, it is necessary to use additional context information to uniquely identify a particular write of a particular data block. In such implementations, the generation number can be used for that purpose. The generation number is an increasing counter used to determine how many times the data block has been written in place. Note, however, that use of a generation number may adversely impact performance, since all indirect blocks must be updated each time the generation number is updated.
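A rough sketch of a per-block generation counter follows, under an assumed structure; it is meant only to show where the counter is incremented and why indirect blocks must be updated along with it.

```python
class InPlaceBlockContext:
    """Sketch of context for an overwrite-in-place file system (assumed
    structure).  The generation number makes (inode, FBN, generation)
    unique per write even when the physical location is reused."""

    def __init__(self, inode_number: int, fbn: int):
        self.inode_number = inode_number
        self.fbn = fbn
        self.generation = 0  # counts in-place writes of this block

    def next_write_context(self) -> tuple:
        # Each in-place write bumps the counter; every indirect block
        # that records this context must be updated too, which is the
        # performance cost noted above.
        self.generation += 1
        return (self.inode_number, self.fbn, self.generation)
```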
The file system 31 manages the context information of the data and passes that information down to the storage access (e.g., RAID) layer 34 with the data on read and write operations. The storage access layer 34 stores the context information in the block-appended metadata field on writes. On reads the storage access layer 34 extracts the context information from the metadata field and, in certain embodiments, compares it with corresponding context information passed down by the file system 31 for that data block. In other embodiments, the storage access layer 34 simply passes the extracted context information up to the file system 31, which does the comparison. In either case, if there is a mismatch, the data block is determined to be corrupted. In that case, the data block is reconstructed, and the reconstructed data block is written back to disk.
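The read path just described might be sketched as follows; the fs and raid objects and all of their methods are hypothetical stand-ins for the file system layer 31 and storage access layer 34, not actual interfaces.

```python
def read_block(fs, raid, inode_number: int, fbn: int) -> bytes:
    """Read-path sketch: the file system passes the expected context
    down, the RAID layer compares it against the context stored in the
    block's metadata, and a mismatch is treated as a lost write."""
    expected = fs.expected_context(inode_number, fbn)
    data, stored = raid.read_with_metadata(inode_number, fbn)
    if stored != expected:
        # Context mismatch: rebuild the block from parity plus the
        # other disks, then write the repaired block back to disk.
        data = raid.reconstruct_from_parity(inode_number, fbn)
        raid.write_with_metadata(inode_number, fbn, data, expected)
    return data
```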
As shown in
In certain situations in a file system, it may be necessary to move all or some of the blocks of one inode to another inode. This may be done for any of various reasons that are not germane to the invention, such as for file truncation purposes. If a file or a portion thereof is moved to another inode, the inode number stored in the metadata field 61 will become invalid. Consequently, in embodiments which permit the reassignment of data blocks from one inode to another, an artificial identifier, referred to as a bufftree ID, is substituted for the inode number in the metadata field. For this purpose, what is needed is an identifier that is associated with the blocks of an inode rather than the inode itself. The bufftree ID can be a random number, generated and stored inside the inode when the inode is allocated its first block. When an inode inherits some or all of the blocks of another inode, it also inherits the bufftree ID of that inode. Hence, the bufftree ID stored in the metadata field 61 for a given data block will remain valid even if the data block is moved to a new inode.
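A brief sketch of bufftree ID allocation and inheritance follows; the Inode class and its methods are illustrative assumptions, not any file system's actual data structures.

```python
import random

class Inode:
    """Bufftree ID sketch (assumed API): the ID is tied to an inode's
    blocks rather than the inode itself, so it survives block moves."""

    def __init__(self):
        self.bufftree_id = None
        self.blocks = []

    def allocate_first_block(self, block: bytes) -> None:
        if self.bufftree_id is None:
            # Random identifier chosen when the inode gets its first block.
            self.bufftree_id = random.getrandbits(32)
        self.blocks.append(block)

    def inherit_blocks_from(self, other: "Inode") -> None:
        # Inheriting another inode's blocks also inherits its bufftree
        # ID, so the context stored in each block's metadata stays valid.
        self.bufftree_id = other.bufftree_id
        self.blocks.extend(other.blocks)
```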
Thus, a method and apparatus for efficiently detecting lost writes in a storage system have been described. Although the present invention has been described with reference to specific exemplary embodiments, it will be recognized that the invention is not limited to the embodiments described, but can be practiced with modification and alteration within the spirit and scope of the appended claims. Accordingly, the specification and drawings are to be regarded in an illustrative sense rather than a restrictive sense.
This application is a continuation of co-pending U.S. application Ser. No. 10/951,644, filed Sep. 27, 2004, which is also related to U.S. patent application Ser. No. 09/696,666, filed on Oct. 25, 2000 and entitled “Block-Appended Checksums,” and U.S. patent application Ser. No. 10/152,448, filed on May 21, 2002 and entitled “System and Method for Emulating Block-Appended Checksums on Storage Devices by Sector Stealing”.
Related U.S. Application Data:
Parent application: Ser. No. 10/951,644, filed Sep. 2004 (US).
Child application: Ser. No. 13/918,428 (US).