1. Field of the Invention
The invention relates to storage systems.
2. Related Art
In computer file systems for storing and retrieving information, it is sometimes advantageous to duplicate all or part of the file system. For example, one purpose for duplicating a file system is to maintain a backup copy of the file system to protect against lost information. Another purpose for duplicating a file system is to provide replicas of the data in that file system available at multiple servers, to be able to share load incurred in accessing that data.
One problem in the known art is that known techniques for duplicating data in a file system either are relatively awkward and slow (such as duplication to tape), or are relatively expensive (such as duplication to an additional set of disk drives). For example, known techniques for duplication to tape rely on logical operations of the file system and the logical format of the file system. Being relatively cumbersome and slow discourages frequent use, resulting in backup copies that are relatively stale. When data is lost, the most recent backup copy might then be a day old, or several days old, severely reducing the value of the backup copy.
Similarly, known techniques for duplication to an additional set of disk drives rely on the physical format of the file system as stored on the original set of disk drives. These known techniques use an additional set of disk drives for duplication of the entire file system. Being relatively expensive discourages use, particularly for large file systems. Also, relying on the physical format of the file system complicates operations for restoring backup data and for performing incremental backup.
Accordingly, it would be desirable to provide a method and system for duplicating all or part of a file system that can operate with any type of storage medium without undue complexity or expense, and that can provide all the known functions for data backup and restore. This advantage is achieved in an embodiment of the invention in which consistent copies of the file system are maintained, so that those consistent snapshots can be transferred at the storage block level using the file server's own block-level operations.
Summary of the Invention
The invention provides a method and system for duplicating all or part of a file system while maintaining consistent copies of the file system. The file server maintains a set of snapshots, each indicating a set of storage blocks making up a consistent copy of the file system as it was at a known time. Each snapshot can be used for a purpose other than maintaining the coherency of the file system, such as duplicating or transferring a backup copy of the file system to a destination storage medium. In a preferred embodiment, the snapshots can be manipulated to identify sets of storage blocks in the file system for incremental backup or copying, or to provide a file system backup that is both complete and relatively inexpensive.
In the following description, a preferred embodiment of the invention is described with regard to preferred process steps and data structures. However, those skilled in the art would recognize, after perusal of this application, that embodiments of the invention may be implemented using one or more general purpose processors (or special purpose processors adapted to the particular process steps and data structures) operating under program control, and that implementation of the preferred process steps and data structures described herein using such equipment would not require undue experimentation or further invention.
Inventions described herein can be used in conjunction with inventions described in the following applications:
Each of these applications is hereby incorporated by reference as if fully set forth herein. They are collectively referred to as the “WAFL Disclosures.”
File Servers and File System Image Transfer
A system 100 for file system image transfer includes a file server 110 and a destination file system 120.
The file server 110 includes a processor 111, a set of program and data memory 112, and mass storage 113, and preferably is a file server like one described in the WAFL Disclosures. In a preferred embodiment, the mass storage 113 includes a RAID storage subsystem and stores data for file system 114.
The destination file system 120 includes mass storage, such as a flash memory, a magnetic or optical disk drive, a tape drive, or other storage device. In a preferred embodiment, the destination file system 120 includes a RAID storage subsystem. The destination file system 120 can be coupled directly or indirectly to the file server 110 using a communication path 130.
In a first preferred embodiment, the destination file system 120 is coupled to the file server 110 and controlled by the processor 111 similarly to the mass storage 113. In this first preferred embodiment, the communication path 130 includes an internal bus for the file server 110, such as an I/O bus, a mezzanine bus, or other system bus.
In a second preferred embodiment, the destination file system 120 is included in a second file server 140. The second file server 140, similar to the first file server 110, includes a processor, a set of program and data memory, and mass storage that serves as the destination file system 120 with regard to the first file server 110. The second file server preferably is a file server like one described in the WAFL Disclosures. In this second preferred embodiment, the communication path 130 includes a network path between the first file server 110 and the second file server 140, such as a direct communication link, a LAN (local area network), a WAN (wide area network), a NUMA network, or another interconnect.
In a third preferred embodiment, the communication path 130 includes an intermediate storage medium, such as a tape, and the destination file system 120 can be either the first file server 110 itself or a second file server 140. As shown below, when the file server 110 selects a set of storage blocks for transfer to the destination file system 120, that set of storage blocks can be transferred by storing them onto the intermediate storage medium. At a later time, retrieving that set of storage blocks from the intermediate storage medium completes the transfer.
It is an aspect of the invention that there are no particular restrictions on the communication path 130. For example, a first part of the communication path 130 can include a relatively high-speed transfer link, while a second part of the communication path 130 can include an intermediate storage medium.
It is a further aspect of the invention that the destination file system 120 can be included in the first file server 110, in a second file server 140, or distributed among a plurality of file servers. Transfer of storage blocks from the first file server 110 to the destination file system 120 is thus completely general, and includes the possibility of a wide variety of different file system operations:
In alternative embodiments described herein, the second file server 140 can have a second destination file system. That second destination file system can be included within the second file server 140, or can be included within a third file server similar to the first file server 110 or the second file server 140.
More generally, each nth file server can have a destination file system, either included within the nth file server, or included within an n+1st file server. The set of file servers can thus form a directed graph, preferably a tree with the first file server 110 as the root of that tree.
File System Storage Blocks
As described in the WAFL Disclosures, a file system 114 on the file server 110 (and in general, on the nth file server), includes a set of storage blocks 115, each of which is stored either in the memory 112 or on the mass storage 113. The file system 114 includes a current block map, which records which storage blocks 115 are part of the file system 114 and which storage blocks 115 are free.
As described in the WAFL Disclosures, the file system on the mass storage 113 is at all times consistent. Thus, the storage blocks 115 included in the file system at all times comprise a consistent file system 114.
As used herein, the term “consistent,” referring to a file system (or to storage blocks in a file system), means a set of storage blocks for that file system that includes all blocks required for the data and file structure of that file system. Thus, a consistent file system stands on its own and can be used to identify a state of the file system at some point in time that is both complete and self-consistent.
As described in the WAFL Disclosures, when changes to the file system 114 are committed to the mass storage 113, the block map is altered to show those storage blocks 115 that are part of the committed file system 114. In a preferred embodiment, the file server 110 updates the file system frequently, such as about once every 10 seconds.
Snapshots
As used herein, a “snapshot” is a set of storage blocks, the member storage blocks forming a consistent file system, disposed using a data structure that allows for efficient set management. The efficient set management can include time efficiency for set operations (such as logical sum, logical difference, membership, add member, remove member). For example, the time efficiency can include O(n) time or less for n storage blocks. The efficient set management can also include space efficiency for enumerating the set (such as association with physical location on mass storage or inverting the membership function). The space efficiency can mean about 4 bytes or less per 4 K storage block of disk space, a ratio about 1000:1 better than duplicating the storage space.
As described herein, the data structure for the snapshot is stored in the file system itself, so there is no need to traverse the file system tree to recover it. In a preferred embodiment, each snapshot is stored as a file system object, such as a blockmap. The blockmap includes a bit plane having one bit for each storage block, separate from the bits used to identify whether the storage block is in the active file system.
Moreover, when the file system is backed up, restored, or otherwise copied or transferred, the blockmap within the file system is itself, as part of the same operation, backed up, restored, or otherwise copied or transferred. Thus, operations on the file system inherently preserve its snapshots.
Any particular snapshot can be transferred by any communication technique, including
A collection 200 of snapshots 210 includes one bit plane for each snapshot 210. Each bit plane indicates a set of selected storage blocks 115. In the figure, each column indicates one bit plane (that is, one snapshot 210), and each row indicates one storage block 115 (that is, the history of that storage block 115 being included in or excluded from successive snapshots 210). At the intersection of each column and each row there is a bit 211 indicating whether that particular storage block 115 is included in that particular snapshot 210.
Each snapshot 210 comprises a collection of selected storage blocks 115 from the file system 114 that formed all or part of the (consistent) file system 114 at some point in time. A snapshot 210 can be created based on the block map at any time by copying the bits from the block map indicating which storage blocks 115 are part of the file system 114 into the corresponding bits 211 for the snapshot 210.
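For concreteness, the bit-plane arrangement can be sketched in a few lines of code. The following Python fragment is an illustrative model only, not the implementation described in the WAFL Disclosures; the class and method names are invented, and each bit plane is modeled as a Python integer with bit i set when storage block i is a member.

```python
class BlockMap:
    """Illustrative model of a block map with snapshot bit planes."""

    def __init__(self):
        self.active = 0      # bit plane for the active file system
        self.snapshots = {}  # snapshot name -> saved bit plane

    def allocate(self, block):
        self.active |= 1 << block      # block joins the active file system

    def free(self, block):
        self.active &= ~(1 << block)   # block leaves the active file system

    def create_snapshot(self, name):
        # A snapshot is created by copying the block-map bits that
        # indicate which blocks form the consistent file system.
        self.snapshots[name] = self.active

    def in_snapshot(self, name, block):
        return bool((self.snapshots[name] >> block) & 1)
```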
Differences between the snapshots 210 and the (active) file system 114 include the following:
At selected times, the file server 110 creates a new bit plane, based on the block map, to create a new snapshot 210. As described herein, snapshots 210 are used for backup and mirroring of the file system 114, so in preferred embodiments, new snapshots 210 are created at periodic times, such as once per hour, day, week, month, or as otherwise directed by an operator of the file server 110.
Storage Images and Image Streams
As used herein a “storage image” includes an indicator of a set of storage blocks selected in response to one or more snapshots. The technique for selection can include logical operations on sets (such as pairs) of snapshots. In a preferred embodiment, these logical operations can include logical sum and logical difference.
As used herein, an “image stream” includes a sequence of storage blocks from a storage image. A set of associated block locations for those storage blocks from the storage image can be identified in the image stream either explicitly or implicitly. For a first example, the set of associated block locations can be identified explicitly by including volume block numbers within the image stream. For a second example, the set of associated block locations can be identified implicitly by the order in which the storage blocks from the storage image are positioned or transferred within the image stream.
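As an illustration of the explicit form, the following sketch tags each storage block with its volume block number before transfer. The record layout (a 4-byte big-endian VBN followed by a fixed-size block) is an assumption made for this example, not a format taken from this description.

```python
import struct

BLOCK_SIZE = 4096  # block size assumed for illustration

def write_image_stream(out, blocks):
    """Write (vbn, data) pairs as an image stream with explicit VBNs."""
    for vbn, data in blocks:
        out.write(struct.pack(">I", vbn))  # explicit volume block number
        out.write(data)

def read_image_stream(inp):
    """Yield (vbn, data) pairs back from an image stream."""
    while True:
        header = inp.read(4)
        if not header:
            break
        (vbn,) = struct.unpack(">I", header)
        yield vbn, inp.read(BLOCK_SIZE)
```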
The sequence of storage blocks within the image stream can be optimized for a file system operation. For example, the sequence of storage blocks within the image stream can be optimized for a backup or restore file system operation.
In a preferred embodiment, the sequence of storage blocks is optimized so that copying of an image stream and transfer of that image stream from one file server to another is optimized. In particular, the sequence of storage blocks is selected so that storage blocks identified in the image stream can be, as much as possible, copied in parallel from a plurality of disks in a RAID file storage system, so as to maximize the transfer bandwidth from the first file server.
A storage image 220 comprises a set of storage blocks 115 to be copied from the file system 114 to the destination file system 120.
The storage blocks 115 in the storage image 220 are selected so that when copied, they can be combined to form a new consistent file system 114 on the destination file system 120. In various preferred embodiments, the storage image 220 that is copied can be combined with storage blocks 115 from other storage images 220 (which were transferred at earlier times).
As shown herein, the file server 110 creates each storage image 220 in response to one or more snapshots 210.
An image stream 230 comprises a sequence of storage blocks 115 from a storage image 220. When the storage image 220 is copied from the file system 114, the storage blocks 115 are ordered into the image stream 230 and tagged with block location information. When the image stream 230 is received at the destination file system 120, the storage blocks 115 in the image stream 230 are copied onto the destination file system 120 in response to the block location information.
Image Addition and Subtraction
The system 100 manipulates the bits 211 in a selected set of storage images 220 to select sets of storage blocks 115, and thus form a new storage image 220.
For example, the following different types of manipulation are possible:
The logical sum of two storage images 220 A+B comprises a storage image 220 that includes storage blocks 115 in either of the two original storage images 220. Using the logical sum, the system 100 can determine not just a single past state of the file system 114, but also a history of past states of that file system 114 that were recorded as snapshots 210.
The logical difference of two selected storage images 220 A−B comprises just those storage blocks that are included in the storage image 220 A but not in the storage image 220 B. (To preserve integrity of incremental storage images, the subtrahend storage image 220 B is always a snapshot 210.) A logical difference is useful for determining a storage image 220 having a set of storage blocks forming an incremental image, which can be used in combination with full images.
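Under the bit-plane model sketched earlier, both manipulations reduce to bitwise operations. The following fragment is illustrative only, assuming each storage image is represented as a Python integer bitmap.

```python
def logical_sum(a, b):
    # A + B: blocks included in either storage image.
    return a | b

def logical_difference(a, b):
    # A - B: blocks included in A but not in B (B is a snapshot).
    return a & ~b

a = 0b1011  # blocks 0, 1, 3
b = 0b0011  # blocks 0, 1
assert logical_sum(a, b) == 0b1011
assert logical_difference(a, b) == 0b1000  # block 3 is the incremental change
```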
In alternative embodiments, other and further types of manipulation may also be useful. For example, it may be useful to determine a logical intersection of snapshots 210, so as to determine which storage blocks 115 were not changed between those snapshots 210.
In further alternative embodiments, the system 100 may also use the bits 211 from each snapshot 210 for other purposes, such as to perform other operations on the storage blocks 115 represented by those bits 211.
Incremental Storage Images
As used herein, an “incremental storage image” is a logical difference between a first storage image and a second storage image.
As used herein, in the logical difference A−B, the storage image 220 A is called the “top” storage image 220, and the storage image 220 B is called the “base” storage image 220.
When the base storage image 220 B comprises a full set F of storage blocks 115 in a consistent file system 114, the logical difference A−B includes those incremental changes to the file system 114 between the base storage image 220 B and the top storage image 220 A.
Each incremental storage image 220 has a top storage image 220 and a base storage image 220. Incremental storage images 220 can be chained together when there is a sequence of storage images 220 Ci where a base storage image 220 for each Ci is a top storage image 220 for a next Ci+1.
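Reconstruction from such a chain can be sketched as follows. The fragment assumes each image is a mapping from volume block number to block data, with newer images overriding older ones; the function name is invented for illustration.

```python
def reconstruct(full_image, increments):
    """Rebuild file system blocks from a full image plus a chain of
    incremental images C1, C2, ... ordered oldest to newest."""
    blocks = dict(full_image)     # start from the base (full) image
    for increment in increments:
        blocks.update(increment)  # newer incremental blocks take precedence
    return blocks
```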
Examples of Incremental Images
For a first example, the system 100 can make a snapshot 210 each day, and form a level-0 storage image 220 in response to the logical sum of daily snapshots 210.
The June3.level0 storage image 220 includes all storage blocks 115 in the daily snapshots 210 June3, June2, and June1. Accordingly, the June3.level0 storage image 220 includes all storage blocks 115 in a consistent file system 114 (as well as possibly other storage blocks 115 that are unnecessary for the consistent file system 114 active at the time of the June3 snapshot 210).
In the first example, the system 100 can form an (incremental) level-1 storage image 220 in response to the logical sum of daily snapshots 210 and the logical difference with a single snapshot 210.
It is not required to subtract the June2 and June1 snapshots 210 when forming the June5.level1 storage image 220. All storage blocks 115 that the June5 snapshot 210 or the June4 snapshot 210 have in common with either the June2 snapshot 210 or the June1 snapshot 210 will necessarily also be in the June3 snapshot 210. This is because any storage block 115 that was part of the file system 114 on June2 or June1, and is still part of the file system 114 on June5 or June4, must also have been part of the file system 114 on June3.
In the first example, the system 100 can form an (incremental) level-2 storage image 220 in response to the logical sum of daily snapshots 210 and the logical difference with a single snapshot 210 from the time of the level-1 base storage image 220.
In the first example, the storage images 220 June3.level0, June5.level1, and June7.level2 collectively include all storage blocks 115 needed to construct a full set F of storage blocks 115 in a consistent file system 114.
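The arithmetic of the first example can be checked directly with the bitmap model. The snapshot values below are invented solely to exercise the logical sums and differences; they are not data from this description.

```python
# Hypothetical daily snapshots, one bit per storage block.
june1, june2, june3 = 0b00111, 0b01011, 0b01101
june4, june5 = 0b11101, 0b10101
june6, june7 = 0b10111, 0b10110

june3_level0 = june1 | june2 | june3     # full image: logical sum
june5_level1 = (june4 | june5) & ~june3  # incremental, base June3
june7_level2 = (june6 | june7) & ~june5  # incremental, base June5

# The three images collectively cover every block of the June7 file system.
assert june7 & ~(june3_level0 | june5_level1 | june7_level2) == 0
```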
For a second example, the system 100 can form a different (incremental) level-1 storage image 220 in response to the logical sum of daily snapshots 210 and the logical difference with a single snapshot 210 from the time of the level-0 storage image 220.
Similar to the first example, the storage images 220 June3.level0 and June9.level1 collectively include all storage blocks 115 needed to construct a full set F of storage blocks 115 in a consistent file system 114. There is no particular requirement that the June9.level1 storage image 220 be related to or used in conjunction with the June7.level2 storage image 220 in any way.
File System Image Transfer Techniques
To perform one of these copying operations, the file server 110 includes operating system or application software for controlling the processor 111, and data paths for transferring data from the mass storage 113 to the communication path 130 to the destination file system 120. However, the selected storage blocks 115 in the image stream 230 are copied from the file system 114 to the corresponding destination file system 120 without logical file system processing by the file system 114 on the first file server 110.
In a preferred embodiment, the system 100 is disposed to perform one of at least four such copying operations:
The image stream 230 comprises a sequence of storage blocks 115 from a storage image 220. As in nearly all the image transfer techniques described herein, that storage image 220 can represent a full image or an incremental image:
The image stream 230 can be copied from the file server 110 to the destination file system 120 using any communication technique. This could include a direct communication link, a LAN (local area network), a WAN (wide area network), transfer via tape, or a combination thereof. When the image stream 230 is transferred using a network, the storage blocks 115 are encapsulated in messages using a network communication protocol known to the file server 110 and to the destination file system 120. In some network communication protocols, there can be additional messages between the file server 110 and the destination file system 120 to ensure the receipt of a complete and correct copy of the image stream 230.
The destination file system 120 receives the image stream 230 and identifies the storage blocks 115 from the mass storage 113 to be recorded on the destination file system 120.
When the storage blocks 115 represent a complete and consistent file system 114, the destination file system 120 records that file system 114 without logical change. The destination file system 120 can make that file system 114 available for read-only access by local processes. In alternative embodiments, the destination file system 120 may make that file system 114 available for access by local processes, without making changes by those local processes available to the file server 110 that was the source of the file system 114.
When the storage blocks 115 represent an incremental set of changes to a consistent file system 114, the destination file system 120 combines those changes with that file system 114 to form a new consistent file system 114. The destination file system 120 can make that new file system 114 available for read-only access by local processes.
In embodiments where the destination file system 120 makes the transferred file system 114 available for access by local processes, changes to the file system 114 at the destination file system 120 can be flushed when a subsequent incremental set of changes is received by the destination file system 120.
All aspects of the file system 114 are included in the image stream 230, including file data, file structure hierarchy, and file attributes. File attributes preferably include NFS attributes, CIFS attributes, and those snapshots 210 already maintained in the file system 114.
Disk Copying. In a first preferred embodiment of volume copying (herein called “disk copying”), the destination file system 120 can include a disk drive or other similar accessible storage device. The system 100 can copy the storage blocks 115 from the mass storage 113 to that accessible storage device, providing a copy of the file system 114 that can be inspected at the current time.
When performing disk copying, the system 100 creates an image stream 230, and copies the selected storage blocks 115 from the mass storage 113 at the file server 110 to corresponding locations on the destination file system 120. Because the mass storage 113 at the file server 110 and the destination file system 120 are both disk drives, copying to corresponding locations should be simple and effective.
It is possible that locations of storage blocks 115 at the mass storage 113 at the file server 110 and at the destination file system 120 do not readily coincide, such as if the mass storage 113 and the destination file system 120 have different sizes or formatting. In those cases, the destination file system 120 can reorder the storage blocks 115 in the image stream 230, similar to the “Tape Backup” embodiment described herein.
Tape Backup. In a second preferred embodiment of volume copying (herein called “tape backup”), the destination file system 120 can include a tape device or other similar long-term storage device. The system 100 can copy storage blocks 115 from the mass storage 113 to that long-term storage device, providing a backup copy of the file system 114 that can be restored at a later time.
When performing tape backup, the system 100 creates an image stream 230, and copies the selected storage blocks 115 from the mass storage 113 at the file server 110 to a sequence of new locations on the destination file system 120. Because the destination file system 120 includes one or more tape drives, the system 100 creates and transmits a table indicating which locations on the mass storage 113 correspond to which other locations on the destination file system 120.
Similar to transfer of an image stream 230 using a network communication protocol, the destination file system 120 can add additional information to the image stream 230 for recording on tape. This additional information can include tape headers and tape gaps, blocking or clustering of storage blocks 115 for recording on tape, and reformatting of storage blocks 115 for recording on tape.
File Backup. In a third preferred embodiment of volume copying (herein called “file backup”), the image stream 230 can be copied to a new file within a file system 114, either at the file server 110 or at a file system 114 on the destination file system 120.
Similar to tape backup, the destination file system 120 can add additional information to the image stream 230 for recording in a file. This additional information can include file metadata useful for the file system 114 to locate storage blocks 115 within the file.
Volume Mirroring. In a preferred embodiment, the mirror copy of the file system 114 can be used for takeover by a second file server 140 from the first file server 110, for example if the first file server 110 fails.
When performing volume mirroring, the system 100 first transfers an image stream 230 representing a complete file system 114 from the file server 110 to the destination file system 120. The system 100 then periodically transfers image streams 230 representing incremental changes to that file system 114 from the file server 110 to the destination file system 120. The destination file system 120 is able to reconstruct a most recent form of the consistent file system 114 from the initial full image stream 230 and the sequence of incremental image streams 230.
It is possible to perform volume mirroring using volume copying of a full storage image 220 and a sequence of incremental storage images 220. However, determining the storage blocks 115 to be included in an incremental storage image 220 can take substantial time for a relatively large file system 114 if done by logical subtraction.
As used herein, a “mark-on-allocate storage image” is a subset of a snapshot, the member storage blocks being those that have been added to a snapshot that originally formed a consistent file system.
In a preferred embodiment, rather than performing logical subtraction, as described above, at the time an incremental storage image 220 is about to be transferred, the file server 110 maintains a separate “mark-on-allocate” storage image 220. The mark-on-allocate storage image 220 is constructed by setting a bit for each storage block 115 as it is added to the consistent file system 114. The mark-on-allocate storage image 220 does not need to be stored on the mass storage 113, included in the block map, or otherwise backed up; it can be reconstructed from other storage images 220 already at the file server 110.
When an incremental storage image 220 is transferred, a first mark-on-allocate storage image 220 is used to determine which storage blocks 115 to include in the storage image 220 for transfer. A second mark-on-allocate storage image 220 is used to record changes to the file system 114 while the transfer is performed. After the transfer is performed, the first and second mark-on-allocate storage images 220 exchange roles.
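The role exchange between the two mark-on-allocate images can be modeled in a few lines. This is an illustrative sketch with invented names, assuming only that block allocations can be hooked; it is not the implementation of this description.

```python
class MarkOnAllocate:
    """Two mark-on-allocate bitmaps that exchange roles per transfer."""

    def __init__(self):
        self.images = [0, 0]  # two bit planes, as Python integer bitmaps
        self.recording = 0    # index of the image recording allocations

    def on_allocate(self, block):
        # Set a bit for each storage block as it is added to the
        # consistent file system.
        self.images[self.recording] |= 1 << block

    def begin_transfer(self):
        # The frozen first image determines the blocks to transfer; the
        # second image records changes made while the transfer proceeds.
        frozen = self.recording
        self.recording ^= 1
        return self.images[frozen]

    def end_transfer(self):
        # After the transfer, clear the transferred image; the two
        # images have exchanged roles.
        self.images[self.recording ^ 1] = 0
```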
Full Mirroring. In a first preferred embodiment of volume mirroring (herein called “full mirroring”), the destination file system 120 includes a disk drive or other similar accessible storage device.
Upon the initial transfer of the full storage image 220 from the file server 110, the destination file system 120 creates a copy of the consistent file system 114. Upon the sequential transfer of each incremental storage image 220 from the file server 110, the destination file system 120 updates its copy of the consistent file system 114. The destination file system 120 thus maintains its copy of the file system 114 nearly up to date; that copy can be inspected at any time.
When performing full mirroring, similar to disk copying, the system 100 creates an image stream 230, and copies the selected storage blocks 115 from the mass storage 113 at the file server 110 to corresponding locations on the destination file system 120.
Incremental Mirroring. In a second preferred embodiment of volume mirroring (herein called “incremental mirroring”), the destination file system 120 can include both (1) a tape device or other relatively slow storage device, and (2) a disk drive or other relatively fast storage device.
As used herein, an “incremental mirror” of a first file system is a base storage image from the first file system, and at least one incremental storage image from the first file system, on two storage media of substantially different types. Thus, a complete copy of the first file system can be reconstructed from the two or more objects.
Upon the initial transfer of the full storage image 220 from the file server 110, the destination file system 120 copies a complete set of storage blocks 115 from the mass storage 113 to that relatively slow storage device. Upon the sequential transfer of each incremental storage image 220 from the file server 110, the destination file system 120 copies incremental sets of storage blocks 115 from the mass storage 113 to the relatively fast storage device. Thus, the full set of storage blocks 115 plus the incremental sets of storage blocks 115 collectively represent an up-to-date file system 114, but do not require an entire duplicate disk drive.
When performing incremental mirroring, for the base storage image 220, the system 100 creates an image stream 230 and copies the selected storage blocks 115 from the mass storage 113 at the file server 110 to a set of new locations on the relatively slow storage device. The system 100 writes the image stream 230, including storage block location information, to the destination file system 120. In a preferred embodiment, the system 100 uses a tape as an intermediate destination storage medium, so that the base storage image 220 can be stored for a substantial period of time without having to occupy disk space.
For each incremental storage image 220, the system 100 creates a new image stream 230 and copies the selected storage blocks 115 from the mass storage 113 at the file server 110 to a set of new locations on the accessible storage device. Incremental storage images 220 are created continuously and automatically, at periodic times that are relatively close together.
The incremental storage images 220 are received at the destination file system 120, which unpacks them and records the copied storage blocks 115 in an incremental mirror data structure. As each new incremental storage image 220 is copied, the copied storage blocks 115 overwrite the equivalent storage blocks 115 from earlier incremental storage images 220. In a preferred embodiment, the incremental mirror data structure includes a sparse file structure including only those storage blocks 115 that differ from the base storage image 220.
In a preferred embodiment, the incremental storage images 220 are transmitted to the destination file system 120 with a data structure indicating a set of storage blocks 115 that were deallocated (that is, removed) from the file system on the file server 110; the images are thus “mark-on-deallocate” images of the storage blocks. In response to this data structure, the destination file system 120 removes those indicated storage blocks 115 from its incremental mirror data structure. This allows the destination file system 120 to maintain the incremental mirror data structure at a size no larger than approximately the actual difference between the current file system at the file server 110 and the base storage image 220 from the file server 110.
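The incremental mirror data structure can be modeled as a sparse overlay keyed by block number. The sketch below is illustrative only, assuming copied blocks arrive as (block number, data) pairs alongside a set of deallocated block numbers; the names are invented.

```python
class IncrementalMirror:
    """Sparse overlay of blocks that differ from the base storage image."""

    def __init__(self):
        self.overlay = {}  # volume block number -> block data

    def apply(self, copied_blocks, deallocated):
        # Newly copied blocks overwrite equivalents from earlier images.
        for vbn, data in copied_blocks:
            self.overlay[vbn] = data
        # Blocks deallocated at the source are dropped, keeping the
        # overlay no larger than the actual difference from the base.
        for vbn in deallocated:
            self.overlay.pop(vbn, None)

    def read_block(self, vbn, base):
        # Reconstruction: prefer the overlay, fall back to the base
        # image (here modeled as a dict of vbn -> data).
        return self.overlay.get(vbn, base.get(vbn))
```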
Consistency Points. When performing either full mirroring or incremental mirroring, it can occur that the transfer of a storage image 220 takes longer than the time needed for the file server 110 to update its consistent file system 114 from a first consistency point to a second consistency point. Consistency points are described in further detail in the WAFL Disclosures.
In a preferred embodiment, the file server 110 does not attempt to create a storage image 220 and to transfer storage blocks 115 for every consistency point. Instead, after a transfer of a storage image 220, the file server 110 determines the most recent consistency point (or alternatively, the next consistency point) as the effective next consistency point. The file server 110 uses the effective next consistency point to determine any incremental storage image 220 for a next transfer.
Volume Replication. The file server 110 maintains a set of selected master snapshots 210. A master snapshot 210 is a snapshot 210 whose existence can be known by the destination file system 120, so that the destination file system 120 can be updated with reference to the file system 114 maintained at the file server 110. In a preferred embodiment, each master snapshot 210 is designated by an operator command at the file server 110, and is retained for a relatively long time, such as several months or a year.
In a preferred embodiment, at a minimum, each master snapshot 210 is retained until all known destination file systems 120 have been updated past that master snapshot 210. A master snapshot 210 can be designated as a shadow snapshot 210, but in such cases destination file systems 120 are taken off-line during update of the master shadow snapshot 210. That is, destination file systems 120 wait for completion of the update of that master shadow snapshot 210 before they are allowed to request an update from that master shadow snapshot 210.
The destination file system 120 generates a message (such as upon command of an operator or in response to initialization or self-test) that it transmits to the file server 110, requesting an update of the file system 114. The message includes a newest master snapshot 210 to which the destination file system 120 has most recently synchronized. The message can also indicate that there is no such newest master snapshot 210.
The file server 110 determines any incremental changes that have occurred to the file system 114 from the newest master snapshot 210 at the destination file system 120 to the newest master snapshot 210 at the file server 110. In response to this determination, the file server 110 determines a storage image 220 including storage blocks 115 for transfer to the destination file system 120, so as to update the copy of the file system 114 at the destination file system 120.
If there is no such newest master snapshot 210, the system 100 performs volume copying for a full copy of the file system 114 represented by the newest master snapshot 210 at the file server 110. Similarly, if the oldest master snapshot 210 at the file server 110 is newer than the newest master snapshot 210 at the destination file system 120, the system 100 performs volume copying for a full copy of the file system 114.
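The choice between incremental and full volume copying can be summarized in a short decision sketch. The function and its arguments are invented for illustration, and master snapshots are assumed to be identifiable and ordered oldest to newest.

```python
def plan_replication(server_masters, destination_newest):
    """Decide how to update a destination file system.

    server_masters: master snapshots at the file server, oldest first.
    destination_newest: newest master snapshot at the destination, or None.
    """
    newest = server_masters[-1]
    if destination_newest is None or destination_newest not in server_masters:
        # No common master snapshot (none, or older than the server's
        # oldest): perform a full volume copy of the newest master.
        return ("full", newest)
    if destination_newest == newest:
        return ("up-to-date", newest)
    # Otherwise transfer the incremental difference between the two.
    return ("incremental", destination_newest, newest)
```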
After volume replication, the destination file system 120 updates its most recent master snapshot 210 to be the most recent master snapshot 210 from the file server 110.
Volume replication is well suited to uploading upgrades to a publicly accessible database, document, or web site. Those destination file systems 120, such as mirror sites, can then obtain the uploaded upgrades periodically, when they are initialized, or upon operator command at the destination file system 120. If the destination file systems 120 are not in communication with the file server 110 for a substantial period of time, when communication is re-established, the destination file systems 120 can perform volume replication with the file server 110 to obtain a substantially up-to-date copy of the file system 114.
In a first preferred embodiment of volume replication (herein called “simple replication”), the destination file system 120 communicates directly (using a direct communication link, a LAN, a WAN, or a combination thereof) with the file server 110.
In a second preferred embodiment of volume replication (herein called “multiple replication”), a first destination file system communicates directly (using a direct communication link, a LAN, a WAN, or a combination thereof) with a second destination file system. The second destination file system acts like the file server 110 to perform simple replication for the first destination file system.
A sequence of such destination file systems ultimately terminates in a destination file system that communicates directly with the file server 110 and performs simple replication. The sequence of destination file systems thus forms a replication hierarchy, such as a directed graph or a tree of file servers 110.
In alternative embodiments, the system 100 can also perform one or more combinations of these techniques.
In a preferred embodiment, the file server 110 can maintain a set of pointers to snapshots 210, naming those snapshots 210 and having the property that references to the pointers are functionally equivalent to references to the snapshots 210 themselves. For example, one of the pointers can have a name such as “master,” so that the newest master snapshot 210 at the file server 110 can be changed simultaneously for all destination file systems. Thus, all destination file systems can synchronize to the same master snapshot 210.
Shadow Snapshots
The system 100 includes the possibility of designating selected snapshots 210 as “shadow” snapshots 210.
As used herein, a “shadow snapshot” is a subset of a snapshot, the member storage blocks no longer forming a consistent file system. Thus, at one time the member storage blocks of the snapshot did form a consistent file system, but at least some of the member storage blocks have been removed from that snapshot.
A shadow snapshot 210 has the property that the file server 110 can reuse the storage blocks 115 in the snapshot 210 whenever needed. A shadow snapshot 210 can be used as the base of an incremental storage image 220. In such cases, storage blocks 115 might have been removed from the shadow snapshot 210 due to reuse by the file server 110. It thus might occur that the incremental storage image 220 resulting from logical subtraction using the shadow snapshot 210 includes storage blocks 115 that are not strictly necessary (having been removed from the shadow snapshot 210, they are not subtracted out). However, all storage blocks 115 necessary for the incremental storage image 220 will still be included.
For regular snapshots 210, the file server 110 does not reuse the storage blocks 115 in the snapshot 210 until the snapshot 210 is released. Even if the storage blocks 115 in the snapshot 210 are no longer part of the active file system, the file server 110 retains them without change. Until released, each regular snapshot 210 preserves a consistent file system 114 that can be accessed at a later time.
However, for shadow snapshots 210, the file server 110 can reuse the storage blocks 115 in the shadow snapshot 210. When one of those storage blocks 115 is reused, the file server 110 clears the bit in the shadow snapshot 210 for that storage block 115. Thus, each shadow snapshot 210 represents a set of storage blocks 115 from a consistent file system 114 that have not been changed in the active file system 114 since the shadow snapshot 210 was made. Because storage blocks 115 can be reused, the shadow snapshot 210 does not retain the property of representing a consistent file system 114. However, because the file server 110 can reuse those storage blocks 115, the shadow snapshot 210 does not cause any storage blocks 115 on the mass storage 113 to be permanently occupied.
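The reuse rule distinguishing regular from shadow snapshots can be sketched as follows, reusing the bitmap model from the earlier sketches; the function name and arguments are invented for illustration.

```python
def try_reuse_block(block, snapshots, shadow_names):
    """Attempt to reuse a storage block for new data.

    snapshots: dict of snapshot name -> bitmap (Python integer).
    shadow_names: the snapshot names designated as shadow snapshots.
    """
    mask = 1 << block
    holders = [name for name, plane in snapshots.items() if plane & mask]
    if any(name not in shadow_names for name in holders):
        return False  # a regular snapshot still preserves this block
    for name in holders:
        # Clear the bit: the shadow snapshot no longer claims an
        # unchanged copy of this block.
        snapshots[name] &= ~mask
    return True
```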
Method of Operation
A method 300 is performed by the file server 110 and the destination file system 120, and includes a set of flow points and process steps as described herein.
Generality of Operational Technique
In each of the file system image transfer techniques, the method 300 performs three operations: selecting a storage image 220 to be transferred, forming an image stream 230 and transferring it to the destination file system 120, and reconstructing the file system 114 from the image stream 230 at the destination.
As shown herein, each of these steps is quite general in its application.
In the first (selection) step, the storage image 220 selected can be a complete file system or can be a subset thereof. The subset can be an increment to the complete file system, such as those storage blocks that have been changed, or can be another type of subset. The storage image 220 can be selected a single time, such as for a backup operation, or repeatedly, such as for a mirroring operation. The storage image 220 can be selected in response to a process at a sending file server or at a receiving file server.
For example, as shown herein, the storage image 220 selected can be for a full backup or copying of an entire file system, or can be for incremental backup or incremental mirroring of a file system. The storage image 220 selected can be determined by a sending file server, or can be determined in response to a request by a receiving file server (or set of receiving file servers).
In the second (operational) step, the image stream 230 can be selected so as to optimize the operation. The image stream 230 can be selected and ordered to optimize transfer to different types of media, to optimize transfer rate, or to optimize reliability. In a preferred embodiment, the image stream 230 is optimized to maximize transfer rate from parallel disks in a RAID disk system.
In the third (reconstruction) step, the image stream 230 can be reconstructed into a complete file system, or can be reconstructed into an increment of a file system. The reconstruction step can be performed immediately or after a delay, can be performed in response to the process that initiated the selection step, or can be performed independently in response to other needs.
Selecting a Storage Image
In each of the file system image transfer techniques, the method 300 selects a storage image 220 to be transferred.
At a flow point 370, the file server 110 is ready to select a storage image 220 for transfer.
At a step 371, the file server 110 forms a logical sum LS of a set of storage images 220 A1 and A2, thus LS=A1+A2. The logical sum LS can also include any larger plurality of storage images 220; thus, for example, LS=A1+A2+A3+A4.
At a step 372, the file server 110 determines if the transfer is a full transfer or an incremental transfer. If the transfer is incremental, the method 300 continues with the next step. If the transfer is a full transfer, the method 300 continues with the flow point 380.
At a step 373, the file server 110 forms a logical difference LD of the logical sum LS and a base storage image 220 B, thus LD=LS−B. The base storage image 220 B comprises a snapshot 210.
At a flow point 380, the file server 110 has selected a storage image 220 for transfer.
Volume Copying
At a flow point 310, the file server 110 is ready to perform a volume copying operation.
At a step 311, the file server 110 selects a storage image 220 for transfer, as described with regard to the flow point 370 through the flow point 380. If the volume copying operation is a full volume copy, the storage image 220 selected is for a full transfer. If the volume copying operation is an incremental volume copy, the storage image 220 selected is for an incremental transfer.
At a step 312, the file server 110 determines if the volume is to be copied to disk or to tape.
At a step 313, the file server 110 creates an image stream 230 for the selected storage image 220. In a preferred embodiment, the storage blocks 115 in the image stream 230 are ordered for transfer to disk. Each storage block 115 is associated with a VBN (volume block number) for identification. The method 300 continues with the step 315.
At a step 314, the file server 110 performs the same functions as in the step 313, except that the storage blocks 115 in the image stream 230 are ordered for transfer to tape.
At a step 315, the file server 110 copies the image stream 230 to the destination file system 120 (disk or tape).
In a preferred embodiment, the file server 110 copies the image stream 230 to the destination file system 120 using a communication protocol known to both the file server 110 and the destination file system 120, such as TCP. As noted herein, the image stream 230 used with the communication protocol is similar to the image stream 230 used for tape backup, but can include additional messages or packets for acknowledgement or retransmission of data.
The destination file system 120 presents the image stream 230 directly to a restore element, which copies the image stream 230 onto the target disk(s) of the destination file system 120, placing the storage blocks 115 as they were on the source disk(s). Because a consistent file system 114 is copied from the file server 110 to the destination file system 120, the storage blocks 115 in the image stream 230 can be used directly as a consistent file system 114 when they arrive at the destination file system 120.
The destination file system 120 might have to alter some inter-block pointers, responsive to the VBN of each storage block 115, if some or all of the target storage blocks 115 are recorded in different physical locations on disk from the source storage blocks 115.
The destination file system 120 records the image stream 230 directly onto tape, along with a set of block number information for each storage block 115. The destination file system 120 can later retrieve selected storage blocks 115 from tape and place them onto a disk file server 110. Because a consistent file system 114 is copied from the file server 110 to the destination file system 120, the storage blocks 115 in the image stream 230 can be restored directly to disk when later retrieved from tape at the destination file system 120.
The destination file system 120 might have to alter some inter-block pointers, responsive to the VBN of each storage block 115, if some or all of the target storage blocks 115 are retrieved from tape and recorded in different physical locations on disk from the source storage blocks 115. The destination file system 120 records this information in header data that it writes onto tape.
At a flow point 320, the file server 110 has completed the volume copying operation.
Volume Mirroring
At a flow point 330, the file server 110 is ready to perform a volume mirroring operation.
At a step 331, the file server 110 performs a full volume copying operation, as described with regard to the flow point 310 through the flow point 320. The volume copying operation is performed for a full copy of the file system 114.
At a step 332, the file server 110 sets a mirroring timer for incremental update for the volume mirroring operation.
At a step 333, the mirroring timer is hit, and the file server 110 begins the incremental update for the volume mirroring operation.
At a step 334, the file server 110 performs an incremental volume copying operation, as described with regard to the flow point 310 through the flow point 320. The volume copying operation is performed for an incremental upgrade of the file system 114.
The incremental volume copying operation is performed with disk as the target destination file system 120.
At a step 335, the file server 110 copies the image stream 230 to the target destination file system 120. The method 300 returns to the step 332, at which step the file server 110 resets the mirroring timer, and the method 300 continues.
When the destination file system 120 receives the image stream 230, it records the storage blocks 115 in that image stream 230 similar to the process of volume copying, as described with regard to the step 315.
If the method 300 is halted (by an operator command or otherwise), the method 300 completes at the flow point 340.
At a flow point 340, the file server 110 has completed the volume mirroring operation.
Reintegration of Incremental Mirror
At a flow point 370, the file server 110 is ready to restore a file system from the base storage image 220 and the incremental mirror data structure.
At a step 371, the file server 110 reads the base storage image 220 into its file system.
At a step 372, the file server 110 reads the incremental mirror data structure into its file system and uses that data structure to update the base storage image 220.
At a step 373, the file server 110 remounts the file system that was updated using the incremental mirror data structure.
At a flow point 380, the file server 110 is ready to continue operations with the file system restored from the base storage image 220 and the incremental mirror data structure.
Volume Replication
At a flow point 350, the file server 110 is ready to perform a volume replication operation.
At a step 351, the destination file system 120 initiates the volume replication operation. The destination file system 120 sends an indicator of its newest master snapshot 210 to the file server 110, and requests the file server 110 to perform the volume replication operation.
At a step 352, the file server 110 determines if it needs to perform a volume replication operation to synchronize with a second file server 140. In this case, the second file server 140 takes the role of the destination file system 120, and initiates the volume replication operation with regard to the first file server 110.
At a step 353, the file server 110 determines its newest master snapshot 210, and its master snapshot 210 corresponding to the master snapshot 210 indicated by the destination file system 120.
At a step 354, the file server 110 performs an incremental volume copying operation, responsive to the incremental difference between the selected corresponding master snapshot 210 and the newest master snapshot 210 it has available. The method 300 proceeds with the flow point 360.
At a step 355, the file server 110 performs a full volume copying operation, responsive to the newest master snapshot 210 it has available. The method 300 proceeds with the flow point 360.
At a flow point 360, the file server 110 has completed the volume replication operation. The destination file system 120 updates its master snapshot 210 to correspond to the master snapshot 210 that was used to make the file system transfer from the file server 110.
Alternative Embodiments
Although preferred embodiments are disclosed herein, many variations are possible which remain within the concept, scope, and spirit of the invention, and these variations would become clear to those skilled in the art after perusal of this application.
This is a continuation-in-part of application Ser. No. 09/127,497, filed Jul. 31, 1998, now U.S. Pat. No. 6,604,118. This is also a continuation-in-part of application Ser. No. 09/153,094, filed Sep. 14, 1998, now U.S. Pat. No. 6,289,356, which is a continuation of application Ser. No. 09/108,022, filed Jun. 30, 1998 (now U.S. Pat. No. 5,963,962), which is a continuation of application Ser. No. 08/454,921, filed May 31, 1995 (now U.S. Pat. No. 5,819,292), which is a continuation of application Ser. No. 08/071,643, filed Jun. 3, 1993 (now abandoned).
Number | Name | Date | Kind |
---|---|---|---|
3813529 | Bartlett | May 1974 | A |
3893024 | Reins et al. | Jul 1975 | A |
4075691 | Davis et al. | Feb 1978 | A |
4075704 | O'Leary | Feb 1978 | A |
4156907 | Rawlings | May 1979 | A |
4333144 | Whiteside | Jun 1982 | A |
4351023 | Richer | Sep 1982 | A |
4377843 | Garringer | Mar 1983 | A |
4399503 | Hawley | Aug 1983 | A |
4456957 | Schieltz | Jun 1984 | A |
4459664 | Pottier | Jul 1984 | A |
4488231 | Yu et al. | Dec 1984 | A |
4494188 | Nakane | Jan 1985 | A |
4527232 | Bechtolsheim | Jul 1985 | A |
4550368 | Bechtolsheim | Oct 1985 | A |
4589067 | Porter et al. | May 1986 | A |
4620292 | Hagiwara | Oct 1986 | A |
4685125 | Zave | Aug 1987 | A |
4710868 | Cocke et al. | Dec 1987 | A |
4719569 | Ludemann | Jan 1988 | A |
4742447 | Duvall et al. | May 1988 | A |
4742450 | Duvall et al. | May 1988 | A |
4761737 | Duvall et al. | Aug 1988 | A |
4761785 | Clark et al. | Aug 1988 | A |
4766534 | DeBenedictis | Aug 1988 | A |
4780821 | Crossley | Oct 1988 | A |
4783730 | Fischer | Nov 1988 | A |
4803621 | Kelly | Feb 1989 | A |
4814971 | Thatte | Mar 1989 | A |
4819159 | Shipley et al. | Apr 1989 | A |
4825354 | Agrawal et al. | Apr 1989 | A |
4827411 | Arrowood | May 1989 | A |
4845609 | Lighthart et al. | Jul 1989 | A |
4875159 | Cary et al. | Oct 1989 | A |
4878167 | Kapulka et al. | Oct 1989 | A |
4887204 | Johnson et al. | Dec 1989 | A |
4897781 | Chang et al. | Jan 1990 | A |
4914583 | Weisshaar | Apr 1990 | A |
4937763 | Mott | Jun 1990 | A |
4965772 | Daniel et al. | Oct 1990 | A |
4969118 | Montoye et al. | Nov 1990 | A |
4984272 | McIlroy et al. | Jan 1991 | A |
5001628 | Johnson et al. | Mar 1991 | A |
5001712 | Slipett et al. | Mar 1991 | A |
5008786 | Thatte | Apr 1991 | A |
5018144 | Corr et al. | May 1991 | A |
5043871 | Nishigaki | Aug 1991 | A |
5043876 | Terry | Aug 1991 | A |
5049873 | Robins et al. | Sep 1991 | A |
5067099 | McCown et al. | Nov 1991 | A |
5088081 | Farr | Feb 1992 | A |
5107500 | Wakamoto | Apr 1992 | A |
5113442 | Moir | May 1992 | A |
5134619 | Henson et al. | Jul 1992 | A |
5144659 | Jones | Sep 1992 | A |
5146588 | Crater et al. | Sep 1992 | A |
5155835 | Belsan | Oct 1992 | A |
5163131 | Row et al. | Nov 1992 | A |
5163148 | Walls | Nov 1992 | A |
5182805 | Campbell | Jan 1993 | A |
5195100 | Katz et al. | Mar 1993 | A |
5202983 | Orita et al. | Apr 1993 | A |
5208813 | Stallmo | May 1993 | A |
5218695 | Noveck et al. | Jun 1993 | A |
5218696 | Baird et al. | Jun 1993 | A |
5222217 | Blount et al. | Jun 1993 | A |
5235601 | Stallmo et al. | Aug 1993 | A |
5251308 | Frank | Oct 1993 | A |
5255270 | Yanai et al. | Oct 1993 | A |
5261044 | Dev et al. | Nov 1993 | A |
5261051 | Masden et al. | Nov 1993 | A |
5274799 | Brant et al. | Dec 1993 | A |
5274807 | Hoshen et al. | Dec 1993 | A |
5276840 | Yu | Jan 1994 | A |
5276867 | Kenley et al. | Jan 1994 | A |
5278838 | Ng et al. | Jan 1994 | A |
5283830 | Hinsley et al. | Feb 1994 | A |
5297265 | Frank et al. | Mar 1994 | A |
5305326 | Solomon et al. | Apr 1994 | A |
5313626 | Jones et al. | May 1994 | A |
5313646 | Hendricks | May 1994 | A |
5313647 | Kaufman | May 1994 | A |
5315602 | Noya et al. | May 1994 | A |
5317731 | Dias et al. | May 1994 | A |
5319780 | Catino et al. | Jun 1994 | A |
5333305 | Neufeld | Jul 1994 | A |
5335235 | Arnott | Aug 1994 | A |
5355453 | Row et al. | Oct 1994 | A |
5357509 | Ohizumi | Oct 1994 | A |
5357612 | Alaiwan | Oct 1994 | A |
5369757 | Spiro et al. | Nov 1994 | A |
5377196 | Godlew et al. | Dec 1994 | A |
5379417 | Lui et al. | Jan 1995 | A |
5430729 | Rahnema | Jul 1995 | A |
5448718 | Cohn et al. | Sep 1995 | A |
5452444 | Solomon et al. | Sep 1995 | A |
5454095 | Kraemer et al. | Sep 1995 | A |
5454099 | Myers et al. | Sep 1995 | A |
5463642 | Gibbs et al. | Oct 1995 | A |
5485455 | Dobbins et al. | Jan 1996 | A |
5490248 | Dan et al. | Feb 1996 | A |
5497343 | Rarick | Mar 1996 | A |
5502836 | Hale et al. | Mar 1996 | A |
5504883 | Coverston et al. | Apr 1996 | A |
5519844 | Stallmo | May 1996 | A |
5535375 | Eshel et al. | Jul 1996 | A |
5555244 | Gupta et al. | Sep 1996 | A |
5572711 | Hirsch et al. | Nov 1996 | A |
5574843 | Gerlach, Jr. | Nov 1996 | A |
5604862 | Midgely et al. | Feb 1997 | A |
5604868 | Komine et al. | Feb 1997 | A |
5617568 | Ault et al. | Apr 1997 | A |
5621663 | Skagerling | Apr 1997 | A |
5623666 | Pike et al. | Apr 1997 | A |
5627842 | Brown et al. | May 1997 | A |
5628005 | Hurvig | May 1997 | A |
5630060 | Tang et al. | May 1997 | A |
5634010 | Ciscon et al. | May 1997 | A |
5642501 | Doshi et al. | Jun 1997 | A |
5644718 | Belove et al. | Jul 1997 | A |
5649152 | Ohran et al. | Jul 1997 | A |
5649196 | Woodhill et al. | Jul 1997 | A |
5666353 | Klausmeier | Sep 1997 | A |
5668958 | Bendert et al. | Sep 1997 | A |
5673265 | Gupta et al. | Sep 1997 | A |
5675726 | Hohenstein et al. | Oct 1997 | A |
5675782 | Montague et al. | Oct 1997 | A |
5678006 | Valizadeh | Oct 1997 | A |
5678007 | Hurvig | Oct 1997 | A |
5689701 | Ault et al. | Nov 1997 | A |
5694163 | Harrison | Dec 1997 | A |
5696486 | Poliquin et al. | Dec 1997 | A |
5701480 | Raz | Dec 1997 | A |
5720029 | Kern et al. | Feb 1998 | A |
5721916 | Pardikar | Feb 1998 | A |
5737523 | Callaghan et al. | Apr 1998 | A |
5737744 | Callison et al. | Apr 1998 | A |
5740367 | Spilo | Apr 1998 | A |
5742752 | DeKoning | Apr 1998 | A |
5754851 | Wissner | May 1998 | A |
5758347 | Lo et al. | May 1998 | A |
5761407 | Benson et al. | Jun 1998 | A |
5761669 | Montague et al. | Jun 1998 | A |
5819292 | Hitz et al. | Oct 1998 | A |
5819310 | Vishlitzky | Oct 1998 | A |
5825877 | Dan et al. | Oct 1998 | A |
5826102 | Escobar et al. | Oct 1998 | A |
5828839 | Moncreiff | Oct 1998 | A |
5828876 | Fish et al. | Oct 1998 | A |
5835953 | Ohran | Nov 1998 | A |
5854893 | Ludwig et al. | Dec 1998 | A |
5854903 | Morrison et al. | Dec 1998 | A |
5856981 | Voelker | Jan 1999 | A |
5857207 | Lo et al. | Jan 1999 | A |
5870764 | Lo et al. | Feb 1999 | A |
5873101 | Klein | Feb 1999 | A |
5875444 | Hughes | Feb 1999 | A |
5876278 | Cheng | Mar 1999 | A |
5890959 | Pettit et al. | Apr 1999 | A |
5907672 | Matze et al. | May 1999 | A |
5915087 | Hammond et al. | Jun 1999 | A |
5931935 | Cabrera et al. | Aug 1999 | A |
5948110 | Hitz et al. | Sep 1999 | A |
5950225 | Kleiman | Sep 1999 | A |
5956491 | Marks | Sep 1999 | A |
5956712 | Bennett et al. | Sep 1999 | A |
5957612 | Bradley | Sep 1999 | A |
5963962 | Hitz et al. | Oct 1999 | A |
5983364 | Bortcosh et al. | Nov 1999 | A |
5996086 | Delaney et al. | Nov 1999 | A |
5996106 | Seyyedy | Nov 1999 | A |
5999943 | Nori et al. | Dec 1999 | A |
6000039 | Tanaka et al. | Dec 1999 | A |
6026402 | Vossen et al. | Feb 2000 | A |
6044214 | Kimura et al. | Mar 2000 | A |
6067541 | Raju et al. | May 2000 | A |
6070008 | Korenshtein | May 2000 | A |
6073089 | Baker et al. | Jun 2000 | A |
6073255 | Nouri et al. | Jun 2000 | A |
6076148 | Kedem | Jun 2000 | A |
6078932 | Haye et al. | Jun 2000 | A |
6085234 | Pitts et al. | Jul 2000 | A |
6088694 | Burns et al. | Jul 2000 | A |
6101507 | Cane et al. | Aug 2000 | A |
6101585 | Brown | Aug 2000 | A |
H1860 | Asthana et al. | Sep 2000 | H |
6119244 | Schoenthal | Sep 2000 | A |
6145121 | Levy et al. | Nov 2000 | A |
6205450 | Kanome | Mar 2001 | B1 |
6223306 | Silva et al. | Apr 2001 | B1 |
6289356 | Hitz et al. | Sep 2001 | B1 |
6574591 | Kleiman et al. | Jun 2003 | B1 |
6721764 | Hitz et al. | Apr 2004 | B2 |
6892211 | Hitz et al. | May 2005 | B2 |
20010044807 | Kleiman et al. | Nov 2001 | A1 |
20020049718 | Kleiman et al. | Apr 2002 | A1 |
20020059172 | Muhlestein | May 2002 | A1 |
20020091670 | Hitz et al. | Jul 2002 | A1 |
20030217082 | Kleiman | Nov 2003 | A1 |
20040260673 | Hitz et al. | Dec 2004 | A1 |
Number | Date | Country |
---|---|---|
2165912 | May 2004 | CA |
69425658 | Aug 2000 | DE |
0306244 | Mar 1989 | EP |
0308056 | Mar 1989 | EP |
0308506 | Mar 1989 | EP |
0321723 | Jun 1989 | EP |
0359384 | Mar 1990 | EP |
0410630 | Jan 1991 | EP |
0410630 | Jan 1991 | EP |
0453193 | Oct 1991 | EP |
0462917 | Dec 1991 | EP |
0477039 | Mar 1992 | EP |
0492808 | Jul 1992 | EP |
0497067 | Aug 1992 | EP |
0559488 | Sep 1992 | EP |
0537098 | Apr 1993 | EP |
0537198 | Apr 1993 | EP |
0552580 | Jul 1993 | EP |
0566967 | Oct 1993 | EP |
0569313 | Nov 1993 | EP |
0629956 | Dec 1994 | EP |
0747829 | Dec 1996 | EP |
0756235 | Jan 1997 | EP |
0760503 | Mar 1997 | EP |
0767431 | Apr 1997 | EP |
1003103 | May 2000 | EP |
0702815 | Aug 2000 | EP |
1031928 | Aug 2000 | EP |
1099165 | Sep 2004 | EP |
1-273395 | Nov 1989 | JP |
WO 8903086 | Apr 1989 | WO |
WO 9113404 | Sep 1991 | WO |
WO 9200834 | Jan 1992 | WO |
WO 9300632 | Jan 1993 | WO |
WO 9313475 | Jul 1993 | WO |
WO 9429795 | Dec 1994 | WO |
WO 9429796 | Dec 1994 | WO |
WO 9429807 | Dec 1994 | WO |
WO 9821656 | May 1998 | WO |
WO 9838576 | Sep 1998 | WO |
WO 9930254 | Jun 1999 | WO |
WO 9945456 | Sep 1999 | WO |
WO 9946680 | Sep 1999 | WO |
WO 9966401 | Dec 1999 | WO |
WO 0007104 | Feb 2000 | WO |
WO 0011553 | Mar 2000 | WO |
WO 0068795 | Nov 2000 | WO |
WO 0131446 | May 2001 | WO |
WO 0221281 | Mar 2002 | WO |
WO 0229573 | Apr 2002 | WO |
WO 0244862 | Jun 2002 | WO |
Number | Date | Country | |
---|---|---|---|
20020049718 A1 | Apr 2002 | US |
Number | Date | Country | |
---|---|---|---|
Parent | 09108022 | Jun 1998 | US |
Child | 09153094 | US | |
Parent | 08454921 | May 1995 | US |
Child | 09108022 | US | |
Parent | 08071643 | Jun 1993 | US |
Child | 08454921 | US |
Number | Date | Country | |
---|---|---|---|
Parent | 09127497 | Jul 1998 | US |
Child | 09854187 | US | |
Parent | 09153094 | Sep 1998 | US |
Child | 09127497 | US |