1. Field of the Invention
The present invention is related to the field of error correction techniques for an array of disks.
2. Background Art
A computer system typically requires large amounts of secondary memory, such as a disk drive, to store information (e.g. data and/or application programs). Prior art computer systems often use a single “Winchester” style hard disk drive to provide permanent storage of large amounts of data. As the performance of computers and associated processors has increased, the need for disk drives of larger capacity, and capable of high speed data transfer rates, has increased. To keep pace, changes and improvements in disk drive performance have been made. For example, data and track density increases, media improvements, and a greater number of heads and disks in a single disk drive have resulted in higher data transfer rates.
A disadvantage of using a single disk drive to provide secondary storage is the expense of replacing the drive when greater capacity or performance is required. Another disadvantage is the lack of redundancy or back up to a single disk drive. When a single disk drive is damaged, inoperable, or replaced, the system is shut down.
One prior art attempt to reduce or eliminate the above disadvantages of single disk drive systems is to use a plurality of drives coupled together in parallel. Data is broken into chunks that may be accessed simultaneously from multiple drives in parallel, or sequentially from a single drive of the plurality of drives. One such system of combining disk drives in parallel is known as “redundant array of inexpensive disks” (RAID). A RAID system provides the same storage capacity as a larger single disk drive system, but at a lower cost. Similarly, high data transfer rates can be achieved due to the parallelism of the array.
RAID systems allow incremental increases in storage capacity through the addition of additional disk drives to the array. When a disk crashes in the RAID system, it may be replaced without shutting down the entire system. Data on a crashed disk may be recovered using error correction techniques.
RAID Arrays
RAID has six disk array configurations referred to as RAID level 0 through RAID level 5. Each RAID level has advantages and disadvantages. In the present discussion, only RAID levels 4 and 5 are described. However, a detailed description of the different RAID levels is disclosed by Patterson, et al. in A Case for Redundant Arrays of Inexpensive Disks (RAID), ACM SIGMOD Conference, June 1988. This article is incorporated by reference herein.
RAID systems provide techniques for protecting against disk failure. Although RAID encompasses a number of different formats (as indicated above), a common feature is that a disk (or several disks) stores parity information for data stored in the array of disks. A RAID level 4 system stores all the parity information on a single parity disk, whereas a RAID level 5 system stores parity blocks throughout the RAID array according to a known pattern. In the case of a disk failure, the parity information stored in the RAID subsystem allows the lost data from a failed disk to be recalculated.
As shown in the corresponding figure, a RAID level 4 system comprises N data disks and a single dedicated parity disk; every parity block for the array is stored on that one parity disk, so all parity updates contend for the same drive.
RAID level 5 array systems also record parity information. However, they do not keep all of the parity sectors on a single drive. RAID level 5 rotates the position of the parity blocks through the available disks in the disk array of N+1 disks. Thus, RAID level 5 systems improve on RAID level 4 performance by spreading parity data across all N+1 disk drives in rotation, one block at a time. For the first set of blocks, the parity block might be stored on the first drive. For the second set of blocks, it would be stored on the second disk drive. This is repeated so that each set has a parity block, but not all of the parity information is stored on a single disk drive. In RAID level 5 systems, because no single disk holds all of the parity information for a group of blocks, it is often possible to write to several different drives in the array at one instant. Thus, both reads and writes are performed more quickly on RAID level 5 systems than on a RAID level 4 array.
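To make the rotation concrete, the following sketch (in Python) shows one simple way an array of N+1 disks might place its parity blocks. The specific pattern used here, parity on disk number (stripe mod N+1), is an illustrative assumption; the text states only that the parity position rotates through the disks.

    # Illustrative sketch only: one possible RAID level 5 parity rotation.
    # The modulo placement below is an assumption, not taken from the text.

    N = 4  # number of data blocks per stripe; the array has N + 1 disks

    def parity_disk_for_stripe(stripe_number, n_disks=N + 1):
        """Return the index of the disk holding parity for the given stripe."""
        return stripe_number % n_disks

    def layout(stripe_number, n_disks=N + 1):
        """Describe what each disk stores for this stripe."""
        p = parity_disk_for_stripe(stripe_number, n_disks)
        return ["parity" if d == p else "data" for d in range(n_disks)]

    if __name__ == "__main__":
        for s in range(5):
            print("stripe", s, "->", layout(s))

With such a rotation, consecutive stripe updates tend to place their parity writes on different drives, which is the property the RAID level 5 description above relies on.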
In RAID level 5, parity is distributed across the array of disks. This leads to multiple seeks across the disk. It also inhibits simple increases to the size of the RAID array since a fixed number of disks must be added to the system due to parity requirements.
The prior art systems for implementing RAID levels 4 and 5 have several disadvantages. The first disadvantage is that, after a system failure, the parity information for a stripe may be inconsistent with the data blocks stored on the other disks in the stripe. Because there is no way of knowing which parity blocks are incorrect, all the parity blocks in the RAID array must be recalculated. Recalculating parity for the entire RAID array is highly time consuming, since all of the data stored in the RAID array must be read. For example, reading an entire 2 GB disk at maximum speed takes 15 to 20 minutes. However, since few computer systems are able to read many disks in parallel at maximum speed, recalculating parity for a RAID array takes even longer.
One technique for hiding the time required to recompute parity for the RAID array is to allow access to the RAID array immediately and recalculate parity while the system is on-line. However, this technique suffers from two problems. The first problem is that, while parity is being recomputed, blocks having inconsistent parity are not protected from further corruption. During this time, a disk failure in the RAID array results in permanently lost data. The second problem is that RAID subsystems perform poorly while calculating parity, due to the additional input/output (I/O) operations required to recompute it.
The second disadvantage of the prior art systems involves writes to the RAID array during a period when a disk is not functioning. Because a RAID subsystem can recalculate data on a malfunctioning disk using parity information, the RAID subsystem allows data to continue being read even though the disk is malfunctioning. Further, many RAID systems allow writes to continue although a disk is malfunctioning. This is disadvantageous since writing to a broken RAID array can corrupt data in the case of a system failure. For example, a system failure occurs when an operating system using the RAID array crashes or when power to the system fails or is otherwise interrupted. Prior art RAID subsystems do not provide protection for this sequence of events.
The present invention is a method for providing error correction for an array of disks using non-volatile random access memory (NV-RAM).
Non-volatile RAM is used to increase the speed of RAID recovery from disk error(s). This is accomplished by keeping a list of all disk blocks for which the parity is possibly inconsistent. Such a list of disk blocks is smaller than the total number of parity blocks in the RAID subsystem. The total number of parity blocks in the RAID subsystem is typically in the range of hundreds of thousands of parity blocks. Knowledge of the number of parity blocks that are possibly inconsistent makes it possible to fix only those few blocks, identified in the list, in a significantly smaller amount of time than is possible in the prior art. The present invention also provides a technique of protecting against simultaneous system failure and a broken disk and of safely writing to a RAID subsystem with one broken disk.
A method and apparatus for providing error correction for an array of disks using non-volatile random access memory (NV-RAM) is described. In the following description, numerous specific details, such as number and nature of disks, disk block sizes, etc., are described in detail in order to provide a more thorough description of the present invention. It will be apparent, however, to one skilled in the art, that the present invention may be practiced without these specific details. In other instances, well-known features have not been described in detail so as not to unnecessarily obscure the present invention.
In particular, many examples consider the case where only one block in a stripe is being updated, but the techniques described apply equally well to multi-block updates.
The present invention provides a technique for: reducing the time required for recalculating parity after a system failure; and, preventing corruption of data in a RAID array when data is written to a malfunctioning disk and the system crashes. The present invention uses non-volatile RAM to reduce these problems. A description of the prior art and its corresponding disadvantages follows. The disadvantages of the prior art are described for: parity corruption on a system failure; data corruption on write with broken disk; and, data corruption with simultaneous system and disk failures.
Recomputing Lost Data with RAID
Parity is computed by Exclusive-ORing the data blocks stored in a stripe. The parity value computed from the N data blocks is recorded in the parity block of the stripe. When data from any single block is lost (i.e., due to a disk failure), the lost data for the disk is recalculated by Exclusive-ORing the remaining blocks in the stripe. In general, whenever a data block in a stripe is modified, parity must be recomputed for the stripe. When updating a stripe by writing all N data blocks, parity can be computed without reading any data from disk, and parity and data can be written together in just one I/O cycle. Thus, writing to all N data blocks in a stripe requires a minimum amount of time. When writing a single data block to disk, parity-by-subtraction is used (described below). One I/O cycle is required to read the old data and parity, and a second I/O cycle is required to write the new data and parity. Because the spindles of the disks in the RAID array are not synchronized, the writes do not generally occur at exactly the same time. In some cases, the parity block will reach the disk first, and in other cases, one of the data blocks will reach the disk first. The techniques described here do not depend on the order in which blocks reach the disk.
Another alternative for disks having non-synchronized spindles is for parity to be computed first and the parity block written to disk before the data block(s) are written to disk. Each data block on a disk in the RAID array stores 4 KB of data. In the following discussion, the data in each 4 KB block is viewed as a single, large integer (32 K-bits long). Thus, the drawings depict integer values for information stored in the parity and data disk blocks. This convention is used for illustration only in order to simplify the drawings.
Data 1=Parity−Data 0−Data 2=12−4−1=7, (1)
where data block 1 is computed using the parity block, data block 0 and data block 2. Thus, the data value 7 stored in data block 1 of disk 334, shown in the corresponding figure, is recovered by subtracting the on-disk values of the remaining data blocks from the parity block.
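As an illustration of the relationship described above, the following Python sketch computes a stripe's parity and recovers a single lost block. The figures use small integers and addition/subtraction purely for convenience; on real 4 KB blocks the operation is a bytewise Exclusive-OR, which acts as both the "addition" and the "subtraction". The function names are illustrative, not taken from the patent.

    # Parity is the XOR of the data blocks; a lost block is the XOR of the
    # parity block with the surviving data blocks.

    BLOCK_SIZE = 4096  # 4 KB blocks, as in the text

    def xor_blocks(blocks):
        """Exclusive-OR a list of equal-sized byte blocks together."""
        result = bytearray(len(blocks[0]))
        for block in blocks:
            for i, b in enumerate(block):
                result[i] ^= b
        return bytes(result)

    def compute_parity(data_blocks):
        """Parity block = XOR of the N data blocks in the stripe."""
        return xor_blocks(data_blocks)

    def reconstruct_lost_block(parity, surviving_data_blocks):
        """Recover a single lost data block from parity and the remaining blocks."""
        return xor_blocks([parity] + list(surviving_data_blocks))

    if __name__ == "__main__":
        d0 = bytes([4] * BLOCK_SIZE)
        d1 = bytes([7] * BLOCK_SIZE)
        d2 = bytes([1] * BLOCK_SIZE)
        parity = compute_parity([d0, d1, d2])
        assert reconstruct_lost_block(parity, [d0, d2]) == d1  # "data disk 1" recovered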
As shown in the corresponding figure, the parity block of the stripe holds the value 12, the sum of the values 4, 7, and 1 stored in data blocks 0 through 2.
When new data is written to a data block, the parity block is also updated. Parity is easily computed, as described above, when all data blocks in a stripe are being updated at once. When this occurs, the new value for parity is recalculated from the information being written to the disks. The new parity and data blocks are then written to disk. When only some of the data blocks in a stripe are modified, updating the parity block is more difficult since more I/O operations are required. There are two methods for updating parity in this case: parity update by subtraction; and, parity update by recalculation.
For example, when a single data block is written, the RAID system can update parity by subtraction. The RAID system reads the parity block and the block to be overwritten. It first subtracts the old data value from the parity value, adds the new data value of the data block to the intermediate parity value, and then writes both the new parity and data blocks to disk.
For recalculation of parity, the RAID system first reads the other N−1 data blocks in the stripe. After reading the N−1 data blocks, the RAID system recalculates parity from scratch using the modified data block and the N−1 data blocks from disk. Once parity is recalculated, the new parity and data blocks are written to disk.
Both the subtraction and recalculation techniques for updating parity can be generalized to situations where more than one data block is being written to the same stripe. For subtraction, the parity block and the current contents of all data blocks that are about to be overwritten are first read from disk. For recalculation, the current contents of all data blocks that are not about to be overwritten are first read from disk. The instance where all N data blocks in the stripe are written simultaneously is a degenerate case of parity by recalculation: all data blocks that are not being written are first read from disk, but in this instance there are no such blocks.
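The following Python sketch contrasts the two update methods. The disk_read/disk_write interface and the in-memory "disks" dictionary are stand-ins introduced for illustration; XOR again plays the role of both addition and subtraction.

    def xor(*blocks):
        out = bytearray(len(blocks[0]))
        for blk in blocks:
            for i, b in enumerate(blk):
                out[i] ^= b
        return bytes(out)

    def update_by_subtraction(disk_read, disk_write, changes):
        """changes maps data-disk index -> new block; old data and parity are read first."""
        parity = disk_read("parity")
        for idx, new_block in changes.items():
            parity = xor(parity, disk_read(idx), new_block)  # subtract old, add new
        for idx, new_block in changes.items():
            disk_write(idx, new_block)
        disk_write("parity", parity)

    def update_by_recalculation(disk_read, disk_write, changes, n_data_disks):
        """Read the data blocks that are NOT being written, then recompute parity."""
        untouched = [disk_read(i) for i in range(n_data_disks) if i not in changes]
        parity = xor(*(list(changes.values()) + untouched))
        for idx, new_block in changes.items():
            disk_write(idx, new_block)
        disk_write("parity", parity)

    if __name__ == "__main__":
        # Tiny in-memory "array": three data blocks plus parity, 8-byte blocks.
        disks = {0: bytes([4] * 8), 1: bytes([7] * 8), 2: bytes([1] * 8)}
        disks["parity"] = xor(disks[0], disks[1], disks[2])
        update_by_subtraction(disks.__getitem__, disks.__setitem__, {0: bytes([2] * 8)})
        assert disks["parity"] == xor(disks[0], disks[1], disks[2])  # still consistent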
How Stripes Become Inconsistent During System Failure
An inconsistent stripe comprises a parity block that does not contain the Exclusive-OR of all other blocks in the stripe. A stripe becomes inconsistent when a system failure occurs while some of the writes for an update have been completed but others have not. For example, consider the case where a first data block is being overwritten. As previously described, the parity block for the stripe is recomputed and overwritten as well as the data block. When the system fails after one of these blocks has been written to disk, but not the other, the stripe becomes inconsistent.
A stripe can only become inconsistent when it is being updated. Thus, the number of potentially inconsistent stripes at any instant is limited to the number of stripes that are being updated. For this reason, the present invention maintains a list in NV-RAM comprising all the stripes that are currently being updated. Since only these stripes can potentially be corrupted, parity is recalculated after a system failure for only the stripes stored in the list in NV-RAM. This greatly reduces the total amount of time required for recalculating parity after a system failure in comparison to the prior art methods described previously.
Parity Corruption on a System Failure in the Prior Art
In the following diagrams, the value indicated within parentheses for a malfunctioning data disk is not an actual value stored on disk. Instead, it is a calculated value retained in memory for the broken disk in the RAID array.
When a system failure occurs between time TB and TC in the corresponding figure, the new data block has reached disk but the parity block has not. The stripe is then inconsistent, because the parity value of 12 no longer equals the sum of the data blocks:
Parity=Data 0+Data 1+Data 2=2+7+1=10≠12. (2)
Similarly, the corresponding figure illustrates the case where the parity block reaches disk before the data block. When a system failure occurs between time TB and TC, the parity block has been updated to 10 but data block 0 still holds its old value of 4, so the stripe is again inconsistent:
Parity=Data 0+Data 1+Data 2=4+7+1=12≠10. (3)
In the prior art, after a system failure, parity is recalculated for all of the stripes when the system restarts. Recalculating parity for all stripes requires intensive calculation and is therefore very slow. The present invention instead maintains, in non-volatile RAM, a list of the stripes that have writes in progress. Upon restarting after a system failure, parity is recalculated only for the stripes recorded in that list.
Data Corruption on Write with Broken Disk in the Prior Art
When writing to a RAID array that has a malfunctioning or broken disk, data corruption can occur if a system failure also occurs. In the corresponding figure, data disk 1 is broken at time TA; its value is not stored on disk but is computed from the parity block and the remaining data blocks as:
Data 1=Parity−Data 0−Data 2=12−4−1=7. (4)
At time TB, a new value of 2 is written to data disk 0 (indicated by enclosing 2 within a box). At this point, parity has not yet been updated for the new value written to data disk 0 and still has a value of 12. Thus, the computed value for data block 1 is 9 instead of 7, which is indicated in the corresponding figure by the value 9 within parentheses for data disk 1.
When operating normally at time TC, the parity block is updated to 10 due to the value of 2 written to data block 0 at time TB. The new value of 10 for parity at time TC is indicated within a rectangle. For a parity value of 10, the correct value of 7 for data block 1 is indicated within parentheses. As indicated in the corresponding figure, the stripe is once again consistent at time TC.
When a system failure occurs between times TB and TC, writing to a RAID array that has a malfunctioning or broken disk corrupts data in the stripe. As shown in the corresponding figure, the parity block still holds its old value of 12 while data block 0 holds the new value of 2, so the computed value for the broken data disk 1 is corrupted:
Data 1=Parity−Data 0−Data 2=12−2−1=9≠7. (5)
Similar corruption of data occurs for the case where parity reaches disk before data does.
RAID systems are most likely to experience a disk failure when a system failure occurs due to power interruption. Commonly, a large, transient voltage spike occurring after power interruption damages a disk. Thus, it is possible for a stripe to be corrupted by simultaneous system and disk failures.
In the corresponding figure, a new value of 2 is written to data disk 0 at time TB, and the system and data disk 1 fail before the parity block can be updated. At time TC, parity has not been updated due to the system failure and therefore still has a value of 12 instead of 10. Further, data disk 1 is broken due to the disk failure. The computed value of 9 for data block 1 is incorrect; it is computed for data disk 1 using the stale parity value as follows:
Data 1=Parity−Data 0−Data 2=12−2−1=9≠7. (7)
Data is similarly corrupted for the case where parity reaches disk before data.
Overview of the Present Invention
NV-RAM 816 is used to increase the speed of RAID recovery after a system failure by maintaining a list of all parity blocks stored on parity disk 820 that are potentially inconsistent. Typically, this list of blocks is small. It may be several orders of magnitude smaller than the total number of parity blocks in the RAID array 828. For example, a RAID array 828 may comprise hundreds of thousands of parity blocks while the potentially inconsistent blocks may number only several hundred or less. Knowledge of the few parity blocks that are potentially inconsistent facilitates rapid recalculation of parity, since only those parity blocks have to be restored.
The present invention also uses NV-RAM 816 to safely write data to a RAID array 828 having a broken disk without corrupting data due to a system failure. Data that can be corrupted is copied into NV-RAM 816 before a potentially corrupting operation is performed. After a system failure, the data stored in NV-RAM 816 is used to recover the RAID array 828 into a consistent state.
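One way to picture the NV-RAM contents (the concrete layout below is an assumption made for illustration; the patent does not specify these structures) is a small table of per-stripe records: the stripe number plus copies of any blocks that were read from disk for the update in progress.

    from dataclasses import dataclass, field
    from typing import Dict, Optional

    @dataclass
    class StripeUpdateRecord:
        """Hypothetical NV-RAM entry for one stripe whose update is in progress."""
        stripe_number: int
        saved_parity: Optional[bytes] = None                         # old parity, if it was read
        saved_data: Dict[int, bytes] = field(default_factory=dict)   # disk index -> old block

    # The "list of stripes with writes in progress" is then a small table of
    # such records, indexed by stripe number, held in non-volatile memory.
    nvram_in_progress: Dict[int, StripeUpdateRecord] = {}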
Referring now to the flow diagram of the corresponding figure, decision block 1101 determines on start-up whether a system failure has occurred. When a failure has occurred, decision block 1104 checks whether any stripes remain in the NV-RAM list and, for each such stripe, processing continues as follows.
At step 1105, the stripe number is obtained. At step 1106, the data blocks of the identified stripe required to recompute parity are read. Parity is recomputed for the stripe at step 1107. At step 1108, the new parity block for the stripe is written. The system then returns to decision block 1104.
Normal operation is illustrated in the corresponding figure: the blocks needed to compute the new parity are read (step 1109), the new parity is computed (step 1110), the stripe number and any block copies needed for recovery are recorded in NV-RAM (step 1111), the new data and parity blocks are written to disk (step 1112), and the NV-RAM entry for the stripe is cleared (step 1113).
Parity Corruption for a System Failure Using NV-RAM
At time TC, in step 1112, the new data value of 2 is written (indicated by a box around the value 2) to data block 0, thereby replacing the value of 4 that is stored in data block 0 at time TB. The other values stored in data blocks 1 and 2 do not change. First, consider the normal case where the system does not fail. The present invention writes a new parity value of 10 (indicated by a box under the parity heading) at time TD in step 1112. This updates the parity block for the write to data block 0 at time TC. At time TE, in step 1113, the stripe number in NV-RAM is cleared. Thus, the stripe comprising the blocks for the parity disk and data disks 0–2 has values of 10, 2, 7, and 1, respectively.
Next, consider the case where the system does fail between times TC and TD (between steps 1111 and 1113). The system reboots and begins execution at START in the flow diagram of the corresponding figure.
Because a system fault has occurred, decision block 1101 returns true (Yes) when the system reboots. At time TD, the stripe has a value of 12 (indicated by an underline) for parity and values for data disks 0–2 of 2, 7, and 1, respectively. As illustrated in the corresponding figure, the stripe is inconsistent because the parity block does not equal the sum of the data blocks:
Parity=Data 0+Data 1+Data 2=2+7+1=10≠12. (9)
However, the stripe can be recovered to a consistent state. NV-RAM includes an indication of the stripes that are candidates for recovery, i.e. a list of stripes that are being updated. Everything but the parity value is available on disk (the “2” having been written to disk at time TC). The data values for the stripe are read from disk and a new parity value of 10 is calculated.
Parity=Data 0+Data 1+Data 2=2+7+1=10. (10)
Thus, the newly calculated parity value of 10 is written to the parity disk in step 1108 at time TD, and the stripe is no longer corrupt.
The following is an example of pseudo code that describes the operation of the corresponding flow diagram.
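The pseudo code itself is not reproduced above. The following Python sketch is a hedged reconstruction of the write path (steps 1109 through 1113) and the start-up recovery loop (decision block 1104 and steps 1105 through 1108); the in-memory dictionaries and the set used for the NV-RAM list are stand-ins, not the patent's actual interfaces.

    def xor(*blocks):
        out = bytearray(len(blocks[0]))
        for blk in blocks:
            for i, b in enumerate(blk):
                out[i] ^= b
        return bytes(out)

    def write_stripe(array, nvram_list, stripe_no, new_blocks):
        """array: {stripe_no: {"data": [blocks], "parity": block}}; new_blocks: {index: block}."""
        stripe = array[stripe_no]
        updated = list(stripe["data"])                 # step 1109: read blocks needed
        for idx, blk in new_blocks.items():
            updated[idx] = blk
        new_parity = xor(*updated)                     # step 1110: compute new parity
        nvram_list.add(stripe_no)                      # step 1111: record stripe in NV-RAM
        for idx, blk in new_blocks.items():            # step 1112: write data and parity
            stripe["data"][idx] = blk
        stripe["parity"] = new_parity
        nvram_list.discard(stripe_no)                  # step 1113: clear the NV-RAM entry

    def recover_after_failure(array, nvram_list):
        """Steps 1104-1108: only stripes recorded in NV-RAM have parity recomputed."""
        for stripe_no in list(nvram_list):
            stripe = array[stripe_no]
            stripe["parity"] = xor(*stripe["data"])    # steps 1106-1108
        nvram_list.clear()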
After a system failure, a part of the start-up procedure of the present invention examines the list of stripes recorded in NV-RAM and recalculates parity only for those stripes.
The previous section describes a technique in which a list of potentially corrupted stripes is kept in NV-RAM so that on reboot after a system failure, only the stripes in the list need to have their parity blocks recalculated. An alternate embodiment of the present invention uses a bitmap in NV-RAM to indicate the potentially corrupted stripes whose parity blocks must be recalculated after a system failure.
This technique uses a bitmap in which each bit represents a group of one or more stripes. A typical disk array might have 250,000 stripes. If each entry in the bitmap represents a single stripe, the bitmap will be about 32 KB. Letting each bit represent a group of 32 adjacent stripes reduces the size to 1 KB.
After a system failure, this technique is essentially identical to the "list of stripes" technique, except that the bitmap is used to determine which stripes need parity recalculation instead of the list. All stripes in groups whose bit is set in the bitmap have their parity recalculated.
Managing the bitmap during normal operation is slightly different than managing the list. It is no longer possible to clear a stripe's entry as soon as the update is complete, because a single bit can indicate activity in more than one stripe. One stripe's update may be done, but another stripe sharing the same bit may still be active.
Instead, the appropriate bit for a stripe is set just before the stripe is updated, but it is not cleared after the update is complete. Periodically, when the bitmap has accumulated too many entries, all blocks are flushed to disk, ensuring that there can be no inconsistent stripes, and the entire bitmap is cleared. The following pseudo-code implements this:
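The pseudo-code referenced above is likewise not reproduced; the sketch below is a hedged Python reconstruction of the bitmap management. With one bit per 32 adjacent stripes, a 250,000-stripe array needs roughly 250,000 / 32 / 8 bytes, about 1 KB, matching the figure given above. The flush_all_dirty_blocks and flush_disk_caches callbacks are placeholders for operations the text assumes the subsystem can perform, and the threshold value is illustrative.

    STRIPES_PER_BIT = 32
    MAX_SET_BITS = 2500        # "too many entries" threshold; value is illustrative

    class StripeBitmap:
        def __init__(self, n_stripes):
            n_groups = (n_stripes + STRIPES_PER_BIT - 1) // STRIPES_PER_BIT
            self.bits = bytearray((n_groups + 7) // 8)
            self.set_count = 0

        def _pos(self, stripe_no):
            group = stripe_no // STRIPES_PER_BIT
            return group // 8, 1 << (group % 8)

        def mark(self, stripe_no):
            byte, mask = self._pos(stripe_no)
            if not self.bits[byte] & mask:
                self.bits[byte] |= mask
                self.set_count += 1

        def is_marked(self, stripe_no):
            byte, mask = self._pos(stripe_no)
            return bool(self.bits[byte] & mask)

    def begin_stripe_update(bitmap, stripe_no, flush_all_dirty_blocks, flush_disk_caches):
        if bitmap.set_count > MAX_SET_BITS:
            # Periodically flush everything so that no stripe can be inconsistent,
            # then clear the whole bitmap at once.
            flush_all_dirty_blocks()
            flush_disk_caches()          # needed when drives cache writes internally
            bitmap.bits = bytearray(len(bitmap.bits))
            bitmap.set_count = 0
        # Set the bit just before the stripe update begins; it is NOT cleared when
        # this individual update completes, since other stripes may share the bit.
        bitmap.mark(stripe_no)

    def stripes_to_recover(bitmap, n_stripes):
        """After a failure, every stripe in a marked group has its parity recomputed."""
        return [s for s in range(n_stripes) if bitmap.is_marked(s)]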
In case of system failure, the bitmap technique results in more stripes to clean than the list, but the savings are still considerable compared with recomputing parity for all stripes in the system. A typical RAID system has 250,000 stripes, so even if 2,500 potentially-corrupted stripes are referenced in the bitmap, that is just 1% of the stripes in the system.
The bitmap technique is especially useful with write-caching disks which don't guarantee that data will reach disk in the case of power failure. Such disks may hold data in RAM for some period before actually writing it. This means that parity corruption is still a possibility even after the stripe update phase has completed. The list technique would not work, because the stripe's parity is still potentially corrupted even though the stripe has been removed from the list.
Thus, using the bitmap technique and instructing each disk to flush its internal cache at the same time that the bitmap is cleared allows the invention to work in combination with write-caching disk drives.
Data Corruption on Write with Broken Disk Using NV-RAM
The present invention solves this problem for data corruption on occurrence of a write with a malfunctioning disk by saving data from the broken disk in non-volatile RAM.
At time TB, the value of 7 for the malfunctioning data disk 1 is written into NV-RAM according to step 1109. This value is indicated by a rectangular box in the corresponding figure.
At time TC, a new value of 2 (indicated by a box) for data disk 0 is written to the disk before parity for the stripe is updated according to step 1112. Therefore, at time TC, the computed value for data disk 1 is 9 and is indicated within parentheses accordingly. In the normal case, where the system does not fail, a new parity value of 10 is written to disk at time TD, and the computed value of disk 1 becomes 7 again, which is correct. When a system failure occurs between times TC and TD, the parity can nevertheless be updated correctly using the value stored in NV-RAM together with the value of 2 written to data disk 0 at time TC.
The parity is correctly updated at time TD by reading the values of all functioning data disks from disk according to step 1106, substituting the value stored in NV-RAM for the broken disk, and recalculating parity as follows:
Parity=Data 0+NV-RAM+Data 2=2+7+1=10. (12)
Thus, a correct value of 10 is computed for parity when the present invention restarts after a system crash. In step 1108, the value of 10 is written to the parity disk at time TD, thus returning the computed value of data disk 1 to 7, which is correct. At time TE, NV-RAM is cleared in step 1113. Thus, the present invention uses NV-RAM to prevent data from being corrupted by a system fault when a disk is malfunctioning.
At time TB, the value of 7 for the malfunctioning data disk 1 is written into NV-RAM according to step 1109. This value is indicated by a rectangular box in the corresponding figure.
At time TC, a new value of 10 (indicated by a box) for parity is written to the parity disk in step 1108 before data block 0 is updated. Therefore, at time TC, the computed value for data disk 1 is 5 and is indicated within parentheses accordingly. When a system failure occurs between times TC and TD, a new parity value is updated correctly for the parity disk using NV-RAM. At decision block 1101, after the system reboots, a check is made whether a system failure occurred. The decision block accordingly returns true (Yes) in the present example, and processing continues at step 1104.
Parity is correctly updated at time TD by recalculating its value as follows:
Parity=NV-data for broken disk+on-disk data for all non-broken disks=7+4+1=12. (13)
Thus, as shown in the corresponding figure, the recalculated parity value of 12 is written to the parity disk at time TD, and the computed value for data disk 1 is restored to 7.
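A hedged sketch of the recovery rule illustrated in both orderings above: before the stripe is updated, the broken disk's computed contents are saved in NV-RAM, and after a crash the parity block is rebuilt from that saved value together with the on-disk values of the non-broken data disks. The stripe layout below is an in-memory stand-in, not the patent's interface.

    def xor(*blocks):
        out = bytearray(len(blocks[0]))
        for blk in blocks:
            for i, b in enumerate(blk):
                out[i] ^= b
        return bytes(out)

    def recover_stripe_with_broken_disk(stripe, broken_idx, nvram_saved_block):
        """stripe: {"data": [block or None], "parity": block}; the broken disk reads as None."""
        survivors = [blk for i, blk in enumerate(stripe["data"]) if i != broken_idx]
        # Parity = NV-RAM value for the broken disk XOR on-disk values of the others.
        stripe["parity"] = xor(nvram_saved_block, *survivors)
        # With this parity, the broken disk once again computes to its saved value.
        assert xor(stripe["parity"], *survivors) == nvram_saved_block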
Simultaneous System and Disk Failure Using NV-RAM
The present invention solves the problem of parity and data corruption when simultaneous system and disk failures occur by saving blocks of stripes in NV-RAM. Using NV-RAM allows the system to be recovered to a consistent state when a system crash occurs while updating multiple blocks (in the following example, data blocks 0 and 1) in the system. Changing these data blocks further requires that the parity of the stripe be updated. The present invention always saves into NV-RAM any block that is read from disk (e.g., before updating data block 0, read it into NV-RAM) for this purpose. Thus, stripe information can be recomputed from the data stored in NV-RAM. The present invention provides two solutions for this using parity by subtraction and parity by recalculation.
In parity by subtraction, the parity block and the data blocks that are about to be overwritten are read from disk before they are updated.
At time TB, the parity block and data block 0 are written into NV-RAM as they are read from disk. The parity block and data block 0 that are written into NV-RAM are indicated by a rectangular box in the corresponding figure.
At time TC, the new value of 2 (indicated by a box) for data disk 0 is written to the disk before parity for the stripe is updated. When a system failure occurs between times TC and TD and a disk in the RAID array also malfunctions, the present invention provides solutions for the three cases of a broken disk: the parity disk; data disk 0; and, data disk 1 (or 2). At decision block 1101, a check is made whether a system failure occurred. The decision block accordingly returns true (Yes) in the present example, and processing continues at step 1104. The three cases of a broken disk due to system failure where parity is calculated by subtraction are shown in the corresponding figures.
At time TD in the corresponding figure, the broken disk is the parity disk. In this case, the data blocks on disk are intact and no data has been lost; parity for the stripe is simply recomputed from the data blocks when the parity disk is repaired or replaced.
Referring to the corresponding figure, the broken disk is data disk 0, the disk being updated. Its old value of 4 is stored in NV-RAM, so a new parity value is computed as:
parity=“NV-value for broken disk”+“on-disk values for all non-broken disks”
In the present example that becomes:
parity=NV(Data 0)+Data 1+Data 2=4+7+1=12
In effect, the parity is being updated so as to restore the broken disk to the value stored for it in the NV-RAM. In this particular example, the new value for parity happens to match the old value. If other data blocks besides data 0 were also being updated, and if one of them reached disk before the system failure, then the new parity value would not match the old.
Referring to the corresponding figure, the broken disk is data disk 1, a disk that is not being updated. The stripe is recovered by writing the parity block and data block 0 saved in NV-RAM back to disk, which returns the stripe to its consistent state from before the update. The value for broken data disk 1 is then correctly computed as:
Data 1=NV(Parity)−NV(Data 0)−Data 2=12−4−1=7, (14)
where NV(Parity) and NV(Data 0) are the values for parity and data block 0 stored in NV-RAM. At time TE, NV-RAM is cleared. Thus, in the corresponding figure, the stripe is returned to a consistent state even though both a system failure and a disk failure occurred.
This case can also be addressed by first calculating the old contents of the broken disk as follows:
D1-calc=NV-parity−"NV-values for the data disks being updated"−"on-disk values of the data disks not being updated".
A new parity value is calculated based on:
parity="D1-calc from the step above"+"on-disk values for all non-broken data disks".
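The three cases can be summarized in a hedged Python sketch, under the assumption that NV-RAM holds the old parity block and the old copies of the data blocks that were being overwritten (as described above); the data structures are stand-ins for illustration.

    def xor(*blocks):
        out = bytearray(len(blocks[0]))
        for blk in blocks:
            for i, b in enumerate(blk):
                out[i] ^= b
        return bytes(out)

    def recover_subtraction(stripe, nv_parity, nv_old_data, broken):
        """
        stripe:      {"data": [block or None], "parity": block or None}
        nv_old_data: {index of data disk being updated: its old block}
        broken:      "parity", the index of a broken data disk, or None
        """
        if broken == "parity":
            # Case 1: the parity disk is broken. The data blocks on disk are intact,
            # so nothing is lost; parity is recomputed when the disk is replaced.
            return
        if broken in nv_old_data:
            # Case 2: the broken disk is one being updated. Set parity so that the
            # broken disk computes back to the old value saved in NV-RAM.
            others = [b for i, b in enumerate(stripe["data"]) if i != broken]
            stripe["parity"] = xor(nv_old_data[broken], *others)
        elif broken is not None:
            # Case 3: the broken disk was not being updated. Write the NV-RAM copies
            # of the old parity and old data back to disk, returning the stripe to
            # its consistent pre-update state; the broken disk then computes correctly.
            stripe["parity"] = nv_parity
            for idx, old in nv_old_data.items():
                stripe["data"][idx] = old
        # (Equivalently, as described above, the broken disk's old contents can be
        # computed from NV-RAM and parity recomputed from that value plus the
        # on-disk values of the other data disks.)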
Simultaneous System and Disk Failure with Parity by Recalculation
In parity by recalculation, the data blocks that are not being updated are first read from disk, and then parity is recalculated based on these values combined with the new data about to be written. This is typically used in cases where multiple data blocks are being updated at once, because it is more efficient than parity by subtraction in those cases. For simplicity, in the present example, only one block is updated. The techniques shown apply for updates of any number of blocks.
At time TA in step 1109, blocks D1 and D2 are read from disk. In step 1110, the system computes the new parity based on the new data for disk 0 along with the data just read from disks 1 and 2.
At time TB in step 1111, blocks D1 and D2 are written into NV-RAM, along with an indication of the stripe to which they belong.
At time TC, during step 1112, the new value “2” is written to disk 0. In the normal case, the parity block would also have been written during step 1112, and there would be no corruption.
In the present example, there is a system failure in combination with a disk failure. When the system reboots after a system failure, execution begins at step 1101. Because there is a failure, the decision block returns true (Yes) and continues at step 1102 and performs the necessary steps to recover the RAID sub-system based on the contents of NV-RAM.
In this case, the failed disk is data disk 0, the disk being written, whose previous contents are not stored in NV-RAM. The present invention computes a new parity value that sets the contents of the failed disk to zero. The general equation for this is:
parity=sum of non-broken disks
And in this example that is:
parity=D1+D2=7+1=8
At time TE, the new parity value is written, and at time TF, the NV-RAM values for D1 and D2 are cleared.
With a prior-art file system that writes new data in the same location as old data, zeroing out a data block would be unacceptable. But with WAFL, which always writes new data to unused locations on disk, zeroing a block that was being written has no harmful effect, because the contents of the block were not part of the file system.
When the failed disk is instead one whose contents were read and saved in NV-RAM (data disk 1 in this example), the new parity is computed so as to restore the failed disk to its saved value:
parity="NV-RAM value for failed disk"+"on-disk values for non-failed disks"
In the present example, that is:
parity=NV(D1)+D0+D2=7+2+1=10
At time TE, the new parity value is written, and at time TF, the NV-RAM values for D1 and D2 are cleared.
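The two recovery cases for parity by recalculation can be sketched as follows, assuming (as described above) that NV-RAM holds copies of the data blocks that were read before the update, i.e. the blocks not being written; the structures are in-memory stand-ins for illustration.

    def xor(*blocks):
        out = bytearray(len(blocks[0]))
        for blk in blocks:
            for i, b in enumerate(blk):
                out[i] ^= b
        return bytes(out)

    def recover_recalculation(stripe, nv_saved, failed):
        """
        stripe:   {"data": [block or None], "parity": block}
        nv_saved: {disk index: old block} for the blocks read before the update
        failed:   index of the failed data disk
        """
        survivors = [b for i, b in enumerate(stripe["data"]) if i != failed]
        if failed in nv_saved:
            # The failed disk's old contents are in NV-RAM: rebuild parity so the
            # failed block computes back to that saved value.
            stripe["parity"] = xor(nv_saved[failed], *survivors)
        else:
            # The failed disk is the one being written and its contents are not in
            # NV-RAM. Setting parity to the XOR of the surviving disks makes the
            # failed block compute to zero, which is harmless with a write-anywhere
            # file system such as WAFL, as noted above.
            stripe["parity"] = xor(*survivors)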
In this manner, a method and apparatus are disclosed for providing error correction for an array of disks using non-volatile random access memory (NV-RAM).
This is a preliminary amendment for a continuation of application Ser. No. 09/345,246 filed Jun. 8, 1999 (now allowed, projected to issue as U.S. Pat. No. 6,480,969 B1 on Nov. 12, 2002), which is a continuation of application Ser. No. 08/471,218, filed Jun. 5, 1995 (now U.S. Pat. No. 5,948,110), which is a continuation of application Ser. No. 08/071,798, filed Jun. 4, 1993 (now abandoned). This application also is a continuation of PCT application Ser. No. PCT/US94/06321 filed Jun. 2, 1994.