The present invention relates generally to continuous data protection, and more particularly, to maintaining data in a continuous data protection system.
Data protection, used for backing up various types of data, from simple letters to mission-critical enterprise data, has typically been implemented by duplicating data via techniques such as mirroring with RAID (redundant array of independent disks). Mirroring relates to the physical media on which the data is stored, and protects against media failure. When a logical data loss or corruption occurs, such as an accidental file deletion or a virus, the data must be restored from a backup system.
Even when used in conjunction with mirroring, all of the disks in a RAID system reflect the current state of the primary disk; i.e., any data loss or corruption will also exist on the disks in the array. Backup copies are generally made on a periodic basis, and reflect the state of the primary disk at a given point in time. Because backups are not made on a continuous basis, when data is restored, there will be some data loss from the time of the backup until the time of the data loss. In a mission-critical setting, such a data loss, even if only for a brief period, can be catastrophic. Beyond the potential data loss, restoring a primary disk from a backup system can be complicated and take several hours or more. This additional downtime further exacerbates the problems associated with a logical data loss.
The traditional process of backing up data to tape media is time driven and time dependent. A backup process typically is run at regular intervals and covers a certain period of time. For example, a full system backup may be run once a week on a weekend, and incremental backups may be run every weekday at some time after the close of business (usually overnight). These individual backups are then saved for a predetermined period of time, according to a retention policy. In order to conserve tape media and storage space, older backups are gradually faded out and replaced by newer backups. Further to the above example, after a full weekly backup is completed, the daily incremental backups for the preceding week may be discarded, and each weekly backup may be maintained for a few months, to be replaced by monthly backups.
While the backup creation process can be automated to a great extent, restoring data from a backup remains a manual and time-critical process. First, the appropriate backup tapes need to be located, including the latest full backup and any incremental backups made since the last full backup. In the event that only a partial restoration is required, locating the appropriate backup tape can take even longer. Then after the backup tapes are located, they must be restored to the primary disk. As can be imagined, even under the best of circumstances, this type of backup and restore process cannot guarantee high availability of data.
Another type of data protection involves making point in time (PIT) copies of data. A first type of PIT copy is a hardware-based PIT copy, which is a mirror of the primary disk onto a secondary disk. The main drawbacks to a hardware-based PIT copy are that the data ages quickly and that each copy takes up as much disk space as the primary disk. A software-based PIT, typically called a “snapshot,” is a “picture” of a file system's data structure. Various types of software-based PITs exist, and most are tied to a particular platform, operating system, or file system. These snapshots also have drawbacks, including occupying space on the primary disk, rapid aging, and possible dependencies on data stored on the primary disk wherein data corruption on the primary disk leads to corruption of the snapshot. Furthermore, both types of PIT techniques still require the traditional tape-based backup and restore process.
Using a PIT copy means that a copy is made at a discrete moment in time, and restores based upon that PIT copy are limited to that discrete moment in time. In contrast, an any point in time (APIT) copy implies all moments in time, i.e., a continuous copy. It is therefore desirable to have a continuous data protection scheme in which a copy can be selected from any point in time.
The present invention is a method and system where data is maintained in a continuous data protection system. A primary volume may be protected according to an any-point-in-time (APIT) window wherein restores may be performed at any time as desired. The APIT window may be of any time duration as desired. Outside of the APIT window, a retention policy for phasing out data may be established as desired.
A more detailed understanding of the invention may be had from the following description of a preferred embodiment, given by way of example, and to be understood in conjunction with the accompanying drawings wherein:
In the present invention, data is backed up continuously, allowing system administrators to pause, rewind, and replay live enterprise data streams. This moves the traditional backup methodologies into a continuous background process in which policies automatically manage the lifecycle of many generations of restore images.
It is noted that the primary data volume 104 and the secondary data volume 108 can be any type of data storage, including, but not limited to, a single disk, a disk array (such as a RAID), or a storage area network (SAN). The main difference between the primary data volume 104 and the secondary data volume 108 lies in the structure of the data stored at each location, as will be explained in detail below. It is noted that there may also be differences in terms of the technologies that are used. The primary volume is typically an expensive, very fast, highly available storage subsystem, whereas the secondary volume is typically cost-effective, high capacity and comparatively slow (for example, ATA/SATA disks). Normally, the slower secondary volume cannot be used as a synchronous mirror to the high-performance primary volume. This is because the slow response time would have an adverse impact on the overall system performance. The disclosed data protection system, however, is optimized to keep up with high-performance primary volumes. These optimizations are described in more detail below. At a high level, random writes to the primary volume are processed sequentially on the secondary storage. Sequential writes improve both the cache behavior, as well as the actual volume performance of the secondary volume. In addition, it is possible to aggregate multiple sequential writes on the secondary volume, whereas this is not possible with the random writes to the primary volume. Also note that the present invention does not require writes to the data protection system to be synchronous. However, even in the case of an asynchronous mirror, minimizing latencies is important.
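By way of illustration only, the following Python sketch shows one way the sequential handling of duplicated writes could be modeled: random offsets on the primary volume are appended to an ever-growing log on the secondary volume. The names WriteLogEntry and SequentialLog, and the use of byte offsets rather than block addresses, are assumptions made for this example and are not drawn from the disclosure.

```python
import time
from dataclasses import dataclass, field
from typing import List

@dataclass
class WriteLogEntry:
    """One duplicated primary-volume write, as captured for the secondary volume."""
    primary_offset: int   # where the host wrote on the primary volume
    length: int           # number of bytes written (the text speaks of blocks)
    data: bytes           # payload duplicated from the primary write
    timestamp: float      # capture time, used later when building delta maps

@dataclass
class SequentialLog:
    """Append-only log: random primary writes become sequential secondary writes."""
    entries: List[WriteLogEntry] = field(default_factory=list)
    next_secondary_offset: int = 0

    def append(self, primary_offset: int, data: bytes) -> int:
        """Record a duplicated write sequentially and return its secondary offset."""
        self.entries.append(WriteLogEntry(primary_offset, len(data), data, time.time()))
        secondary_offset = self.next_secondary_offset
        self.next_secondary_offset += len(data)   # strictly increasing offsets -> sequential I/O
        return secondary_offset

log = SequentialLog()
print(log.append(4096, b"random write A"), log.append(100, b"random write B"))
```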
It is noted that the data protection system 106 operates in the same manner, regardless of the particular construction of the protected computer system 100, 120, 140. The major difference between these deployment options is the manner and place in which a copy of each write is obtained. To those skilled in the art it is evident that other embodiments, such as the cooperation between a switch platform and an external server, are also feasible.
In practice, certain applications require continuous data protection with a block-by-block granularity, for example, to rewind individual transactions. However, the period in which such fine granularity is required is, generally, relatively short (for example two days), which is why the system can be configured to fade out data over time. The present invention discloses data structures and methods to manage this process automatically.
Because data is continuously backed-up in the present invention, reversing each write to get to a particular point in time quickly becomes unfeasible where hundreds, thousands or more writes are logged every second. The amount of data simply becomes too large to scan in a linear fashion. The present invention therefore provides data structures (i.e. delta maps) so that such voluminous amounts of backup data may be efficiently tracked and accessed, as desired.
In typical recovery scenarios, it is necessary to examine what the primary volume looked like at multiple points in time before deciding which point to recover to. For example, consider a system that was infected by a virus. In order to recover from this virus, it is necessary to examine the primary volume as it was at different points in time in order to find the latest recovery point where the system was not yet infected by the virus. In order to efficiently compare multiple potential recovery points, additional data structures are needed. Delta maps provide a mechanism to efficiently recover the primary volume as it was at a particular point in time, without the need to replay the write log in its entirety, one write at a time. In particular, delta maps are data structures that keep track of data changes between two points in time. These data structures can then be used to selectively play back portions of the write log such that the resulting point-in-time image is the same as if the log were played back one write at a time, starting at the beginning of the log.
Referring now to
In a preferred embodiment, as explained above, each delta map contains a list of all blocks that were changed during the particular time period to which the delta map corresponds. That is, each delta map specifies a block region on the primary volume, the offset on the primary volume, and physical device information. This information can then be used to recreate the primary volume as it looked at a previous point in time. For example, assume that a volume was brand new and that only the two writes in delta map 150 have been committed to it. The map thus contains a list of all modifications since the volume was in its original state. In order to recreate the volume as it was after these two writes (for example, after a failure of the primary volume), the system examines the first two entries in the delta map. These entries are sufficient to determine that a block of data had been written to region R0 on the primary disk at offset 100 and that the length of this write was 20. In addition, fields 156 and 158 can be used to determine where the duplicate copy was written on the secondary volume. This process can then be repeated for each entry and an exact copy of the primary volume at that time can be recreated in this fashion. It is noted that other fields or a completely different mapping format may be used while still achieving the same functionality. For example, instead of dividing the primary volume into block regions, a bitmap could be kept, representing every block on the primary volume.
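As an illustration of the foregoing, the Python sketch below models a delta map entry and the recreation of a point-in-time image from a list of entries. The field names are assumptions made for this example; in particular, secondary_device and secondary_offset merely stand in for the secondary-location fields (156, 158) mentioned above, and the dictionary result is a simplification of the actual mapping format.

```python
from dataclasses import dataclass
from typing import Dict, List, Tuple

@dataclass
class DeltaMapEntry:
    region: int            # block region on the primary volume (e.g. R0)
    primary_offset: int    # offset of the write within the primary volume
    length: int            # number of blocks covered by the write
    secondary_device: str  # physical device holding the duplicate copy
    secondary_offset: int  # where the duplicate copy was written on the secondary volume

def recreate_image(entries: List[DeltaMapEntry]) -> Dict[int, Tuple[str, int, int]]:
    """Map each changed primary offset to the secondary location of its latest copy.

    Later entries overwrite earlier ones, so the result reflects the volume as it
    looked after the last write recorded in the delta map.
    """
    image: Dict[int, Tuple[str, int, int]] = {}
    for e in entries:
        image[e.primary_offset] = (e.secondary_device, e.secondary_offset, e.length)
    return image

# Example from the text: a 20-block write to region R0 at primary offset 100.
example = [DeltaMapEntry(region=0, primary_offset=100, length=20,
                         secondary_device="secondary-0", secondary_offset=0)]
print(recreate_image(example))
```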
Referring now to
As explained above, each delta map includes information necessary to recreate the changes to the protected volume for a particular time window. For example, delta map 202t0-t1 corresponds to the change of the protected volume between time zero and time one, delta map 202t1-t2 corresponds to the change of the protected volume from time one to time two, and so forth. It is noted that these time windows do not necessarily need to be of equal size. If a primary volume is completely destroyed at time n+1, a full restore as of time n may be performed by simply using merged delta map 206t0-tn. If a loss occurs at time three, and the primary volume needs to be restored, merged delta map 202t0-t3 may be used. If a loss occurs at time five and the system needs to be restored to time four, merged delta map 204t0-t3 and delta map 204t3-t4 may be used.
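The selection of maps for a given restore point could be sketched as follows; the tuple-based time windows and the dictionary representation of a delta map are simplifications assumed for this example and do not reflect the actual on-disk format.

```python
from typing import Dict, List, Tuple

DeltaMap = Dict[int, int]   # primary offset -> secondary offset of the duplicate copy
Window = Tuple[int, int]    # (start time, end time) covered by a delta map

def merge(maps: List[DeltaMap]) -> DeltaMap:
    """Merge delta maps in time order; later writes to the same offset win."""
    merged: DeltaMap = {}
    for m in maps:
        merged.update(m)
    return merged

def restore_map(per_window_maps: List[Tuple[Window, DeltaMap]], restore_time: int) -> DeltaMap:
    """Select and merge the delta maps needed to restore the volume as of restore_time."""
    selected = [m for (start, end), m in per_window_maps if end <= restore_time]
    return merge(selected)

# Loss at time five, restore to time four: merge the maps covering times zero through four.
maps = [((0, 3), {100: 0}), ((3, 4), {100: 20, 240: 40}), ((4, 5), {100: 60})]
print(restore_map(maps, restore_time=4))   # {100: 20, 240: 40}
```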
As shown in
Referring now to
Delta map 450 includes the originating and terminating entries for writes 456 and 458 while delta map 452 includes originating and terminating entries for writes 460 and 462. In delta map 450, the two top entries are the originating and terminating entries for write 456 and the two bottom entries are the originating and terminating entries for write 458. Similarly, the two top entries in delta map 452 are the originating and terminating entries for write 460 and the two bottom entries are the originating and terminating entries for write 462. As explained above, the delta maps 450 and 452 include the specifics regarding each write that occurred during the time period covered by the particular delta map.
Delta maps 450 and 452 may be merged into a single merged map 454. One significant benefit of merging delta maps is a reduction in the number of entries that are required. Another, even more significant benefit is a reduction in the number of blocks that need to be kept on the secondary volume once the lower-level maps are expired. It is noted, however, that this is only the case when a previous block was overwritten by a newer one. For example, in this particular scenario, it is possible to eliminate the terminating entry 468 of write 462 because writes 462 and 458 are adjacent to each other on the primary volume. That is, because there is a terminating entry 468 with the same offset (i.e. 240) as an originating offset 470, the terminating entry 468 may be eliminated in merged delta map 454. By way of further example, if a subsequent write was performed that entirely filled region two (i.e. R2), and the map containing that write was merged with map 454, all of the entries related to R2 would be replaced with the R2 originating and terminating entries for the subsequent write. In this case, it will also be possible to free up the blocks in this region once the delta maps are expired. The delta maps and the structures created by merging maps reduce the amount of overhead in maintaining the mapping between the primary and secondary volumes over time.
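The elimination of redundant terminating entries during a merge could be modeled roughly as follows. The MapEntry class and the "orig"/"term" labels are assumptions made for this sketch, which omits the length and secondary-location fields discussed above.

```python
from dataclasses import dataclass
from typing import List

@dataclass(frozen=True)
class MapEntry:
    offset: int          # offset on the primary volume
    kind: str            # "orig" for an originating entry, "term" for a terminating entry

def merge_maps(older: List[MapEntry], newer: List[MapEntry]) -> List[MapEntry]:
    """Merge two delta maps and drop redundant boundary entries.

    A terminating entry can be eliminated when another write's originating entry
    sits at the same offset, i.e. the two writes are adjacent on the primary volume.
    """
    combined = sorted(older + newer, key=lambda e: (e.offset, e.kind))
    originating_offsets = {e.offset for e in combined if e.kind == "orig"}
    return [e for e in combined
            if not (e.kind == "term" and e.offset in originating_offsets)]

# Mirroring the example in the text: a terminating entry at offset 240 is dropped
# because an adjacent write originates at offset 240.
older = [MapEntry(200, "orig"), MapEntry(240, "term")]
newer = [MapEntry(240, "orig"), MapEntry(280, "term")]
print(merge_maps(older, newer))
```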
Referring now to
If writing to the primary volume is complete (step 308), the method proceeds to step 310 wherein the status (i.e., an indication that the primary volume write is complete) is sent to the host. It is important to note that a “good” status can be returned without regard to whether the data made it to the secondary volume in this embodiment. This is advantageous for performance reasons. However, a synchronous embodiment is also possible as described above. The method 300 then proceeds to step 312 to check for errors in the primary volume write. If an error has occurred, an additional entry is added to the write log in step 314 reflecting the fact that an error has occurred and then the method 300 proceeds to step 316. If no error has occurred, the method 300 proceeds directly from step 312 to step 316. In step 316, it is confirmed whether writing to the secondary volume is complete. Once the write is completed, the method 300 proceeds to step 318 to check for errors in the secondary volume write. If an error has occurred, an entry reflecting the fact that an error has occurred is added to the write log (step 320).
By way of further explanation, the sequence of events performed when a host computer performs a write to a primary volume is shown in
If the write is to a clean region, a synchronous update of the dirty region log (DRL) is performed 354. Then, both the primary and secondary writes are started 356, 358. Once the primary write is completed 360, the status is returned to the host 362. If a host-based volume manager is used, this happens independently of the secondary write. If an error occurred in the primary write, it is indicated in the write log by adding an additional entry. Of course, there are two possibilities with respect to the completion of the secondary write (i.e. the duplicate write made to a secondary volume). That is, the secondary write may be completed before 364 or after 366 the completion of the primary write 360. It is noted that whether the secondary write is completed before 364 or after 366 does not affect implementation of the present invention. As with completion of the primary write, if an error occurred in the secondary write, it is indicated in a write log by adding an additional entry to the write log.
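A highly simplified model of this write sequence is sketched below. The Volume class, the callback used to return status to the host, and the omission of the dirty region log are all assumptions made to keep the example short; it only illustrates that host status depends on the primary write while the duplicate write completes independently.

```python
import asyncio
from typing import Callable, Dict, List, Tuple

class Volume:
    """Toy block device; an asyncio.sleep models the device latency."""
    def __init__(self, name: str, delay: float) -> None:
        self.name, self.delay = name, delay
        self.blocks: Dict[int, bytes] = {}

    async def write(self, offset: int, data: bytes) -> None:
        await asyncio.sleep(self.delay)
        self.blocks[offset] = data

async def handle_host_write(offset: int, data: bytes, primary: Volume, secondary: Volume,
                            write_log: List[Tuple[str, int]],
                            send_status: Callable[[str], None]) -> None:
    """Start both writes; report status to the host as soon as the primary completes."""
    primary_task = asyncio.create_task(primary.write(offset, data))
    secondary_task = asyncio.create_task(secondary.write(offset, data))
    try:
        await primary_task
        send_status("good")                  # host acknowledged here, independent of the secondary
    except OSError:
        write_log.append(("primary-error", offset))   # extra write-log entry records the error
        send_status("error")
    try:
        await secondary_task                 # duplicate write may finish before or after the primary
    except OSError:
        write_log.append(("secondary-error", offset))

asyncio.run(handle_host_write(100, b"payload", Volume("primary", 0.001),
                              Volume("secondary", 0.01), [], print))
```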
Referring now to
By way of explanation, assume the present invention is implemented to protect a system that is currently in production. Such a system already contains important data on the primary volume at time t0. Hence, if the system starts recording the changes from this point on, the volume cannot be reconstructed at a later time unless there is a copy of the volume as it was at time t0. The initial full copy does exactly this—it initializes the secondary volume to a state where the contents of each block at time t0 are known. It is noted that in the special case when the primary volume is empty or will be formatted anyway, users have the option to disable the initial full copy. This means, however, that if they want to restore the volume back to time t0, an empty volume will be presented. To provide further explanation, assume that the primary volume already contains important data. In that case, if blocks 5, 9, and 57 are overwritten at times t1, t2, and t3 respectively, it is not possible to present a complete volume image as it was at time t2. This is because it is not known what blocks 1, 2, 3, 4, 6, 7, 8, 10, 11, etc. looked like at that time without having taken a full copy snapshot first.
Referring again to
In step 410, a delta map is created by converting the time-ordered write log entries to a block-ordered delta map. Next, in step 412, it is determined whether pre-merge optimization will be performed. If so, delta maps are periodically merged to provide a greater granularity in the data on the secondary volume (step 414). It is noted that merging of delta maps can occur at any time and according to any desired policy. In the preferred embodiment, the merging algorithm looks for adjacent delta maps with the same expiration policy and merges those. This minimizes the number of merge operations that will be required upon expiration. In a different embodiment, pre-merging occurs automatically after a certain number of writes W or after a certain time period T. Additionally, the system is capable of storing full maps of the primary volume at various points in time. This significantly accelerates the merging process later because fewer maps need to be merged. Regardless of whether pre-merge optimization is performed (step 412), the method 400 cycles back to step 404.
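One possible rendering of steps 410 through 414 is sketched below: time-ordered write-log entries are converted into a block-ordered delta map, and adjacent maps are optionally pre-merged. The tuple layout of a write-log entry and the pairwise pre-merge policy are assumptions for illustration only.

```python
from typing import Dict, List, Tuple

# A write-log entry: (timestamp, primary_offset, length, secondary_offset)
LogEntry = Tuple[float, int, int, int]

def build_delta_map(log: List[LogEntry]) -> Dict[int, Tuple[int, int]]:
    """Convert time-ordered write-log entries into a block-ordered delta map.

    The result maps each primary offset to (length, secondary_offset) of the most
    recent write in the window; iterating in time order makes later writes win, and
    sorting the keys yields block order rather than arrival order.
    """
    delta: Dict[int, Tuple[int, int]] = {}
    for _timestamp, primary_offset, length, secondary_offset in sorted(log):
        delta[primary_offset] = (length, secondary_offset)
    return dict(sorted(delta.items()))

def pre_merge(maps: List[Dict[int, Tuple[int, int]]],
              every: int = 2) -> List[Dict[int, Tuple[int, int]]]:
    """Periodically merge adjacent delta maps (here: pairwise) to cut later merge work."""
    merged: List[Dict[int, Tuple[int, int]]] = []
    for i in range(0, len(maps), every):
        combined: Dict[int, Tuple[int, int]] = {}
        for m in maps[i:i + every]:
            combined.update(m)
        merged.append(combined)
    return merged
```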
Referring now to
Whenever a snapshot is triggered, a PIT map is created (
The creation of a PIT map can be performed dynamically, providing access to the snapshot immediately. In the case when an access to data is in a region of the PIT map that has not yet been fully resolved (merged), the delta map merging is performed immediately for that region. PIT maps may be stored persistently or retained as temporary objects and the volumes that are presented on the basis of these PIT maps are preferably read/writeable. When PIT maps are stored as temporary objects, new writes are stored in a temporary area such that the previous point in time can be recreated again without the new writes. However, as explained above, these temporary writes may be retained for the long term. When PIT maps are stored persistently, information about each PIT map, including the disk location where the map is stored and the point in time of the PIT map, is also made persistent. In this case, in the event of a restart, any task that was active for a PIT map will be restarted by the map manager.
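The on-demand resolution of PIT map regions could be sketched as follows; the LazyPITMap class and its per-region dictionaries are illustrative assumptions rather than the persistent structures described above.

```python
from typing import Dict, List, Optional

DeltaMap = Dict[int, int]   # primary offset -> secondary offset, one map per region and window

class LazyPITMap:
    """PIT map whose regions are merged on first access rather than up front."""

    def __init__(self, per_region_maps: Dict[int, List[DeltaMap]]) -> None:
        self._pending = per_region_maps           # unmerged delta maps, keyed by region
        self._resolved: Dict[int, DeltaMap] = {}  # regions already merged

    def lookup(self, region: int, offset: int) -> Optional[int]:
        """Return the secondary offset for a block, merging the region on demand."""
        if region not in self._resolved:
            merged: DeltaMap = {}
            for m in self._pending.get(region, []):   # time order; later writes win
                merged.update(m)
            self._resolved[region] = merged
        return self._resolved[region].get(offset)
```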
It is noted that snapshots may be taken and thus corresponding PIT maps created simply to improve system performance. When merging maps, it is never necessary to go back further than the most recent PIT map because, by definition, the PIT map includes mapping information for every block on disk at whatever time the PIT map was created. For example, referring again to
Referring now to
Once the snapshot is selected, the delta map corresponding to the time of the snapshot (i.e., t(s)) is selected in step 504. To minimize the number of delta maps that need to be merged in order to create the PIT map, it is preferable to identify the last time the blocks in the snapshot were changed (step 506). Then, in step 508, the delta maps from time zero (i.e., t(0)) to t(s) are merged to create a PIT map. The PIT map may then be used as desired to restore the primary system to its state as of t(s). Of course, where a PIT map exists that was taken prior to the time at which the snapshot was taken, the maps between that PIT map and the snapshot may simply be merged, rather than all of the maps back to time zero.
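A simplified sketch of steps 504 through 508 follows, including the optimization of starting from a prior PIT map rather than time zero; the dictionary representation of the maps is again an assumption made for brevity.

```python
from typing import Dict, List, Optional, Tuple

DeltaMap = Dict[int, int]   # primary offset -> secondary offset

def build_pit_map(delta_maps: List[Tuple[int, DeltaMap]], snapshot_time: int,
                  prior_pit: Optional[Tuple[int, DeltaMap]] = None) -> DeltaMap:
    """Merge delta maps up to snapshot_time into a PIT map.

    delta_maps holds (window_end_time, map) pairs in time order. If a PIT map taken
    before the snapshot exists, merging starts from it instead of time zero, since
    it already covers every block as of its own creation time.
    """
    start_time = 0
    pit: DeltaMap = {}
    if prior_pit is not None:
        start_time, base = prior_pit
        pit.update(base)
    for end_time, m in delta_maps:
        if start_time < end_time <= snapshot_time:
            pit.update(m)
    return pit
```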
It is noted that outside the APIT window, some data will get phased out. Deciding which data to phase out is similar to a typical tape rotation scheme. That is, the administrator enters a policy that determines which data to keep, for example, the data recorded at each one-minute boundary. It is also noted that the present invention provides versioning capabilities with respect to snapshots (i.e., file catalogues, scheduling capabilities, etc.) as well as the ability to establish compound/aggregate policies, etc., when outside an APIT window.
Referring now to
The older the data is, however, the less likely it is that snapshots at small time intervals will be needed. That is, the older the data, the less granularity is required on the secondary volume. Therefore, in step 604, it is determined whether the retention time has expired for any of the data. If no, the method 600 cycles back to step 602 where additional short-time-interval snapshots are created. If yes (i.e., the retention time for the snapshot has expired), longer-interval snapshots may be created by merging the delta maps of all short-interval snapshots (step 606). From step 606, the method 600 proceeds to step 608, where the retention time is set to a longer interval.
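Steps 604 through 608 could be approximated as follows; the fixed group size and the dictionary representation of delta maps are assumptions made for this sketch of how expired short-interval snapshots fade into longer-interval ones.

```python
from typing import Dict, List

DeltaMap = Dict[int, int]   # primary offset -> secondary offset

def consolidate(short_interval_maps: List[DeltaMap], group_size: int) -> List[DeltaMap]:
    """Fade out expired short-interval snapshots by merging them into longer ones.

    Every group_size consecutive short-interval delta maps are merged into a single
    longer-interval map, e.g. twelve five-minute maps into one one-hour map.
    """
    longer: List[DeltaMap] = []
    for i in range(0, len(short_interval_maps), group_size):
        merged: DeltaMap = {}
        for m in short_interval_maps[i:i + group_size]:   # time order; later writes win
            merged.update(m)
        longer.append(merged)
    return longer

# Example: twelve five-minute maps become a single one-hour map.
minute_maps = [{i: i * 10} for i in range(12)]
print(consolidate(minute_maps, group_size=12))
```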
Although the present invention has been described in detail, it is to be understood that the invention is not limited thereto, and that various changes can be made therein without departing from the spirit and scope of the invention, which is defined by the attached claims.
This application claims priority from U.S. Provisional Application No. 60/541,626, entitled “METHOD AND SYSTEM FOR CONTINUOUS DATA PROTECTION,” filed on Feb. 4, 2004; and U.S. Provisional Application No. 60/542,011, entitled “CONTINUOUS DATA PROTECTION IN A COMPUTER SYSTEM,” filed on Feb. 5, 2004, both of which are incorporated by reference as if fully set forth herein.