1. Technical Field
The present invention relates generally to enterprise data protection and data management.
2. Background of the Related Art
Techniques for managing data history in distributed computing systems are known in the art. In particular, traditional content management systems typically manage file history by using either “forward delta” management, “reverse delta” management, or a combination of both techniques. A forward delta management system maintains an initial baseline of the file as well as a list of deltas (changes to the file) that occur after the baseline is created. In a forward delta management system, deltas are appended to a delta document sequentially. An advantage of such a system is that, as deltas arrive, the system only needs to append them to the end of the delta document. However, when a user tries to access a file (or when a host needs to recover its lost data to a specific point-in-time, version, or the most current point-in-time), the forward delta management system must (at runtime) take the baseline and apply the necessary delta strings “on the fly” to generate the requested point-in-time data. If there is a long list of delta strings, the read latency of such an operation may be very high; in addition, the memory (cache) required to process the delta strings during the read operation may be unacceptably large.
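A minimal sketch of forward-delta reconstruction illustrates the read-latency problem; the (offset, old_length, new_data) delta format here is hypothetical, not the syntax defined later in this description:

```python
def reconstruct_forward(baseline: bytes, deltas) -> bytes:
    """Replay every delta, in order, against the baseline.

    The cost grows with the number of deltas, which is the read-latency
    weakness of plain forward-delta management noted above.
    """
    data = bytearray(baseline)
    for offset, old_length, new_data in deltas:
        data[offset:offset + old_length] = new_data
    return bytes(data)

# Two updates: reading the latest version must replay both.
print(reconstruct_forward(b"aaaabbbb",
                          [(4, 0, b"xx"),      # insert "xx" at offset 4
                           (0, 2, b"yyz")]))   # replace "aa" with "yyz"
# b'yyzaaxxbbbb'
```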
A reverse delta management system maintains the most current point-in-time data and a list of reverse deltas (an “undo” list) in a delta management file. A reverse delta management system first takes a given forward delta and applies the delta to the last point-in-time data to generate the most current point-in-time data; it then compares the most current point-in-time data with the last point-in-time data to generate an undo (reverse) delta. This type of system only keeps the most current data file and a list of undo deltas. If the most current data is requested, the data can be retrieved instantly. If, however, data from a previous point-in-time is requested, this system must take the most current data file and apply the necessary undo delta(s) to generate the requested point-in-time data. The baseline copy in this system is the most current point-in-time copy. In many cases, there may be a significant read latency for previous data. In addition, the computing power needed for ongoing data updates in such a data management system is very significant. This technique also does not support data replication over an unreliable network, as the baseline copy of the data is constantly changing.
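For contrast, a sketch of the reverse-delta bookkeeping just described, using the same hypothetical delta format: each forward delta produces the new current copy, and an undo delta is derived at the same time by comparison with the prior version:

```python
def apply_delta(data: bytes, delta) -> bytes:
    offset, old_length, new_data = delta
    out = bytearray(data)
    out[offset:offset + old_length] = new_data
    return bytes(out)

def make_undo(previous: bytes, delta):
    """Derive the undo delta that turns the *new* version back into `previous`."""
    offset, old_length, new_data = delta
    return (offset, len(new_data), previous[offset:offset + old_length])

current, undo_list = b"aaaabbbb", []
for delta in [(4, 0, b"xx"), (0, 2, b"yyz")]:
    undo_list.append(make_undo(current, delta))
    current = apply_delta(current, delta)    # the baseline keeps changing

# Reading an older version replays undo deltas newest-first.
older = current
for undo in reversed(undo_list):
    older = apply_delta(older, undo)
assert older == b"aaaabbbb"
```

Note that `current`, the de facto baseline, is rewritten on every update, which is why this scheme is poorly suited to replication over an unreliable network.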
When performing incremental data protection, traditional data management systems copy the entire contents of a changed file into a protection repository, where the file history is saved. These systems, however, do not apply any delta management techniques, such as those described above, to manage the file history. Moreover, because these systems are not storage and bandwidth efficient, they are not suitable for performing real-time data services.
Traditional content management systems can manage file history, but they are not capable of managing unstructured and dynamic data. Further, a traditional system of this type requires that its data source be well-structured, i.e., with directories that are created and configured in advance. In most cases, a given content management system is designed to manage a specific content type as opposed to dynamic data. Thus, for example, a given source control system may be designed to manage design documents or source code, but that same system cannot manage data that changes constantly. These systems also are not capable of protecting changing data in real-time. To the extent they include delta management schemes, such schemes do not enable efficient any-point-in-time data recovery.
There remains a need in the art to provide distributed data management systems that can efficiently manage real-time history of a large amount of unstructured and dynamic data with minimal storage and bandwidth usage.
There also remains a need in the art to provide such a distributed data management system that can perform virtual-on-demand recovery of consistent data at any point-in-time in the past.
The present invention addresses these deficiencies in the art.
It is a general object of the present invention to provide for efficient transfer of real-time data changes over a local and/or wide area network.
It is also a general object of the invention to provide for efficient storage of data history over a given time period.
It is a more specific object of the present invention to provide novel data reduction techniques that facilitate any-point-in-time virtual on-demand data recovery in a data management system.
A specific object of the invention is to implement an improved “forward” delta data management technique wherein a “sparse” index is associated with a delta file to achieve both delta management efficiency and to eliminate read latency while accessing history data of any point-in-time.
Another more specific object of the present invention is to provide a novel data management technique to create a given data structure for use in managing data history for a file that is constantly changing. According to the invention, the given data structure need not include the actual contents of later-created versions of a particular file as that file is updated in the system. Nevertheless, the information in the given data structure is such that each of the given versions can be readily reconstructed “on-the-fly” (as-needed) without extensive read latency to apply deltas to the baseline data.
The present invention may be implemented advantageously in a data management system or “DMS” that provides a wide range of data services to data sources associated with a set of application host servers. The data management system typically comprises one or more regions, with each region having one or more clusters. A given cluster has one or more nodes that share storage. To facilitate a given data service, a host driver embedded in an application server connects an application and its data to a cluster. The host driver captures real-time data transactions, preferably in the form of an event journal that is provided to the data management system. In particular, the driver functions to translate traditional file/database/block I/O into a continuous, application-aware, output data stream. Application aware event journaling is a technique to create real-time data capture so that, among other things, consistent data checkpoints of an application can be identified and metadata can be extracted. Application aware event journaling tracks granular application consistent checkpoints. Thus, when a DMS is used to provide file system data protection to a given data source, it is capable of reconstructing an application data state to a consistent point-in-time in the past.
According to an illustrative embodiment, a given application aware data stream is processed through a multi-stage data reduction process to produce a compact data representation from which an “any point-in-time” reconstruction of the original data can be made.
The foregoing has outlined some of the more pertinent features of the invention. These features should be construed to be merely illustrative. Many other beneficial results can be attained by applying the disclosed invention in a different manner or by modifying the invention as will be described.
For a more complete understanding of the present invention and the advantages thereof, reference is now made to the following descriptions taken in conjunction with the accompanying drawings, in which:
As illustrated in commonly-owned U.S. Pat. No. 7,565,661, issued Jul. 21, 2009, a “host driver” 128 is associated with one or more of the application(s) running in the application servers 116 to transparently and efficiently capture the real-time, continuous history of all (or substantially all) transactions and changes to data associated with such application(s) across the enterprise network. As will be described below, this facilitates real-time, so-called “application aware” protection, with substantially no data loss, to provide continuous data protection and other data services including, without limitation, data distribution, data replication, data copy, data access, and the like. In operation, a given host driver 128 intercepts data events between an application and its primary data storage, and it may also receive data and application events directly from the application and database. In a representative embodiment, the host driver 128 is embedded in the host application server 116 where the application resides; alternatively, the host driver is embedded in the network on the application data path. By intercepting data through the application, fine grain (but opaque) data is captured to facilitate the data service(s). To this end, and as also illustrated in
Referring now to
The DMS provides these and other business continuity data services in real-time with data and application awareness to ensure continuous application data consistency and to allow for fine grain data access and recovery. To offer such application and data aware services, the DMS has the capability to capture fine grain and consistent data. As will be illustrated and described, a given DMS host driver uses an I/O filter to intercept data events between an application and its primary data storage. The host driver also receives data and application events directly from the application and database.
Referring now to
In this embodiment, a host server embedded host driver is used for illustrating the driver behavior. In particular, the host driver 500 in a host server connects to one of the DMS nodes in a DMS cluster (in a DMS region) to perform or facilitate a data service. The host driver preferably includes two logical subsystems, namely, an I/O filter 502, and at least one data agent 504. An illustrative data agent 504 preferably includes one or more modules, namely, an application module 506, a database module 508, an I/O module 510, and an event processor or event processing engine 512. The application module 506 is configured with an application 514, one or more network devices, and/or the host system itself to receive application level events 516. These events include, without limitation, entry or deletion of some critical data, installation or upgrade of application software or the operating system, a system alert, detection of a virus, an administrator generated checkpoint, and so on. One or more application events are queued for processing into an event queue 518 inside or otherwise associated with the data agent. The event processor 512 over time may instruct the application module 506 to re-configure with its event source to capture different application level events.
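A structural sketch of this flow (module and event names are placeholders, not the DMS API): each module pushes typed events onto the shared queue, which the event processor drains into the outbound journal stream.

```python
import queue

event_queue = queue.Queue()

def on_application_event(kind, payload):           # application module 506
    event_queue.put(("application", kind, payload))

def drain_events():                                # event processor 512
    while not event_queue.empty():
        source, kind, payload = event_queue.get()
        print(f"journal <- {source}/{kind}: {payload}")

on_application_event("admin-checkpoint", {"reason": "OS upgrade"})
drain_events()
```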
If an application saves its data into a database, then a database module 508 is available for use. The database module 508 preferably registers with a database 520 to obtain notifications from the database. The module 508 also may integrate with the database 520 through one or more database triggers, or it may also instruct the database 520 to generate a checkpoint 522. The database module 508 also may lock the database 520 (or issue a specific API) to force a database manager (not shown) to flush out its data from memory to disk, thereby generating a consistent disk image (a binary table checkpoint). This process of locking a database is also known as “quiescing” the database. An alternative to quiescing a database is to set the database into a warm backup mode. After a consistent image is generated, the database module 508 then lifts the lock to release the database from its quiescent state. The database events preferably are also queued for processing into the event queue 518. Generalizing, database events include, without limitation, a database checkpoint, specific database requests (such as schema changes or other requests), access failure, and so on. As with the application module, the event processor 512 may be used to re-configure the events that will be captured by the database module.
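The quiesce sequence can be sketched generically as follows. The database handle here is a hypothetical stand-in; a real module would invoke vendor-specific lock, flush, or warm-backup APIs in its place.

```python
from contextlib import contextmanager

class DatabaseHandle:                        # hypothetical stand-in
    def lock(self): print("database locked (quiesced)")
    def flush_to_disk(self): print("dirty pages flushed to disk")
    def unlock(self): print("database released")

@contextmanager
def quiesced(db):
    db.lock()             # or: place the database into warm backup mode
    db.flush_to_disk()    # force a consistent disk image (table checkpoint)
    try:
        yield
    finally:
        db.unlock()       # lift the lock, ending the quiescent state

with quiesced(DatabaseHandle()):
    pass                  # capture the checkpoint event while consistent
```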
The I/O module 510 instructs the I/O filter 502 to capture a set of one or more I/O events that are of interest to the data agent. For example, a given I/O module 510 may control the filter to capture I/O events synchronously, or the module 510 may control the filter to only capture several successful post I/O events. When the I/O module 510 receives I/O events 524, it forwards the I/O events to the event queue 518 for processing. The event processor 512 may also be used to re-configure the I/O module 510 and, thus, the I/O filter 502.
The event processor 512 functions to generate an application aware, real-time event journal (in effect, a continuous stream) for use by one or more DMS nodes to provide one or more data services. Application aware event journaling is a technique to create real-time data capture so that, among other things, consistent data checkpoints of an application can be identified and metadata can be extracted. For example, application awareness is the ability to distinguish a file from a directory, a journal file from a control or binary raw data file, or to know how a file or a directory object is modified by a given application. Thus, when protecting a general purpose file server, an application aware solution is capable of distinguishing a file from a directory, of identifying a consistent file checkpoint (e.g., zero-buffered write, flush or close events), and of interpreting and capturing file system object attributes such as an access control list. By interpreting file system attributes, an application aware data protection solution may ignore activities applied to a temporary file. In general, application aware event journaling tracks granular application consistent checkpoints; thus, when used in conjunction with data protection, the event journal is useful in reconstructing an application data state to a consistent point-in-time in the past, and it is also capable of retrieving a granular object from the past without having to recover an entire data volume. In the DMS, data protection typically begins with an initial upload phase, when a full copy of a host data source is uploaded to a DMS cluster. During and after the upload, application(s) may continue to update the data, in which case event journals are forwarded to the DMS as data is modified. Further details of the event journaling technique are described in commonly-owned U.S. Pat. No. 7,565,661, issued Jul. 21, 2009, which is incorporated herein by reference.
With the above as background, the multi-stage data reduction process of the present invention can now be described. A preferred multi-stage data reduction has a first stage and a second stage. Typically, a first-stage data reduction takes place at a given host driver, whereas a second-stage data reduction takes place at a given DMS node of a given DMS cluster at which the first-stage data is delivered initially. This approach (which is not to be taken by way of limitation) is illustrated diagrammatically in
As data is changed in the protected host server, a new version of the data is created. This version, however, need not actually be stored in the DMS cluster, as will now be seen with reference to
By structuring the data history object in the manner illustrated in
As an example, if a user file is 10K bytes in length but the update involves just 2 bytes, a typical application would write an entire file locally; in the DMS, however, only the associated new metadata (which includes the new sparse index) is written to disk along with the second-stage delta string (that represents the 2 bytes). As additional updates occur, each subsequent new version is managed in the same way, i.e., without storing (in DMS) the actual binary content of the update and with only the simple creation of new metadata (including the new sparse index) and additional sequencing of the dfile. When it comes time to reconstruct a given version, the layout of the flat file (with the metadata version blocks preferably reverse ordered) provides for an efficient file read operation. In particular, during the read, the actual data bytes are located using the sparse indices (of that version), which point to information in the bfile and dfile as needed. The information in the bfile and dfile is then used to create the version under reconstruction.
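The read path just described can be sketched as follows; the (source, offset, length) extent format is an illustrative stand-in for the sparse index entries, not the actual on-disk layout:

```python
def reconstruct_version(sparse_index, bfile: bytes, dfile: bytes) -> bytes:
    """Concatenate the byte ranges the index points at; no delta replay."""
    out = bytearray()
    for source, offset, length in sparse_index:
        blob = bfile if source == "bfile" else dfile
        out += blob[offset:offset + length]
    return bytes(out)
```

The cost of a read is proportional to the number of extents in that version's index, not to the number of delta strings accumulated in the dfile, which is why read latency does not grow as the history grows.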
Thus, according to the present invention, a given version (an updated file) need not be stored in the DMS cluster; rather, as long as the bfile, the dfile and the sparse index (for that version) exist, the actual contents of the version can be reconstructed efficiently and reliably.
As noted above, preferably the first-stage data reduction uses a signature-based algorithm to extract changed data ranges instead of comparing the current changes to the previous data version. This operation minimizes both bandwidth utilization and storage overhead. A convenient algorithm to perform the first-stage data reduction operation is Rsync, which is available as an open source implementation from several online locations, e.g., http://samba.anu.edu.au/rsync/. In an alternative embodiment, or if bandwidth is not a concern, the first-stage data reduction can operate by using any delta differencing algorithm that merely compares the current changes to the previous data version. More generally, any known or later-developed checksum-based delta extraction algorithm may be used.
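A toy illustration of the signature-based idea (not the rsync algorithm itself, which adds a rolling weak checksum so matches are found at every byte offset, plus a strong hash to confirm them): block checksums of the previous version let the sender identify changed ranges without holding the full previous version.

```python
import hashlib

BLOCK = 4  # toy block size; the worked example below uses 400 bytes

def signatures(data: bytes) -> dict:
    """Checksum each fixed-size block of the previous version."""
    return {hashlib.md5(data[i:i + BLOCK]).digest(): i
            for i in range(0, len(data), BLOCK)}

def delta(sigs: dict, new: bytes):
    """Yield ('copy', old_offset, length) or ('literal', bytes) pieces."""
    i, lit = 0, bytearray()
    while i < len(new):
        block = new[i:i + BLOCK]
        hit = sigs.get(hashlib.md5(block).digest()) if len(block) == BLOCK else None
        if hit is not None:
            if lit:
                yield ("literal", bytes(lit))
                lit = bytearray()
            yield ("copy", hit, BLOCK)
            i += BLOCK
        else:
            lit.append(new[i])
            i += 1
    if lit:
        yield ("literal", bytes(lit))

old, new = b"aaaabbbb", b"aaaaxxbbbb"
print(list(delta(signatures(old), new)))
# [('copy', 0, 4), ('literal', b'xx'), ('copy', 4, 4)]
```

Only the literal ranges (here, the inserted "xx") need to cross the network, which is the source of the bandwidth savings.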
As noted above, an important goal of the present invention is to reduce significantly the amount of storage required for storing data history in an environment where data is consistently changing and the data must be available over a wide area. As will be seen, this goal is achieved by the present invention through the combination of the first-stage and second-stage data reduction, especially where the latter data reduction step is associated with a sparse indexing technique. This multi-stage data reduction ensures that only minimal storage is required for storing data history and that only minimal wide-area-network bandwidth is required for distribution and replication.
The first and second stage data reduction is now illustrated. In an illustrative embodiment, each version of a binary object (such as a file or a database volume) in the DMS has an associated sparse index in its version metadata, defined by the following syntax:
The following table describes a representative delta string syntax that may be implemented to generate the first and second stage delta strings according to the present invention:
The above-described syntax should not be taken to limit the present invention. Any syntax that defines given data insertions, deletions, replacements or other data comparison operations may be used.
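Based on the field widths stated in the example below (a one-byte operation symbol and 4-byte numbers), a plausible binary rendering of the insertion, deletion, and replacement operations is sketched here; the exact wire format may differ.

```python
import struct

def encode_insert(offset: int, data: bytes) -> bytes:
    """'+' offset length data"""
    return b"+" + struct.pack(">II", offset, len(data)) + data

def encode_delete(offset: int, length: int) -> bytes:
    """'-' offset length"""
    return b"-" + struct.pack(">II", offset, length)

def encode_replace(offset: int, old_length: int, data: bytes) -> bytes:
    """'R' offset oldLength newLength data"""
    return b"R" + struct.pack(">III", offset, old_length, len(data)) + data
```

With this rendering, the Version 2 delta of the example that follows would be `encode_insert(400, b"x" * 200)` and the Version 3 delta would be `encode_replace(200, 200, b"y" * 200 + b"z" * 100)`.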
The application of the above-identified syntax according to the present invention can be illustrated by way of example. Assume that the original data range stored in the host server (e.g., cluster 644 of
By way of example only, the following chart assumes that each content character in the example represents 100 bytes and that the signature-block size used by the first-stage data reduction checksum-based algorithm is 400 bytes. This means that a checksum is generated for each 400 bytes of data. It is also assumed that each delta string symbol (+, −, R) is represented as 1 byte and that each offset and length is a 4-byte number.
As can be seen, the delta file (dfile) is a string into which the second stage delta reduction strings are concatenated as new versions are created. This is a forward delta encoding format. Stated another way, the delta file becomes a “composite” string (or stream) over time, with highly compact encoding. In this example, after Version 3 has been generated, the delta file is a composite of the two (2) second stage delta strings, viz., +400200xx|R200200300yyz.
As can also be seen, a sparse index associated with a given version is a byte range description of the particular version of the file (i.e., the version that exists at a given point in time). Stated another way, the delta file and the associated sparse index enable the system to determine byte level contents of each version of the file at any given point-in-time. Thus, the encoding techniques described by the present invention facilitate any point-in-time “on-demand” recovery of the data.
The above-described examples show one delta string being produced for each version change. This is not a limitation. In practice, a given file update typically results in one or more delta strings being generated. In addition, the number of first stage delta strings need not be the same as the number of second stage delta strings for a given update.
Thus, in the DMS cluster (and in this example), Version 2 generates a first stage delta string of +400 200 “xx,” which indicates that the data “xx” is of length 200 and is inserted at a given offset 400. The second stage delta string has a similar value, as typically an “insertion” does not reduce the size of the first stage delta string. As can be seen, the Version 2 sparse index corresponds to the Version 2 content (with “+” being one byte, and both 400 and 200 being 4-byte numbers). In particular, the Version 2 sparse index identifies that the first four character positions (byte range 0-399) of the Version 2 content are found in the original binary file (bfile); that the next 2 characters (byte range 400-599) of the Version 2 content are found in the delta file for this version at offset “9” (in this encoding “+” is represented as 1 byte and both “400” and “200” are represented as 4-byte numbers, so the “xx” data begins 9 bytes into the delta string); and that the final four character positions (byte range 600-999) of the Version 2 content are found in the final four character positions of the original binary file (bfile). Thus, as can be seen, the sparse index provides byte level descriptions from which the actual data comprising the Version 2 content can be reconstructed.
With Version 3, the first stage delta string reflects a replace function R, in this case that the new data (aayyz) is of length 500 and is replacing old data (aaaa) of length 400 at a given offset (0) (at the front of the binary file). The second stage delta string is then generated by comparing the first stage delta string R 0 400 500 “aayyz” with the original binary string to create a further reduced string, in this case a string that reflects that new data (yyz) is of length 300 and is replacing old data (aa) of length 200 at a given offset (200). Once again, the Version 3 sparse index provides the byte range descriptions of the Version 3 content. Thus, the first two characters (byte range 0-199) are from the original binary file at the positions indicated, the next three characters (byte range 200-499) are identified from the composite delta file (dfile) at the identified offset, the next two characters (byte range 500-699) are identified from the composite delta file at the identified offset, and then the final characters (byte range 700-1099) are identified from the original binary file as indicated.
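The example can be checked numerically. Below, each character of content is expanded to its 100 bytes, the dfile is built from the two second-stage delta strings (using the illustrative binary rendering sketched earlier), and both versions are read back through their sparse indices:

```python
import struct

bfile = b"a" * 400 + b"b" * 400                                # Version 1
d1 = b"+" + struct.pack(">II", 400, 200) + b"x" * 200           # +400 200 "xx"
d2 = b"R" + struct.pack(">III", 200, 200, 300) + b"y" * 200 + b"z" * 100
dfile = d1 + d2                                                 # composite

v2_index = [("bfile", 0, 400), ("dfile", 9, 200), ("bfile", 400, 400)]
v3_index = [("bfile", 0, 200), ("dfile", len(d1) + 13, 300),    # "yyz" at 222
            ("dfile", 9, 200), ("bfile", 400, 400)]

def read(index):
    return b"".join((bfile if s == "bfile" else dfile)[o:o + n]
                    for s, o, n in index)

assert read(v2_index) == b"a" * 400 + b"x" * 200 + b"b" * 400   # 1000 bytes
assert read(v3_index) == (b"a" * 200 + b"y" * 200 + b"z" * 100
                          + b"x" * 200 + b"b" * 400)            # 1100 bytes
```

Neither the Version 2 nor the Version 3 content is stored anywhere; both are recovered entirely from the bfile, the composite dfile, and the per-version indices.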
As can be seen, in the host server a large portion of a file or a database may be modified, while at the DMS typically only a small amount of data is written to the storage. Moreover, as noted above, the data written to storage is typically just new metadata (including the sparse index for the version) and a new sequence (the one or more second-stage delta strings) appended to the dfile. To generate a new sparse index, only the last version of the sparse index needs to be retrieved and modified according to the semantics indicated in the new second-stage delta string(s). As compared to the host server, however, only a very small amount of storage and I/O bandwidth is used in the DMS.
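To make the index update concrete: retrieving the last version's sparse index and modifying it per a new second-stage delta string amounts to extent splitting. Below is a sketch for a single replace delta (a pure insertion is the old_len == 0 case), using the same illustrative extent format as above; it reproduces the Version 2 and Version 3 indices of the example.

```python
def apply_replace(index, offset, old_len, new_len, dfile_data_offset):
    """Split extents around [offset, offset + old_len) and splice in a
    dfile extent pointing at the newly appended delta data."""
    new_index, pos, inserted = [], 0, False
    for source, src_off, length in index:
        start, end = pos, pos + length
        if start < offset:                      # keep head of this extent
            new_index.append((source, src_off, min(length, offset - start)))
        if not inserted and end > offset:       # splice in the new data once
            new_index.append(("dfile", dfile_data_offset, new_len))
            inserted = True
        tail_from = offset + old_len
        if end > tail_from:                     # keep tail of this extent
            skip = max(0, tail_from - start)
            new_index.append((source, src_off + skip, length - skip))
        pos = end
    if not inserted:                            # pure append at end of file
        new_index.append(("dfile", dfile_data_offset, new_len))
    return new_index

v1 = [("bfile", 0, 800)]
v2 = apply_replace(v1, 400, 0, 200, 9)          # "+400 200 xx"
v3 = apply_replace(v2, 200, 200, 300, 222)      # "R 200 200 300 yyz"
assert v2 == [("bfile", 0, 400), ("dfile", 9, 200), ("bfile", 400, 400)]
assert v3 == [("bfile", 0, 200), ("dfile", 222, 300),
              ("dfile", 9, 200), ("bfile", 400, 400)]
```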
Of course, the above-identified description is merely representative of the desired encoding properties provided by the second stage delta string and the associated sparse index. The specific examples are not meant to limit the present invention in any way.
The following table illustrates several additional examples of how the delta string syntax is used to generate representative first stage and second stage delta strings:
Once configured, the Delta1-handle continues to accept requests from the host driver. This is step 714. At step 716, a test is performed to determine the request type. If the request type is a WRITE request (a request to accumulate updated data) in the form of WRITE (offset, length, data), the routine branches to step 718. At this step, any data changes to the version are accumulated. If the request type is a SIGNATURE request (a request to accumulate the signatures from the last version for delta computation, in the form of SIGNATURE (blockOffset, blockRange, arrayofSignatures)), the routine branches to step 720. At this step, the signatures of the previous data version that are relevant to the changes are accumulated. As will be described in more detail below, the host driver can determine if the needed signatures are available with the Delta1-handle; if not, the host driver preferably obtains the signature from the DMS core. Once changes are completed (typically upon a checkpoint event) and the relevant signatures of the last version are acquired, the routine performs a COMPUTE function at step 722 using the data changes and the signatures as needed. This generates the first stage delta string. Once the first-stage delta strings are generated, the host driver can forward those delta strings to the DMS core and terminate the process, which is indicated by step 724.
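A schematic rendering of this request loop follows. The class and method names are illustrative, and the COMPUTE step here merely surfaces the accumulated changes rather than running the checksum-based first-stage algorithm sketched earlier.

```python
class Delta1Handle:
    def __init__(self):
        self.writes = []        # WRITE(offset, length, data) accumulations
        self.signatures = {}    # last-version block signatures, by block no.

    def write(self, offset, data):                              # step 718
        self.writes.append((offset, data))

    def add_signatures(self, block_offset, block_range, sigs):  # step 720
        for i, sig in enumerate(sigs):
            self.signatures[block_offset + i] = sig

    def compute(self):                                          # step 722
        # Placeholder: a real implementation combines self.writes with
        # self.signatures to emit first-stage delta strings, which are
        # then forwarded to the DMS core (step 724).
        return [("+", offset, len(data), data)
                for offset, data in self.writes]

handle = Delta1Handle()
handle.write(400, b"x" * 200)
print(handle.compute())
```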
If the outcome of the test at step 820 indicates that the event type is XDMP, the routine performs a test at step 838 to determine if the response corresponds to a request for signatures and if signature(s) are available. If yes, the routine branches to step 839 to put the signature(s) into the Delta1-handle (as in step 718 in
The use of forward delta encoding in combination with the sparse index provides significant advantages over the prior art. In particular, the multi-stage delta reduction as has been described significantly reduces the amount of storage required for storing data history. As is known, a forward delta management system maintains an initial baseline of the file, as well as a list of deltas that come after the baseline. In a forward delta management system, deltas are always appended to a delta document sequentially. The advantage of such a system is that, as deltas arrive, the system need only append the deltas to the tail end of a delta document. As applied in the present invention, a given DMS node can simply append the delta strings to a delta file as it provides a data protection service. The DMS system can also transfer the delta strings to other DMS nodes or external hosts to provide a data replication service. When a user tries to access a file (or when a host needs to recover its lost data at a specific point-in-time or the most current point-in-time), the DMS node must, at runtime, take the baseline of the file and then apply the necessary delta strings “on the fly” to generate the requested point-in-time data. As mentioned above, this process is quite difficult to accomplish in an efficient manner in the prior art because read latency is very high as the number of delta strings increases. The present invention, however, solves this problem by providing the associated sparse index. By using the sparse index, the DMS can identify the exact contents of a particular version of the file at any given point-in-time in a computationally-efficient manner. Moreover, because the invention uses significantly lower I/O bandwidth at the DMS (as compared to the I/O bandwidth requirements of the corresponding update at the host server), many servers can be protected (by DMS) concurrently. The present invention also allows the DMS nodes to perform data replication over local or wide area networks with minimal bandwidth.
Each of the first and second stage data reduction modules as described above is conveniently implemented as computer software, i.e., a set of program instructions and associated data structures. This is not a requirement of the invention, as the functionality described above (or any part thereof) may be implemented in firmware, in general purpose or special-purpose hardware, or in combinations of hardware and software.
While a multi-stage data reduction approach is desirable, this is not a limitation of the present invention. In an alternative embodiment, there are no second-stage delta strings, in which case only the first-stage delta strings are maintained with the sparse index. In yet another alternative embodiment, changes collected from the host are sent to the DMS without performing first-stage delta reduction, in which case the changes are compared against the previous data version in the DMS to generate delta strings and the sparse index.
Yet other variants are also within the scope of the present invention. Thus, while the delta file (dfile) has been described as a concatenation of the second stage delta strings (i.e., a stream), this is not necessarily a requirement of the invention. The second stage delta strings may be stored separately or otherwise disassociated from one another if desired.
While the present invention has been described in the context of a method or process, the present invention also relates to apparatus for performing the operations herein. As described above, this apparatus may be specially constructed for the required purposes, or it may comprise a general purpose computer selectively activated or reconfigured by a computer program stored in the computer. Such a computer program may be stored in a computer readable storage medium, such as, but not limited to, any type of disk including an optical disk, a CD-ROM, a magnetic-optical disk, a read-only memory (ROM), a random access memory (RAM), a magnetic or optical card, or any type of media suitable for storing electronic instructions, and each coupled to a computer system bus.
While the above written description also describes a particular order of operations performed by certain embodiments of the invention, it should be understood that such order is exemplary, as alternative embodiments may perform the operations in a different order, combine certain operations, overlap certain operations, or the like. References in the specification to a given embodiment indicate that the embodiment described may include a particular feature, structure, or characteristic, but every embodiment may not necessarily include the particular feature, structure, or characteristic.
While given components of the system have been described separately, one of ordinary skill also will appreciate that some of the functions may be combined or shared in given instructions, program sequences, code portions, and the like.
One of ordinary skill will also appreciate that the techniques of the present invention can be implemented in any data storage device or system, or across sets of such devices or systems. More generally, the present invention can be applied on a file system, on a raw volume, or with respect to any storage devices in which any logical or physical data structures are used.
Having described my invention, what I now claim is as follows.
This patent application is a continuation of U.S. patent application Ser. No. 10/943,541, filed on Sep. 17, 2004, now U.S. Pat. No. 7,979,404. This patent application is related to commonly-owned U.S. Pat. No. 7,096,392, issued Aug. 22, 2006. This patent application is related to commonly-owned U.S. Pat. No. 7,565,661, issued Jul. 21, 2009. U.S. patent application Ser. No. 10/943,541 and U.S. Pat. No. 7,565,661 are incorporated herein by reference.
| Number | Name | Date | Kind |
|---|---|---|---|
| 3555184 | Townley | Jan 1971 | A |
| 3555195 | Rester et al. | Jan 1971 | A |
| 3555204 | Braun | Jan 1971 | A |
| 3555251 | Shavit | Jan 1971 | A |
| 3648250 | Low et al. | Mar 1972 | A |
| 4162536 | Morley | Jul 1979 | A |
| 4402045 | Krol | Aug 1983 | A |
| 4415792 | Jordan | Nov 1983 | A |
| 4450556 | Boleda et al. | May 1984 | A |
| 4451108 | Skidmore | May 1984 | A |
| 4455483 | Schonhuber | Jun 1984 | A |
| 4502082 | Ragle et al. | Feb 1985 | A |
| 4512020 | Krol et al. | Apr 1985 | A |
| 4796260 | Schilling et al. | Jan 1989 | A |
| 4882737 | Dzung | Nov 1989 | A |
| 4916450 | Davis | Apr 1990 | A |
| 4972474 | Sabin | Nov 1990 | A |
| 5005197 | Parsons et al. | Apr 1991 | A |
| 5148479 | Bird et al. | Sep 1992 | A |
| 5177796 | Feig et al. | Jan 1993 | A |
| 5224212 | Rosenthal et al. | Jun 1993 | A |
| 5274508 | Tan et al. | Dec 1993 | A |
| 5280584 | Caesar et al. | Jan 1994 | A |
| 5287504 | Carpenter et al. | Feb 1994 | A |
| 5303393 | Noreen et al. | Apr 1994 | A |
| 5305326 | Solomon et al. | Apr 1994 | A |
| 5311197 | Sorden et al. | May 1994 | A |
| 5319395 | Larky et al. | Jun 1994 | A |
| 5321699 | Endoh et al. | Jun 1994 | A |
| 5363371 | Roy et al. | Nov 1994 | A |
| 5365516 | Jandrell | Nov 1994 | A |
| 5373372 | Loewen | Dec 1994 | A |
| 5377102 | Nishiishigaki | Dec 1994 | A |
| 5382508 | Ikenoue | Jan 1995 | A |
| 5386422 | Endoh et al. | Jan 1995 | A |
| 5387994 | McCormack et al. | Feb 1995 | A |
| 5388074 | Buckenmaier | Feb 1995 | A |
| 5392209 | Eason et al. | Feb 1995 | A |
| 5396600 | Thompson et al. | Mar 1995 | A |
| 5416831 | Chewning, III et al. | May 1995 | A |
| 5424778 | Sugiyama et al. | Jun 1995 | A |
| 5430830 | Frank et al. | Jul 1995 | A |
| 5440686 | Dahman et al. | Aug 1995 | A |
| 5469444 | Endoh et al. | Nov 1995 | A |
| 5477492 | Ohsaki et al. | Dec 1995 | A |
| 5479654 | Squibb | Dec 1995 | A |
| 5481531 | Yamamuro | Jan 1996 | A |
| 5499512 | Jurewicz et al. | Mar 1996 | A |
| 5502491 | Sugiyama et al. | Mar 1996 | A |
| 5506965 | Naoe | Apr 1996 | A |
| 5507024 | Richards, Jr. | Apr 1996 | A |
| 5511212 | Rockoff | Apr 1996 | A |
| 5526357 | Jandrell | Jun 1996 | A |
| 5537945 | Sugihara et al. | Jul 1996 | A |
| 5560033 | Doherty et al. | Sep 1996 | A |
| 5561671 | Akiyama | Oct 1996 | A |
| 5583975 | Naka et al. | Dec 1996 | A |
| 5602638 | Boulware | Feb 1997 | A |
| 5606601 | Witzman et al. | Feb 1997 | A |
| 5640159 | Furlan et al. | Jun 1997 | A |
| 5644763 | Roy | Jul 1997 | A |
| 5651129 | Yokote et al. | Jul 1997 | A |
| 5657398 | Guilak | Aug 1997 | A |
| 5678042 | Pisello et al. | Oct 1997 | A |
| 5684536 | Sugiyama et al. | Nov 1997 | A |
| 5684693 | Li | Nov 1997 | A |
| 5684774 | Yamamuro | Nov 1997 | A |
| 5724241 | Wood et al. | Mar 1998 | A |
| 5729743 | Squibb | Mar 1998 | A |
| 5737399 | Witzman et al. | Apr 1998 | A |
| 5742509 | Goldberg et al. | Apr 1998 | A |
| 5742915 | Stafford | Apr 1998 | A |
| 5754772 | Leaf | May 1998 | A |
| 5764691 | Hennedy et al. | Jun 1998 | A |
| 5768159 | Belkadi et al. | Jun 1998 | A |
| 5778370 | Emerson | Jul 1998 | A |
| 5781612 | Choi et al. | Jul 1998 | A |
| 5784366 | Apelewicz | Jul 1998 | A |
| 5794252 | Bailey et al. | Aug 1998 | A |
| 5805155 | Allibhoy et al. | Sep 1998 | A |
| 5812130 | Van Huben et al. | Sep 1998 | A |
| RE35920 | Sorden et al. | Oct 1998 | E |
| 5819020 | Beeler, Jr. | Oct 1998 | A |
| 5822749 | Agarwal | Oct 1998 | A |
| 5826265 | Van Huben et al. | Oct 1998 | A |
| 5831903 | Ohuchi et al. | Nov 1998 | A |
| 5841717 | Yamaguchi | Nov 1998 | A |
| 5841771 | Irwin et al. | Nov 1998 | A |
| 5848072 | Prill et al. | Dec 1998 | A |
| 5854834 | Gottlieb et al. | Dec 1998 | A |
| 5862136 | Irwin | Jan 1999 | A |
| 5864875 | Van Huben et al. | Jan 1999 | A |
| 5877742 | Klink | Mar 1999 | A |
| 5878408 | Van Huben et al. | Mar 1999 | A |
| 5893119 | Squibb | Apr 1999 | A |
| 5894494 | Davidovici | Apr 1999 | A |
| 5909435 | Apelewicz | Jun 1999 | A |
| 5917429 | Otis, Jr. et al. | Jun 1999 | A |
| 5918248 | Newell et al. | Jun 1999 | A |
| 5920867 | Van Huben et al. | Jul 1999 | A |
| 5920873 | Van Huben et al. | Jul 1999 | A |
| 5928327 | Wang et al. | Jul 1999 | A |
| 5930732 | Domanik et al. | Jul 1999 | A |
| 5930762 | Masch | Jul 1999 | A |
| 5931928 | Brennan et al. | Aug 1999 | A |
| 5937168 | Anderson et al. | Aug 1999 | A |
| 5940823 | Schreiber et al. | Aug 1999 | A |
| 5950201 | Van Huben et al. | Sep 1999 | A |
| 5953729 | Cabrera et al. | Sep 1999 | A |
| 5958010 | Agarwal et al. | Sep 1999 | A |
| 5966707 | Van Huben et al. | Oct 1999 | A |
| 5974563 | Beeler, Jr. | Oct 1999 | A |
| 5980096 | Thalhammer-Reyero | Nov 1999 | A |
| 5999562 | Hennedy et al. | Dec 1999 | A |
| 6005846 | Best et al. | Dec 1999 | A |
| 6005860 | Anderson et al. | Dec 1999 | A |
| 6031848 | Brennan | Feb 2000 | A |
| 6035297 | Van Huben et al. | Mar 2000 | A |
| 6047323 | Krause | Apr 2000 | A |
| 6065018 | Beier et al. | May 2000 | A |
| 6072185 | Arai et al. | Jun 2000 | A |
| 6088693 | Van Huben et al. | Jul 2000 | A |
| 6094654 | Van Huben et al. | Jul 2000 | A |
| 6108318 | Kolev et al. | Aug 2000 | A |
| 6108410 | Reding et al. | Aug 2000 | A |
| 6154847 | Schofield et al. | Nov 2000 | A |
| 6158019 | Squibb | Dec 2000 | A |
| 6163856 | Dion et al. | Dec 2000 | A |
| 6178121 | Maruyama | Jan 2001 | B1 |
| 6181609 | Muraoka | Jan 2001 | B1 |
| 6189016 | Cabrera et al. | Feb 2001 | B1 |
| 6237122 | Maki | May 2001 | B1 |
| 6243348 | Goodberlet | Jun 2001 | B1 |
| 6249824 | Henrichs | Jun 2001 | B1 |
| 6366926 | Pohlmann et al. | Apr 2002 | B1 |
| 6366988 | Skiba et al. | Apr 2002 | B1 |
| 6389427 | Faulkner | May 2002 | B1 |
| 6393582 | Klecka et al. | May 2002 | B1 |
| 6397242 | Devine et al. | May 2002 | B1 |
| 6446136 | Pohlmann et al. | Sep 2002 | B1 |
| 6460055 | Midgley et al. | Oct 2002 | B1 |
| 6463565 | Kelly et al. | Oct 2002 | B1 |
| 6487561 | Ofek et al. | Nov 2002 | B1 |
| 6487581 | Spence et al. | Nov 2002 | B1 |
| 6496944 | Hsiao et al. | Dec 2002 | B1 |
| 6502133 | Baulier et al. | Dec 2002 | B1 |
| 6519612 | Howard et al. | Feb 2003 | B1 |
| 6526418 | Midgley et al. | Feb 2003 | B1 |
| 6549916 | Sedlar | Apr 2003 | B1 |
| 6611867 | Bowman-Amuah | Aug 2003 | B1 |
| 6625623 | Midgley et al. | Sep 2003 | B1 |
| 6629109 | Koshisaka | Sep 2003 | B1 |
| 6640145 | Hoffberg | Oct 2003 | B2 |
| 6670974 | McKnight et al. | Dec 2003 | B1 |
| RE38410 | Hersch et al. | Jan 2004 | E |
| 6751753 | Nguyen et al. | Jun 2004 | B2 |
| 6769074 | Vaitzblit | Jul 2004 | B2 |
| 6779003 | Midgley et al. | Aug 2004 | B1 |
| 6785786 | Gold et al. | Aug 2004 | B1 |
| 6807550 | Li et al. | Oct 2004 | B1 |
| 6816872 | Squibb | Nov 2004 | B1 |
| 6823336 | Srinivasan et al. | Nov 2004 | B1 |
| 6826711 | Moulton et al. | Nov 2004 | B2 |
| 6836756 | Gruber | Dec 2004 | B1 |
| 6839721 | Schwols | Jan 2005 | B2 |
| 6839740 | Kiselev | Jan 2005 | B1 |
| 6847984 | Midgley et al. | Jan 2005 | B1 |
| 6907551 | Katagiri et al. | Jun 2005 | B2 |
| 6993706 | Cook | Jan 2006 | B2 |
| 7028078 | Sharma et al. | Apr 2006 | B1 |
| 7039663 | Federwisch et al. | May 2006 | B1 |
| 7054913 | Kiselev | May 2006 | B1 |
| 7069579 | Delpuch | Jun 2006 | B2 |
| 7080081 | Agarwal et al. | Jul 2006 | B2 |
| 7092396 | Lee et al. | Aug 2006 | B2 |
| 7096392 | Sim-Tang | Aug 2006 | B2 |
| 7200233 | Keller et al. | Apr 2007 | B1 |
| 7206805 | McLaughlin, Jr. | Apr 2007 | B1 |
| 7207224 | Rutt et al. | Apr 2007 | B2 |
| 7272613 | Sim et al. | Sep 2007 | B2 |
| 7290056 | McLaughlin, Jr. | Oct 2007 | B1 |
| 7325159 | Stager et al. | Jan 2008 | B2 |
| 7363549 | Sim-Tang | Apr 2008 | B2 |
| 7519870 | Sim-Tang | Apr 2009 | B1 |
| 7565661 | Sim-Tang | Jul 2009 | B2 |
| 7680834 | Sim-Tang | Mar 2010 | B1 |
| 7689602 | Sim-Tang | Mar 2010 | B1 |
| 7788521 | Sim-Tang | Aug 2010 | B1 |
| 7904913 | Sim-Tang et al. | Mar 2011 | B2 |
| 7979404 | Sim-Tang | Jul 2011 | B2 |
| 7979441 | Sim-Tang | Jul 2011 | B2 |
| 8060889 | Sim-Tang | Nov 2011 | B2 |
| 20010029520 | Miyazaki et al. | Oct 2001 | A1 |
| 20010043522 | Park | Nov 2001 | A1 |
| 20010056362 | Hanagan et al. | Dec 2001 | A1 |
| 20020022982 | Cooperstone et al. | Feb 2002 | A1 |
| 20020091722 | Gupta et al. | Jul 2002 | A1 |
| 20020107860 | Gobeille et al. | Aug 2002 | A1 |
| 20020144177 | Kondo et al. | Oct 2002 | A1 |
| 20020147807 | Raguseo | Oct 2002 | A1 |
| 20020172222 | Ullmann et al. | Nov 2002 | A1 |
| 20020178397 | Ueno et al. | Nov 2002 | A1 |
| 20020199152 | Garney et al. | Dec 2002 | A1 |
| 20030004947 | Coverston | Jan 2003 | A1 |
| 20030009552 | Benfield et al. | Jan 2003 | A1 |
| 20030051026 | Carter et al. | Mar 2003 | A1 |
| 20030088372 | Caulfield | May 2003 | A1 |
| 20030117916 | Makela et al. | Jun 2003 | A1 |
| 20030200098 | Geipel et al. | Oct 2003 | A1 |
| 20030204515 | Shadmon et al. | Oct 2003 | A1 |
| 20030225825 | Healey et al. | Dec 2003 | A1 |
| 20040010544 | Slater et al. | Jan 2004 | A1 |
| 20040036716 | Jordahl | Feb 2004 | A1 |
| 20040047354 | Slater et al. | Mar 2004 | A1 |
| 20040080504 | Salesky et al. | Apr 2004 | A1 |
| 20040098458 | Husain et al. | May 2004 | A1 |
| 20040098717 | Husain et al. | May 2004 | A1 |
| 20040098728 | Husain et al. | May 2004 | A1 |
| 20040098729 | Husain et al. | May 2004 | A1 |
| 20040117715 | Ha et al. | Jun 2004 | A1 |
| 20040133487 | Hanagan et al. | Jul 2004 | A1 |
| 20040193594 | Moore et al. | Sep 2004 | A1 |
| 20040199486 | Gopinath et al. | Oct 2004 | A1 |
| 20040250212 | Fish | Dec 2004 | A1 |
| 20050001911 | Suzuki | Jan 2005 | A1 |
| 20050021690 | Peddada | Jan 2005 | A1 |
| 20050076066 | Stakutis et al. | Apr 2005 | A1 |
| 20050166179 | Vronay et al. | Jul 2005 | A1 |
| 20050240592 | Mamou et al. | Oct 2005 | A1 |
| 20050251540 | Sim-Tang | Nov 2005 | A1 |
| 20050262097 | Sim-Tang et al. | Nov 2005 | A1 |
| 20050262188 | Mamou et al. | Nov 2005 | A1 |
| 20050286440 | Strutt et al. | Dec 2005 | A1 |
| 20060020586 | Prompt et al. | Jan 2006 | A1 |
| 20060026220 | Margolus | Feb 2006 | A1 |
| 20060050970 | Gunatilake | Mar 2006 | A1 |
| 20060064416 | Sim-Tang | Mar 2006 | A1 |
| 20060101384 | Sim-Tang et al. | May 2006 | A1 |
| 20060130002 | Hirayama et al. | Jun 2006 | A1 |
| 20060137024 | Kim et al. | Jun 2006 | A1 |
| 20060236149 | Nguyen et al. | Oct 2006 | A1 |
| 20060259820 | Swoboda | Nov 2006 | A1 |
| 20060278004 | Rutt et al. | Dec 2006 | A1 |
| 20070067278 | Borodziewicz et al. | Mar 2007 | A1 |
| 20070094312 | Sim-Tang | Apr 2007 | A1 |
| 20070168692 | Quintiliano | Jul 2007 | A1 |
| 20070214191 | Chandrasekaran | Sep 2007 | A1 |
| 20080256138 | Sim-Tang | Oct 2008 | A1 |
| 20100031274 | Sim-Tang | Feb 2010 | A1 |
| 20100146004 | Sim-Tang | Jun 2010 | A1 |
| 20110185227 | Sim-Tang | Jul 2011 | A1 |
| 20110252004 | Sim-Tang | Oct 2011 | A1 |
| 20110252432 | Sim-Tang et al. | Oct 2011 | A1 |
| Number | Date | Country |
|---|---|---|
| WO-9819262 | May 1998 | WO |
| WO-0225443 | Mar 2002 | WO |
| WO-03060774 | Jul 2003 | WO |
| Number | Date | Country |
|---|---|---|
| 20110252004 A1 | Oct 2011 | US |
| | Number | Date | Country |
|---|---|---|---|
| Parent | 10943541 | Sep 2004 | US |
| Child | 12901824 | | US |