A portion of the disclosure of this patent document includes material which is subject to copyright protection. The copyright owner has no objection to the facsimile reproduction by anyone of the patent document or the patent disclosure as it appears in the Patent and Trademark Office patent file or records, but otherwise reserves all copyright rights whatsoever.
This application was filed on the same day as the following applications: U.S. patent application Ser. No. 11/894,756, entitled “SYSTEMS AND METHODS FOR ADAPTIVE COPY ON WRITE,” and U.S. patent application Ser. No. 11/894,739, entitled “SYSTEMS AND METHODS FOR PORTALS INTO SNAPSHOT DATA,” all of which are hereby incorporated by reference in their entirety herein.
1. Field
The systems and methods disclosed herein relate generally to file systems and more specifically to systems and methods for reading objects in a file system.
2. Description of the Related Art
The amount of data stored on digital computing systems has increased dramatically in recent years. Accordingly, users have become increasingly reliant on the storage devices of these computing systems to safely store this data. In order to preserve a copy of the data in case of loss, many users routinely copy some or all of the contents of the storage devices to a backup or archival storage device.
The data stored on the storage devices may be organized as electronic files in a file system. The files may be grouped into directories, with some directories including other directories and/or files. During a backup process, the system typically traverses some or all of the file system to read individual files for transfer to the backup device. However, problems may occur when reading a file system from the storage devices. For example, if the file system includes a large number of relatively small files, the backup system may be latency bound while waiting for each of the individual files to be read from the storage device. Because of the foregoing challenges and limitations, there is a need to provide systems and methods for reading files in a file system.
In general, embodiments of the invention relate to file systems. More specifically, systems and methods embodying the invention provide support for reading objects such as, for example, files in a file system.
An embodiment of the present invention includes a method of traversing objects in a file system. The method may include traversing a portion of the file system to identify an object to be read and to determine a size representative of the object and determining whether to represent the object in a data structure based at least in part on one or more factors including the size representative of the object and a cumulative size of objects currently represented in the data structure. The method may also include prefetching at least a portion of the objects currently represented in the data structure.
Another embodiment of the present invention includes a computer-readable medium on which are stored executable instructions that, when executed by a processor, cause the processor to perform a method for traversing objects in a file system. The method may comprise traversing a portion of the file system to identify an object to be read and to determine a size representative of the object. The method may also comprise determining whether to represent the object in a data structure based at least in part on one or more factors including the size representative of the object and a cumulative size of objects currently represented in the data structure. The method may further include prefetching at least a portion of the objects currently represented in the data structure.
A further embodiment of the present invention includes a system for managing reading of a portion of a file system. The system may comprise a storage device capable of accessing a file system, a memory operably coupled to the storage device, and a processing module operably coupled to the memory and the storage device. The processing module may comprise a prefetch module, a working module, and a data structure capable of representing files in the file system. The prefetch module may be configured to traverse data related to a portion of the file system and to represent a file in the data structure based at least in part on a size of the file and a cumulative size of files currently represented in the data structure. The prefetch module may be further configured to open the file and to prefetch at least a portion of the file. The working module may be configured to read the files represented in the data structure so as to transfer the files from the storage device to the memory.
For purposes of this summary, certain aspects, advantages, and novel features of the invention are described herein. It is to be understood that not necessarily all such advantages may be achieved in accordance with any particular embodiment of the invention. Thus, for example, those skilled in the art will recognize that the invention may be embodied or carried out in a manner that achieves one advantage or group of advantages as taught herein without necessarily achieving other advantages as may be taught or suggested herein.
These and other features will now be described with reference to the drawings summarized above. The drawings and the associated descriptions are provided to illustrate embodiments of the invention and not to limit the scope of the invention. Throughout the drawings, reference numbers may be reused to indicate correspondence between referenced elements. In addition, the first digit of each reference number generally indicates the figure in which the element first appears.
Systems and methods which represent one embodiment of an example application of the invention will now be described with reference to the drawings. Variations to the systems and methods which represent other embodiments will also be described.
For purposes of illustration, some embodiments will be described in the context of a file system, which may be a distributed file system. The present invention is not limited by the type of environment in which the systems and methods are used, however, and the systems and methods may be used in other environments, such as, for example, other file systems, other distributed systems, the Internet, the World Wide Web, a private network for a hospital, a broadcast network for a government agency, an internal network for a corporate enterprise, an intranet, a local area network, a wide area network, a wired network, a wireless network, and so forth. Some of the figures and descriptions, however, relate to an embodiment of the invention wherein the environment is that of a distributed file system. It is also recognized that in other embodiments, the systems and methods may be implemented as a single module and/or implemented in conjunction with a variety of other modules and the like. Moreover, the specific implementations described herein are set forth in order to illustrate, and not to limit, the invention. The scope of the invention is defined by the appended claims and their equivalents.
One example of a distributed file system, in which embodiments of systems and methods described herein may be implemented, is described in U.S. patent application Ser. No. 10/007,003 entitled “SYSTEMS AND METHODS FOR PROVIDING A DISTRIBUTED FILE SYSTEM UTILIZING METADATA TO TRACK INFORMATION ABOUT DATA STORED THROUGHOUT THE SYSTEM,” filed Nov. 9, 2001, which claims priority to Application No. 60/309,803 filed Aug. 3, 2001, U.S. Pat. No. 7,146,524 entitled “SYSTEMS AND METHODS FOR PROVIDING A DISTRIBUTED FILE SYSTEM INCORPORATING A VIRTUAL HOT SPARE,” filed Oct. 25, 2002, and U.S. patent application Ser. No. 10/714,326 entitled “SYSTEMS AND METHODS FOR RESTRIPING FILES IN A DISTRIBUTED FILE SYSTEM,” filed Nov. 14, 2003, which claims priority to Application No. 60/426,464, filed Nov. 14, 2002, all of which are hereby incorporated by reference herein in their entirety.
For purposes of illustration, some embodiments will also be described with reference to updating data structures in a file system using information stored in related data structures of the file system. Embodiments of a file system capable of updating data structures with information stored in related data structures of a file system are disclosed in U.S. patent application Ser. No. 11/255,337, titled “SYSTEMS AND METHODS FOR ACCESSING AND UPDATING DISTRIBUTED DATA,” which is hereby incorporated by reference in its entirety.
For purposes of illustration, certain embodiments of the systems and methods disclosed herein will be described in the example context of backing up a file system to a storage medium. The scope of the disclosure is not limited to file system backups, and in other embodiments, the systems and methods advantageously may be used, for example, for replicating a disk, indexing and/or searching file systems and/or data on a search engine, generating a cryptographic hash function (for example, an md5sum), and so forth. The specific examples described below are set forth to illustrate, and not to limit, various aspects of the disclosure.
I. Overview
In some embodiments, a distributed file system is used to store the file system data. The distributed file system may comprise one or more physical nodes that are configured to intercommunicate via hard-wired connections, via a suitable data network (for example, the Internet), via wireless communications, or by any suitable type of communication as known by those of ordinary skill in the art. In one example, a node of the distributed file system comprises the storage device 110. The archive target 140 may comprise data storage on the same node or on a different node of the distributed file system or may comprise another storage device as discussed above (for example, a tape drive).
In some embodiments, the backup system 100 is configured to transfer a copy of the file system data to a cache 120 before transferring the cached data through a communication medium 130 to the archive target 140. The cache 120 thereby buffers the data waiting to be transferred to the archive target 140 via the communication medium 130. In some embodiments, the cache 120 comprises volatile and/or non-volatile memory with fast data access times. For example, in one embodiment, a 1 GB RAM cache is used. The communication medium 130 may comprise a wired or wireless communication medium. In some embodiments, the communication medium 130 comprises a data network such as a wide-area network or local-area network, the Internet, or the World Wide Web. The communication medium 130 may support communications protocols such as TCP/IP, backup protocols such as NDMP, and/or standards for file access such as NFS or CIFS.
In some backup system embodiments, file system data is read from the storage device 110 to the cache 120 and then the cached data is transferred to the archive target 140. The input/output (I/O) performance of the cache 120 typically is much better than the I/O performance of the storage device 110 (for example, “disk I/O” in
Accordingly, some embodiments of the backup system 100 advantageously may “read ahead” or “prefetch” portions of the file system data before this data is requested by the target archive stream. The file system data may include file data and/or metadata. The prefetched data may be stored on the cache 120 (“cached”) so that it is available when needed for transfer by the communication medium 130 to the archive target 140. Although caching prefetched file system data consumes storage in the cache 120, the caching may improve the performance of the backup system 100 by reducing latency in the disk I/O. Additionally, in certain embodiments, portions of data in each file in a group of files may be prefetched by the backup system 100. The size of the data portions and/or the number of files in the group may be selected so that the network I/O stream does not stall or become latency bound. Such embodiments of the system 100 advantageously may improve the backup performance particularly when the file system includes numerous small files. Further, in certain embodiments, multiple processing threads handle the data and file prefetch and the data transfer to the archive target 140. An advantage of such embodiments is that the prefetch and the data transfer may proceed in parallel, rather than in a serial fashion.
In order to more efficiently utilize cache resources, embodiments of the backup system 100 may optionally implement a “drop behind” procedure in which data (for example, file data and/or metadata) is dropped from the cache 120 after the data has been transferred to the archive target 140. Such embodiments advantageously improve cache utilization and reduce the impact of the prefetch process on other processing threads that also may be attempting to store data in the cache 120.
In the backup system 100 illustrated in
In some embodiments, the processor 105 is remote from the storage device 110, cache 120, and/or the archive target 140. In other embodiments, the processor 105 (or processors) may be included in one or more of the components of the backup system 100. For example, a node of a distributed file system may comprise the processor 105, the storage device 110, and the cache 120. Multiple processors are used in certain embodiments.
The processor 105 may be a general purpose computer using one or more microprocessors, such as, for example, a Pentium processor, a Pentium II processor, a Pentium Pro processor, a Pentium IV processor, an x86 processor, an 8051 processor, a MIPS processor, a Power PC processor, a SPARC processor, an Alpha processor, and so forth. In other embodiments, the processor 105 may be a special purpose computer comprising one or more integrated circuits such as application-specific integrated circuits (ASICs), field-programmable gate arrays (FPGAs), and so forth.
The backup system 100 may be configured to operate with a variety of one or more operating systems that perform standard operating system functions such as accessing, opening, reading, writing, and closing a file. It is recognized that other operating systems may be used, such as, for example, Microsoft® Windows® 3.X, Microsoft® Windows® 98, Microsoft® Windows® 2000, Microsoft® Windows® NT, Microsoft® Windows® Vista®, Microsoft® Windows® CE, Microsoft® Windows® ME, Palm Pilot OS, Apple® MacOS®, Disk Operating System (DOS), UNIX, IRIX, Solaris, SunOS, FreeBSD, Linux®, IBM® OS/2® operating systems, and so forth.
II. Example File System
In general, various embodiments of the disclosed systems and methods relate to reading objects in a file system. In some embodiments, the objects may include files and/or directories. As used herein, a file is a collection of data stored in one unit under a filename. A directory, similar to a file, is a collection of data stored in one unit under a directory name. A directory, however, is a specialized collection of data regarding elements in a file system. In one embodiment, a file system is organized in a tree-like structure. Directories are organized like the branches of trees. Directories may begin with a root directory and/or may include other branching directories (for example, subdirectories). Files resemble the leaves or the fruit of the tree. Files, typically, do not include other elements in the file system, such as files and directories. In other words, files do not typically branch.
In some embodiments of a file system, metadata structures, also referred to as inodes, are used to monitor and manipulate the files and directories within the file system. An inode is a data structure that describes a file or directory and may be stored in a variety of locations including in the storage device 110, other storage devices, and/or in memory. In some embodiments, each inode points to one or more locations on a physical disk that store the data associated with a file or directory. The inode in-memory may include a copy of the on-disk data plus additional data used by the system, including fields associated with the data structure. Although in certain illustrated embodiments an inode represents either a file or a directory, in other embodiments, an inode may include metadata for other elements in a distributed file system, in other distributed systems, in other file systems, or in other systems.
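The inode structure described above can be modeled minimally as follows. This is a sketch only; the class and field names are illustrative assumptions and do not correspond to any actual on-disk format:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Inode:
    # Minimal model of the metadata structure described above; the
    # field names are illustrative assumptions, not an on-disk layout.
    is_directory: bool
    physical_size: int                       # size of the associated data
    block_addresses: List[int] = field(default_factory=list)  # disk locations

# An inode for a small file whose data occupies two disk blocks:
file_inode = Inode(is_directory=False, physical_size=9000,
                   block_addresses=[120, 121])
```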
In embodiments described herein, data in the file system may be organized into data blocks. Conceptually, a data block may be any size of data, such as a single bit, a byte, a gigabyte, or even larger. In general, a data block is the smallest logical unit of data storage in the file system. In some embodiments, a file system may use data block sizes that are different from the native block size of a physical disk. For example, a physical disk may have a native size of 512 bytes, but a file system may address 4096 bytes or 8192 bytes.
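As a concrete illustration of the mismatch between native and file-system block sizes mentioned above, one file-system block spans several physical sectors. The helper below is hypothetical, using the 512-byte and 8192-byte sizes from the example:

```python
DISK_SECTOR_BYTES = 512   # native block size of the physical disk
FS_BLOCK_BYTES = 8192     # block size addressed by the file system
SECTORS_PER_BLOCK = FS_BLOCK_BYTES // DISK_SECTOR_BYTES  # 16 sectors

def block_to_sectors(block_index):
    """Map one file-system block index to its range of disk sectors."""
    first = block_index * SECTORS_PER_BLOCK
    return range(first, first + SECTORS_PER_BLOCK)
```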
In other embodiments, the portion of the file system 200 may have a different number of files and/or directories than is shown in
III. Example Backup Systems and Methods
As described above, certain embodiments of the backup system may utilize multiple processing threads in order to at least partially parallelize prefetching file system data from the storage device 110 and transferring the data to the archive target 140 via the communication medium 130. One possible advantage of such embodiments is that performance of the backup process may be improved, particularly when the file system includes many small files and/or when disk I/O performance is less than network I/O performance (see, for example,
In certain embodiments, two separate processing threads are utilized by the system: a “prefetch thread” and a “work thread.” In certain such embodiments, the prefetch thread and/or the work thread may be executed by the processor 105 shown in
In some embodiments, the prefetch thread sleeps when the amount of prefetched data exceeds a first threshold referred to as a “high water mark” (HWM). As the work thread transfers the prefetched data, the amount of untransferred, cached data decreases. In some embodiments, the prefetch thread wakes when the amount of untransferred, cached data decreases below a second threshold referred to as a “low water mark” (LWM). The high water mark and/or the low water mark may be selected, for example, so that the prefetch thread uses a reasonable amount of memory in the cache 120 for storing the prefetched data, and/or so that the work thread does not stall while waiting for data to transfer. In some embodiments, a drop-behind procedure is used to drop data from the cache 120 after the file system object has been transferred to the archive target 140.
Although certain embodiments of the backup system use two processing threads, other embodiments may use a different number of threads including one, three, four, six, fifteen, or more threads. Also, other embodiments of the backup system may combine or allocate differently some or all of the functions performed by the prefetch thread and the work thread. Additional details of these and other embodiments will be further described below.
A. Example Backup Processes
In the example shown in
In certain embodiments, the HWM and/or the LWM can be adjusted by a user, for example, to tune the system so that the backup process uses a reasonable amount of system resources (for example, cache) and/or has a sufficiently low latency. In certain such embodiments, the HWM and/or the LWM may be dynamically adjustable based on, for example, current CPU and memory usage, transfer speed of the communication medium 130, transfer speed of the storage device 110, and so forth. In some embodiments, default values for the LWM may include 10 MB for the physical size of the data and/or 1000 for the number of files. Default values for the HWM may include 20 MB for the physical size of the data and/or 2000 for the number of files.
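The user-adjustable thresholds and the default values quoted above might be grouped into a configuration record along the following lines (a sketch; the class and field names are assumptions):

```python
from dataclasses import dataclass

MB = 2 ** 20

@dataclass
class WatermarkConfig:
    # Defaults match the example values given in the text; a user (or a
    # dynamic tuning policy) may override any of them.
    lwm_bytes: int = 10 * MB    # low water mark, physical size of data
    lwm_files: int = 1000       # low water mark, number of files
    hwm_bytes: int = 20 * MB    # high water mark, physical size of data
    hwm_files: int = 2000       # high water mark, number of files
```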
The prefetch thread traverses a portion of the file system to identify an object to be prefetched. In the example illustrated in
During the traversal, the prefetch thread determines a size that is representative of the object (for example, file or directory data and/or metadata). As discussed above, the size may represent the physical size of the data and/or metadata associated with the object on the storage device 110. Because the physical size of a directory's metadata generally is relatively small compared to files, physical sizes of directories are not taken into account in some embodiments. In other embodiments, the size may represent a numerical count associated with the object. In the example shown in
The prefetch thread determines whether to represent the object in the queue based at least in part on one or more factors including the size determined to be representative of the object and a cumulative size of objects currently represented in the queue. For example, if the cumulative size is less than a threshold (for example, the HWM), the prefetch thread represents the object in the queue. If the cumulative size of the queue exceeds the threshold, the prefetch thread does not add the object to the queue. In the example shown in
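The admission decision described above can be sketched as a simple predicate. The function name, and the use of both a byte threshold and a file-count threshold, are assumptions drawn from the defaults mentioned earlier:

```python
def should_enqueue(cumulative_bytes, cumulative_files, hwm_bytes, hwm_files):
    # The prefetch thread represents the next object in the queue only
    # while the cumulative size of objects already represented remains
    # below the high water mark (here measured both in bytes and in
    # number of files).
    return cumulative_bytes < hwm_bytes and cumulative_files < hwm_files
```

With the 20 MB / 2000-file defaults, for example, a queue already representing 20 MB of data would reject the next object regardless of its size.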
Various states of the queue and the prefetch and work threads will now be described with reference to
The prefetch thread continues to traverse the file system and identifies the next object, which in a depth-first traversal is dir2/. The current size of the queue {1 object, 0 MB} is below the HWM, so in state (ii), the prefetch thread adds a representation of dir2/ (index 2 in
After the prefetch thread represents a particular object in the queue, the prefetch thread may prefetch at least a portion of the data and/or metadata associated with the object. The prefetch thread may prefetch locks, inodes, data blocks, and so forth. The prefetch thread may store the prefetched data and/or metadata in the cache 120 or in any other suitable memory. In some embodiments, the prefetch thread may not prefetch all the data (for example, file data and/or metadata) associated with the object. For example, the prefetch thread may prefetch only the data blocks at the beginning of the object (for example, the first 512 kB).
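The partial prefetch just described (fetching only an object's leading blocks) might be sketched as follows; the function name and the in-memory dict standing in for the cache 120 are assumptions:

```python
PREFETCH_BYTES = 512 * 1024  # prefetch only the leading 512 kB

def prefetch_head(path, cache):
    """Read the beginning of a file into an in-memory cache.

    Models the partial prefetch described above: only the leading data
    blocks are fetched; the remainder is read later during transfer.
    """
    with open(path, "rb") as f:
        cache[path] = f.read(PREFETCH_BYTES)
```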
In state (iv), after the representation of file2 (index 4 in
Returning to state (i), after dir1/ has been represented in the queue, the queue is non-empty, and the work thread wakes and begins transferring the prefetched data to the archive target 140. In embodiments in which the prefetch thread did not prefetch all the data associated with the object, the work thread may issue routine operating system prefetch or read-ahead calls for the data blocks that were not prefetched by the prefetch thread. As shown in states (vi)-(ix), the work thread pointer moves to the right to indicate that objects in the queue have been consumed (for example, transferred to the target archive 140). As the work thread consumes the queue, the work thread may update the cumulative size of the objects currently represented in the queue. In some embodiments, the work thread updates the cumulative size whenever a certain amount of data has been transferred. In one embodiment, the update is performed each time 512 kB of data is transferred by the work thread. As can be seen in the example in
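The batched size update described above, in which the shared count is adjusted once per 512 kB transferred rather than per block, might look like the following (class and method names are illustrative assumptions):

```python
UPDATE_GRANULARITY = 512 * 1024  # update the shared count per 512 kB

class QueueAccounting:
    """Tracks the cumulative size of objects represented in the queue.

    The work thread calls on_transfer() as it consumes data; the shared
    cumulative size is decremented only in 512 kB steps, modeling the
    batched update described above.
    """

    def __init__(self, cumulative_size):
        self.cumulative_size = cumulative_size
        self._untallied = 0   # bytes transferred since the last update

    def on_transfer(self, nbytes):
        self._untallied += nbytes
        while self._untallied >= UPDATE_GRANULARITY:
            self._untallied -= UPDATE_GRANULARITY
            self.cumulative_size -= UPDATE_GRANULARITY
```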
The file2 (index 4 in
As file2 is consumed by the work thread, the cumulative size of the untransferred portion of the queue decreases. To reduce the likelihood that the queue will be entirely consumed by the work thread, in certain embodiments, the work thread wakes the prefetch thread when the cumulative size of the untransferred objects represented in the queue decreases below the LWM threshold. The prefetch thread begins again to traverse the file system, starting from the place in the file system at which the prefetch thread last went to sleep. In some embodiments, a token such as a cookie is used to indicate the place in the file system where the traversal is to start. In a similar fashion as described above, the prefetch thread continues to traverse the file system and to represent objects in the queue until, for example, the HWM is reached (or all of the data being transferred has been transferred).
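The resume-from-token behavior can be sketched with a path-valued cookie; the path-as-cookie encoding and deterministic sort order below are assumptions for illustration:

```python
import os

def walk_files(root, cookie=None):
    """Depth-first file traversal that can resume after a saved position.

    `cookie` is the path at which a previous traversal went to sleep;
    files up to and including it are skipped on resume.
    """
    resumed = cookie is None
    for dirpath, dirnames, filenames in os.walk(root):
        dirnames.sort()                 # deterministic traversal order
        for name in sorted(filenames):
            path = os.path.join(dirpath, name)
            if not resumed:
                if path == cookie:
                    resumed = True      # everything after this is new
                continue
            yield path
```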
In
In this example, when the work thread reaches a point in file2 where 10 MB remain to be transferred, the amount of untransferred data represented in the queue reaches the LWM. In some embodiments, as described above with reference to step (x) of
In some embodiments, the backup system utilizes a drop-behind procedure in which some or all of the data (and/or metadata) blocks that have been transferred to the archive stream are dropped from the cache. These embodiments advantageously allow the freed memory to be used to store additional data prefetched by the prefetch thread or to be used by other processing threads. In the example shown in
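A minimal sketch of the drop-behind step, modeling one object's cached data as a map from block index to block contents (the names and the 8192-byte block size are assumptions):

```python
FS_BLOCK_BYTES = 8192

def drop_transferred_blocks(cached_blocks, bytes_transferred):
    """Free cached data blocks lying wholly below the transfer offset.

    Once the work thread has written a block to the archive stream,
    keeping it cached no longer helps, so it is dropped to free memory
    for the prefetch thread and for other processing threads.
    """
    limit = bytes_transferred // FS_BLOCK_BYTES
    for index in [i for i in cached_blocks if i < limit]:
        del cached_blocks[index]
```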
B. Example Methods for Traversing and Using Objects in a File System
Although certain embodiments described herein relate to backing up objects in a file system to a target archive (for example, a tape drive), other embodiments of the systems and methods may advantageously be used for replicating a disk, indexing and/or searching file systems, generating a cryptographic hash, and so forth.
In state 510, the prefetch thread traverses the file system and gets the next object in a portion of the file system. The portion may include the entire file system. The prefetch thread may use any suitable traversal method such as, for example, a depth-first traversal. The objects in the file system may include files and/or directories. In state 520, the prefetch thread calculates the quantity of data to represent in a data structure such as, for example, a queue. In some embodiments, the prefetch thread determines a size for the object, and a cumulative size for objects currently represented in the data structure. If the cumulative size is below one or more thresholds, which may be, for example, the high water mark (HWM) described above, the prefetch thread represents the object in the data structure. The cumulative size represented by the data structure may be updated to reflect addition of the object. In certain embodiments, the update is performed by the work thread (see, for example, state 640 in
In state 530, the prefetch thread determines whether the work thread is sleeping. If the work thread is asleep, in state 540 the prefetch thread wakes the work thread to begin using the objects represented in the data structure. If the work thread is not asleep, the prefetch thread determines in state 550 whether the cumulative size of the objects represented in the data structure has reached the HWM. If the cumulative size equals or exceeds the HWM, the prefetch thread sleeps (state 560) until awoken by the work thread (see, for example, state 660 in
The following is an example of pseudocode for an embodiment of a prefetch thread. It will be appreciated by one of ordinary skill in the art that there are many ways to implement a prefetch thread.
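The pseudocode listing itself does not appear in this text. The self-contained Python sketch below reconstructs the prefetch-thread behavior described above; the class and variable names, the condition-variable synchronization, and the byte-only high water mark are assumptions:

```python
import threading
from collections import deque

HWM = 20 * 2 ** 20   # high water mark in bytes (default from the text)

class BackupState:
    """Shared state between the prefetch thread and the work thread."""
    def __init__(self):
        self.queue = deque()               # objects represented for transfer
        self.cumulative = 0                # bytes currently represented
        self.cond = threading.Condition()  # guards queue and cumulative
        self.done = False                  # traversal finished

def prefetch_thread(state, objects):
    """Traverse `objects` ((name, size) pairs, e.g. in depth-first
    order), representing each in the queue until the HWM is reached."""
    for name, size in objects:
        with state.cond:
            # Sleep while the queue holds at least HWM bytes; the work
            # thread notifies once consumption falls below the LWM.
            while state.cumulative >= HWM:
                state.cond.wait()
            state.queue.append((name, size))
            state.cumulative += size
            state.cond.notify_all()        # wake a sleeping work thread
        # ... prefetch the object's leading data/metadata blocks here ...
    with state.cond:
        state.done = True                  # let the work thread drain and exit
        state.cond.notify_all()
```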
Some embodiments of the work thread implement a drop-behind procedure. For example, in state 630 the work thread may drop from the cache data and/or metadata that has been transferred to the target stream. As discussed above, in some embodiments, metadata associated with an object is not dropped from the cache until all the data and metadata associated with the object has been transferred.
In state 640, the work thread updates the cumulative size of the queue. In some embodiments the cumulative size is updated based on factors including, for example, the amount of data/metadata transferred since the last update and/or the elapsed time since the last update. In state 650, the work thread determines whether the updated cumulative size of the queue has decreased below one or more thresholds, which may be, for example, the low water mark (LWM) described above. If the cumulative size is below the LWM, in state 655 the work thread determines whether the prefetch thread is sleeping. If the prefetch thread is asleep, in state 660 the work thread wakes up the prefetch thread so that the prefetch thread can continue traversing the file system to represent additional objects in the queue. If the cumulative size of the queue exceeds the LWM, in state 670 the work thread determines whether there are any remaining objects represented in the queue. If the queue is empty, in state 680 the work thread sleeps until awoken, for example, by the prefetch thread in state 540 of
The following is an example of pseudocode for an embodiment of a work thread. It will be appreciated by one of ordinary skill in the art that there are many ways to implement a work thread.
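The work-thread pseudocode is likewise absent from this text. The self-contained sketch below reconstructs the described behavior in Python; the shared-state class, the condition-variable synchronization, and the list standing in for the archive stream are assumptions:

```python
import threading
from collections import deque

LWM = 10 * 2 ** 20   # low water mark in bytes (default from the text)

class BackupState:
    """Shared state between the prefetch thread and the work thread."""
    def __init__(self):
        self.queue = deque()               # objects represented for transfer
        self.cumulative = 0                # bytes currently represented
        self.cond = threading.Condition()  # guards queue and cumulative
        self.done = False                  # set once traversal finishes

def work_thread(state, archive):
    """Consume the queue, transferring each object to the archive."""
    while True:
        with state.cond:
            # Sleep while the queue is empty and traversal is not done.
            while not state.queue and not state.done:
                state.cond.wait()
            if not state.queue:            # drained and traversal done
                return
            name, size = state.queue.popleft()
        archive.append(name)               # stands in for the archive write
        with state.cond:
            state.cumulative -= size
            if state.cumulative < LWM:
                state.cond.notify_all()    # wake a sleeping prefetch thread
```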
IV. Other Embodiments
While certain embodiments of the invention have been described, these embodiments have been presented by way of example only, and are not intended to limit the scope of the present invention.
| 6920494 | Heitman et al. | Jul 2005 | B2 |
| 6922696 | Lincoln et al. | Jul 2005 | B1 |
| 6922708 | Sedlar | Jul 2005 | B1 |
| 6934878 | Massa et al. | Aug 2005 | B2 |
| 6940966 | Lee | Sep 2005 | B2 |
| 6954435 | Billhartz et al. | Oct 2005 | B2 |
| 6990604 | Binger | Jan 2006 | B2 |
| 6990611 | Busser | Jan 2006 | B2 |
| 7007044 | Rafert et al. | Feb 2006 | B1 |
| 7007097 | Huffman et al. | Feb 2006 | B1 |
| 7017003 | Murotani et al. | Mar 2006 | B2 |
| 7043485 | Manley et al. | May 2006 | B2 |
| 7043567 | Trantham | May 2006 | B2 |
| 7069320 | Chang et al. | Jun 2006 | B1 |
| 7103597 | McGoveran | Sep 2006 | B2 |
| 7111305 | Solter et al. | Sep 2006 | B2 |
| 7113938 | Highleyman et al. | Sep 2006 | B2 |
| 7124264 | Yamashita | Oct 2006 | B2 |
| 7146524 | Patel et al. | Dec 2006 | B2 |
| 7152182 | Ji et al. | Dec 2006 | B2 |
| 7177295 | Sholander et al. | Feb 2007 | B1 |
| 7181746 | Perycz et al. | Feb 2007 | B2 |
| 7184421 | Liu et al. | Feb 2007 | B1 |
| 7194487 | Kekre et al. | Mar 2007 | B1 |
| 7206805 | McLaughlin, Jr. | Apr 2007 | B1 |
| 7225204 | Manley et al. | May 2007 | B2 |
| 7228299 | Harmer et al. | Jun 2007 | B1 |
| 7240235 | Lewalski-Brechter | Jul 2007 | B2 |
| 7249118 | Sandler et al. | Jul 2007 | B2 |
| 7257257 | Anderson et al. | Aug 2007 | B2 |
| 7290056 | McLaughlin, Jr. | Oct 2007 | B1 |
| 7313614 | Considine et al. | Dec 2007 | B2 |
| 7318134 | Oliveira et al. | Jan 2008 | B1 |
| 7346346 | Lipsit | Mar 2008 | B2 |
| 7346720 | Fachan | Mar 2008 | B2 |
| 7373426 | Jinmei et al. | May 2008 | B2 |
| 7386675 | Fachan | Jun 2008 | B2 |
| 7386697 | Case et al. | Jun 2008 | B1 |
| 7440966 | Adkins et al. | Oct 2008 | B2 |
| 7451341 | Okaki et al. | Nov 2008 | B2 |
| 7509448 | Fachan et al. | Mar 2009 | B2 |
| 7509524 | Patel et al. | Mar 2009 | B2 |
| 7533298 | Smith et al. | May 2009 | B2 |
| 7546354 | Fan et al. | Jun 2009 | B1 |
| 7546412 | Ahmad et al. | Jun 2009 | B2 |
| 7551572 | Passey et al. | Jun 2009 | B2 |
| 7558910 | Alverson et al. | Jul 2009 | B2 |
| 7571348 | Deguchi et al. | Aug 2009 | B2 |
| 7577258 | Wiseman et al. | Aug 2009 | B2 |
| 7577667 | Hinshaw et al. | Aug 2009 | B2 |
| 7590652 | Passey et al. | Sep 2009 | B2 |
| 7593938 | Lemar et al. | Sep 2009 | B2 |
| 7596713 | Mani-Meitav et al. | Sep 2009 | B2 |
| 7631066 | Schatz et al. | Dec 2009 | B1 |
| 7676691 | Fachan et al. | Mar 2010 | B2 |
| 7680836 | Anderson et al. | Mar 2010 | B2 |
| 7680842 | Anderson et al. | Mar 2010 | B2 |
| 7685126 | Patel et al. | Mar 2010 | B2 |
| 7689597 | Bingham et al. | Mar 2010 | B1 |
| 7716262 | Pallapotu | May 2010 | B2 |
| 7739288 | Lemar et al. | Jun 2010 | B2 |
| 7743033 | Patel et al. | Jun 2010 | B2 |
| 7752402 | Fachan et al. | Jul 2010 | B2 |
| 7756898 | Passey et al. | Jul 2010 | B2 |
| 7779048 | Fachan et al. | Aug 2010 | B2 |
| 7783666 | Zhuge et al. | Aug 2010 | B1 |
| 7788303 | Mikesell et al. | Aug 2010 | B2 |
| 7797283 | Fachan et al. | Sep 2010 | B2 |
| 7822932 | Fachan et al. | Oct 2010 | B2 |
| 7840536 | Ahal et al. | Nov 2010 | B1 |
| 7870345 | Daud et al. | Jan 2011 | B2 |
| 20010042224 | Stanfill et al. | Nov 2001 | A1 |
| 20010047451 | Noble et al. | Nov 2001 | A1 |
| 20010056492 | Bressoud et al. | Dec 2001 | A1 |
| 20020010696 | Izumi | Jan 2002 | A1 |
| 20020029200 | Dulin et al. | Mar 2002 | A1 |
| 20020035668 | Nakano et al. | Mar 2002 | A1 |
| 20020038436 | Suzuki | Mar 2002 | A1 |
| 20020055940 | Elkan | May 2002 | A1 |
| 20020072974 | Pugliese et al. | Jun 2002 | A1 |
| 20020075870 | de Azevedo et al. | Jun 2002 | A1 |
| 20020078161 | Cheng | Jun 2002 | A1 |
| 20020078180 | Miyazawa | Jun 2002 | A1 |
| 20020083078 | Pardon et al. | Jun 2002 | A1 |
| 20020083118 | Sim | Jun 2002 | A1 |
| 20020087366 | Collier et al. | Jul 2002 | A1 |
| 20020095438 | Rising et al. | Jul 2002 | A1 |
| 20020107877 | Whiting et al. | Aug 2002 | A1 |
| 20020124137 | Ulrich et al. | Sep 2002 | A1 |
| 20020138559 | Ulrich et al. | Sep 2002 | A1 |
| 20020156840 | Ulrich et al. | Oct 2002 | A1 |
| 20020156891 | Ulrich et al. | Oct 2002 | A1 |
| 20020156973 | Ulrich et al. | Oct 2002 | A1 |
| 20020156974 | Ulrich et al. | Oct 2002 | A1 |
| 20020156975 | Staub et al. | Oct 2002 | A1 |
| 20020158900 | Hsieh et al. | Oct 2002 | A1 |
| 20020161846 | Ulrich et al. | Oct 2002 | A1 |
| 20020161850 | Ulrich et al. | Oct 2002 | A1 |
| 20020161973 | Ulrich et al. | Oct 2002 | A1 |
| 20020163889 | Yemini et al. | Nov 2002 | A1 |
| 20020165942 | Ulrich et al. | Nov 2002 | A1 |
| 20020166026 | Ulrich et al. | Nov 2002 | A1 |
| 20020166079 | Ulrich et al. | Nov 2002 | A1 |
| 20020169827 | Ulrich et al. | Nov 2002 | A1 |
| 20020170036 | Cobb et al. | Nov 2002 | A1 |
| 20020174295 | Ulrich et al. | Nov 2002 | A1 |
| 20020174296 | Ulrich et al. | Nov 2002 | A1 |
| 20020178162 | Ulrich et al. | Nov 2002 | A1 |
| 20020191311 | Ulrich et al. | Dec 2002 | A1 |
| 20020194523 | Ulrich et al. | Dec 2002 | A1 |
| 20020194526 | Ulrich et al. | Dec 2002 | A1 |
| 20020198864 | Ostermann et al. | Dec 2002 | A1 |
| 20030005159 | Kumhyr | Jan 2003 | A1 |
| 20030009511 | Giotta et al. | Jan 2003 | A1 |
| 20030014391 | Evans et al. | Jan 2003 | A1 |
| 20030033308 | Patel et al. | Feb 2003 | A1 |
| 20030061491 | Jaskiewicz et al. | Mar 2003 | A1 |
| 20030109253 | Fenton et al. | Jun 2003 | A1 |
| 20030120863 | Lee et al. | Jun 2003 | A1 |
| 20030125852 | Schade et al. | Jul 2003 | A1 |
| 20030126522 | English et al. | Jul 2003 | A1 |
| 20030135514 | Patel et al. | Jul 2003 | A1 |
| 20030149750 | Franzenburg | Aug 2003 | A1 |
| 20030158873 | Sawdon et al. | Aug 2003 | A1 |
| 20030161302 | Zimmermann et al. | Aug 2003 | A1 |
| 20030163726 | Kidd | Aug 2003 | A1 |
| 20030172149 | Edsall et al. | Sep 2003 | A1 |
| 20030177308 | Lewalski-Brechter | Sep 2003 | A1 |
| 20030182325 | Manley et al. | Sep 2003 | A1 |
| 20030233385 | Srinivasa et al. | Dec 2003 | A1 |
| 20040003053 | Williams | Jan 2004 | A1 |
| 20040024731 | Cabrera et al. | Feb 2004 | A1 |
| 20040024963 | Talagala et al. | Feb 2004 | A1 |
| 20040078680 | Hu et al. | Apr 2004 | A1 |
| 20040078812 | Calvert | Apr 2004 | A1 |
| 20040117802 | Green | Jun 2004 | A1 |
| 20040133670 | Kaminsky et al. | Jul 2004 | A1 |
| 20040143647 | Cherkasova | Jul 2004 | A1 |
| 20040153479 | Mikesell et al. | Aug 2004 | A1 |
| 20040158549 | Matena et al. | Aug 2004 | A1 |
| 20040174798 | Riguidel et al. | Sep 2004 | A1 |
| 20040189682 | Troyansky et al. | Sep 2004 | A1 |
| 20040199734 | Rajamani et al. | Oct 2004 | A1 |
| 20040199812 | Earl et al. | Oct 2004 | A1 |
| 20040205141 | Goland | Oct 2004 | A1 |
| 20040230748 | Ohba | Nov 2004 | A1 |
| 20040240444 | Matthews et al. | Dec 2004 | A1 |
| 20040260673 | Hitz et al. | Dec 2004 | A1 |
| 20040267747 | Choi et al. | Dec 2004 | A1 |
| 20050010592 | Guthrie | Jan 2005 | A1 |
| 20050033778 | Price | Feb 2005 | A1 |
| 20050044197 | Lai | Feb 2005 | A1 |
| 20050066095 | Mullick et al. | Mar 2005 | A1 |
| 20050114402 | Guthrie | May 2005 | A1 |
| 20050114609 | Shorb | May 2005 | A1 |
| 20050125456 | Hara et al. | Jun 2005 | A1 |
| 20050131860 | Livshits | Jun 2005 | A1 |
| 20050131990 | Jewell | Jun 2005 | A1 |
| 20050138195 | Bono | Jun 2005 | A1 |
| 20050138252 | Gwilt | Jun 2005 | A1 |
| 20050171960 | Lomet | Aug 2005 | A1 |
| 20050171962 | Martin et al. | Aug 2005 | A1 |
| 20050187889 | Yasoshima | Aug 2005 | A1 |
| 20050188052 | Ewanchuk et al. | Aug 2005 | A1 |
| 20050192993 | Messinger | Sep 2005 | A1 |
| 20050289169 | Adya et al. | Dec 2005 | A1 |
| 20050289188 | Nettleton et al. | Dec 2005 | A1 |
| 20060004760 | Clift et al. | Jan 2006 | A1 |
| 20060041894 | Cheng | Feb 2006 | A1 |
| 20060047713 | Gornshtein et al. | Mar 2006 | A1 |
| 20060047925 | Perry | Mar 2006 | A1 |
| 20060053263 | Prahlad et al. | Mar 2006 | A1 |
| 20060059467 | Wong | Mar 2006 | A1 |
| 20060074922 | Nishimura | Apr 2006 | A1 |
| 20060083177 | Iyer et al. | Apr 2006 | A1 |
| 20060095438 | Fachan et al. | May 2006 | A1 |
| 20060101062 | Godman et al. | May 2006 | A1 |
| 20060129584 | Hoang et al. | Jun 2006 | A1 |
| 20060129631 | Na et al. | Jun 2006 | A1 |
| 20060129983 | Feng | Jun 2006 | A1 |
| 20060155831 | Chandrasekaran | Jul 2006 | A1 |
| 20060206536 | Sawdon et al. | Sep 2006 | A1 |
| 20060230411 | Richter et al. | Oct 2006 | A1 |
| 20060277432 | Patel | Dec 2006 | A1 |
| 20060288161 | Cavallo | Dec 2006 | A1 |
| 20060294589 | Achanta et al. | Dec 2006 | A1 |
| 20070038887 | Witte et al. | Feb 2007 | A1 |
| 20070091790 | Passey et al. | Apr 2007 | A1 |
| 20070094269 | Mikesell et al. | Apr 2007 | A1 |
| 20070094277 | Fachan et al. | Apr 2007 | A1 |
| 20070094310 | Passey et al. | Apr 2007 | A1 |
| 20070094431 | Fachan | Apr 2007 | A1 |
| 20070094452 | Fachan | Apr 2007 | A1 |
| 20070168351 | Fachan | Jul 2007 | A1 |
| 20070171919 | Godman et al. | Jul 2007 | A1 |
| 20070192254 | Hinkle | Aug 2007 | A1 |
| 20070195810 | Fachan | Aug 2007 | A1 |
| 20070233684 | Verma et al. | Oct 2007 | A1 |
| 20070233710 | Passey et al. | Oct 2007 | A1 |
| 20070255765 | Robinson | Nov 2007 | A1 |
| 20080005145 | Worrall | Jan 2008 | A1 |
| 20080010507 | Vingralek | Jan 2008 | A1 |
| 20080021907 | Patel et al. | Jan 2008 | A1 |
| 20080031238 | Harmelin et al. | Feb 2008 | A1 |
| 20080034004 | Cisler et al. | Feb 2008 | A1 |
| 20080044016 | Henzinger | Feb 2008 | A1 |
| 20080046432 | Anderson et al. | Feb 2008 | A1 |
| 20080046443 | Fachan et al. | Feb 2008 | A1 |
| 20080046444 | Fachan et al. | Feb 2008 | A1 |
| 20080046445 | Passey et al. | Feb 2008 | A1 |
| 20080046475 | Anderson et al. | Feb 2008 | A1 |
| 20080046476 | Anderson et al. | Feb 2008 | A1 |
| 20080046667 | Fachan et al. | Feb 2008 | A1 |
| 20080059541 | Fachan et al. | Mar 2008 | A1 |
| 20080059734 | Mizuno | Mar 2008 | A1 |
| 20080126365 | Fachan et al. | May 2008 | A1 |
| 20080151724 | Anderson et al. | Jun 2008 | A1 |
| 20080154978 | Lemar et al. | Jun 2008 | A1 |
| 20080155191 | Anderson et al. | Jun 2008 | A1 |
| 20080168304 | Flynn et al. | Jul 2008 | A1 |
| 20080168458 | Fachan et al. | Jul 2008 | A1 |
| 20080243773 | Patel et al. | Oct 2008 | A1 |
| 20080256103 | Fachan et al. | Oct 2008 | A1 |
| 20080256537 | Fachan et al. | Oct 2008 | A1 |
| 20080256545 | Fachan et al. | Oct 2008 | A1 |
| 20080294611 | Anglin et al. | Nov 2008 | A1 |
| 20090055399 | Lu et al. | Feb 2009 | A1 |
| 20090055604 | Lemar et al. | Feb 2009 | A1 |
| 20090055607 | Schack et al. | Feb 2009 | A1 |
| 20090125563 | Wong et al. | May 2009 | A1 |
| 20090210880 | Fachan et al. | Aug 2009 | A1 |
| 20090248756 | Akidau et al. | Oct 2009 | A1 |
| 20090248765 | Akidau et al. | Oct 2009 | A1 |
| 20090248975 | Daud et al. | Oct 2009 | A1 |
| 20090249013 | Daud et al. | Oct 2009 | A1 |
| 20090252066 | Passey et al. | Oct 2009 | A1 |
| 20090327218 | Passey et al. | Dec 2009 | A1 |
| 20100161556 | Anderson et al. | Jun 2010 | A1 |
| 20100161557 | Anderson et al. | Jun 2010 | A1 |
| 20100185592 | Kryger | Jul 2010 | A1 |
| 20100223235 | Fachan | Sep 2010 | A1 |
| 20100235413 | Patel | Sep 2010 | A1 |
| 20100241632 | Lemar et al. | Sep 2010 | A1 |
| 20100306786 | Passey | Dec 2010 | A1 |
| Number | Date | Country |
|---|---|---|
| 0774723 | May 1997 | EP |
| 2006-506741 | Jun 2004 | JP |
| 4464279 | May 2010 | JP |
| 4504677 | Jul 2010 | JP |
| WO 9429796 | Dec 1994 | WO |
| WO 0057315 | Sep 2000 | WO |
| WO 0114991 | Mar 2001 | WO |
| WO 0133829 | May 2001 | WO |
| WO 02061737 | Aug 2002 | WO |
| WO 03012699 | Feb 2003 | WO |
| WO 2004046971 | Jun 2004 | WO |
| WO 2008021527 | Feb 2008 | WO |
| WO 2008021528 | Feb 2008 | WO |
| WO 2008127947 | Oct 2008 | WO |
| Number | Date | Country |
|---|---|---|
| 20090055399 A1 | Feb 2009 | US |