System and method for managing small-size files in an aggregated file system

Information

  • Patent Grant
  • Patent Number
    8,239,354
  • Date Filed
    Thursday, March 3, 2005
  • Date Issued
    Tuesday, August 7, 2012
Abstract
In an aggregated file system, a method of processing a user file retrieves user file metadata and user data from a metadata server and applies operations to the user data in accordance with a file open request from a client. At the end of the process, the method stores the processed user data at a location in accordance with a predefined rule and updates the metadata in the metadata server to reference the processed user data at the location. In some embodiments, the predefined rule is to choose a location between the metadata server and a separate storage server in accordance with the size of the processed user data. If the size is still smaller than a predetermined threshold, the user data is stored in the metadata server. Otherwise, the user data is stored in the storage server.
Description
RELATED APPLICATIONS

This application is related to U.S. patent application Ser. No. 10/043,413, entitled FILE SWITCH AND SWITCHED FILE SYSTEM, filed Jan. 10, 2002, and U.S. Provisional Patent Application No. 60/261,153, entitled FILE SWITCH AND SWITCHED FILE SYSTEM and filed Jan. 11, 2001, both of which are incorporated herein by reference.


FIELD OF THE INVENTION

The present invention relates generally to the field of network associated storage, and more specifically to systems and methods for managing small-size files in an aggregated file system.


BACKGROUND

An aggregated file system is typically used for hosting a large number of user files. Each user file includes two distinct portions, user data and metadata. User data is the actual data of a user file that is requested and processed by a client, while metadata is information characterizing the properties and state of the user data, e.g., its location in the file system. When a file switch receives a file open request for the user file, it first retrieves the metadata from a metadata server that is part of the file system. Based on the metadata, the file switch then retrieves different stripes of the user data from one or more storage servers in response to a subsequent file read/write request and applies operations to them accordingly. At the end of the process, the metadata and user data stripes are stored back in their respective hosting metadata server and storage servers.


When a user file includes a large number of user data stripes, this scheme can improve the throughput of the aggregated file system. However, when the user file is small, e.g., including only a single data stripe, this scheme has a serious impact on the performance of the system. One reason is that even in this case the scheme requires at least two round-trip visits, one from the file switch to a metadata server and the other from the file switch to a storage server. Therefore, there is a need for a more efficient scheme for managing small-size user files in an aggregated file system.


SUMMARY

A method of processing a user file retrieves user file metadata and user data from a metadata server and applies operations to the user data in accordance with a file open request from a client. At the end of the process, the method stores the processed user data at a location in accordance with a predefined rule and updates the metadata in the metadata server to reference the processed user data at the location. In some embodiments, the predefined rule is to choose a location between the metadata server and a separate storage server in accordance with the size of the processed user data. If the size is smaller than a predetermined threshold, the user data is stored in the metadata server. Otherwise, the user data is stored in the storage server.





BRIEF DESCRIPTION OF THE DRAWINGS

The aforementioned features and advantages of the invention as well as additional features and advantages thereof will be more clearly understood hereinafter as a result of a detailed description of embodiments of the invention when taken in conjunction with the drawings.



FIG. 1 is a diagram illustrating an exemplary network environment including an aggregated file system according to some embodiments of the present invention.



FIG. 2 is a flowchart illustrating how an aggregated file system operates in response to a file open request for a small-size user file according to some embodiments of the present invention.



FIG. 3 is a schematic diagram illustrating a file switch of the aggregated file system that is implemented using a computer system according to some embodiments of the present invention.





Like reference numerals refer to corresponding parts throughout the several views of the drawings.


DESCRIPTION OF EMBODIMENTS
Definitions

User File. A “user file” is a file that a client computer works with (e.g., to read, write, or modify the file's contents). A user file may be divided into data stripes and stored in multiple storage servers of an aggregated file system.


Stripe. In the context of a file switch, a “stripe” is a portion of a user file having a fixed size. In some cases, an entire user file will be contained in a single stripe. But if the file being striped is larger than the stripe size, the file will be split into two or more stripes.


Metadata File. In the context of a file switch, a “metadata file” is a file that contains the metadata of a user file and is stored in a designated metadata server. While an ordinary client may not directly access the content of a metadata file by issuing read or write commands, it nonetheless has indirect access to certain metadata information stored therein, such as file layout, file length, etc.


File Switch. A “file switch” is a device performing various file operations in accordance with client instructions. The file switch is logically positioned between a client computer and a set of servers. To the client computer, the file switch appears to be a file storage device having enormous storage capacities and high throughput. To the servers, the file switch appears to be a client computer. The file switch directs the storage of individual user files over the servers, using striping and mirroring techniques to improve the system's throughput and fault tolerance.


Overview


FIG. 1 illustrates an exemplary network environment including a plurality of clients 120, an aggregated file system 150 and a network 130. The network 130 may include the Internet, other wide area networks, local area networks, metropolitan area networks, wireless networks, and the like, or any combination thereof. A client 120 can be a personal computer, a personal digital assistant, a mobile phone, or any equivalents capable of connecting to the network 130. To access a particular user file, a client 120 typically submits one or more file access requests to the aggregated file system 150 through the network 130. The aggregated file system 150, in response, applies certain operations to the requested user file to satisfy the requests.


The aggregated file system 150 includes a group of storage servers 180, one or more metadata servers 170 and a group of file switches 160 having communication channels 165 with the storage servers 180 and the metadata servers 170, respectively. The aggregated file system 150 manages a large number of user files, each one having a unique file name. The aggregated file system 150 may be used to store many types of user files, including user files for storing data (e.g., database files, music files, MPEGs, videos, etc.) and user files that contain applications and programs used by computer users. These user files may range in size from a few bytes to multiple terabytes. Different types of user files may have dramatically distinct client access rates. For example, some files may be accessed very frequently (e.g., more than 50 times per hour on average, with peak access rates of over 100 times per hour) and others may be requested infrequently (e.g., less than once per day on average).


In some embodiments, a user file is typically split into a plurality of data stripes, each data stripe further including multiple stripe fragments with each fragment stored at one of the storage servers 180. The metadata of the user file is stored in a metadata server 170. As mentioned above, this storage scheme is desired for increasing the throughput of the aggregated file system 150, especially when processing an operation associated with a user file having a large amount of user data.


This storage scheme, however, requires a file switch to complete at least two transactions even when accessing a small user file that has only one user data stripe fragment. First, the file switch performs a transaction to retrieve metadata from a metadata server, the metadata including information such as the identity of the storage server hosting the user data stripe fragment. Second, the file switch performs another transaction to retrieve the user data stripe fragment from the hosting storage server.


According to some embodiments, to improve the throughput of the file system when dealing with a small-size user file, the user data and metadata of the user file are no longer stored on two different servers. Instead, the user data resides on the same metadata server where the metadata is located. Further, a single access to the metadata server retrieves both the metadata and the user data to the requesting client and, as a result, the file access overhead is significantly reduced.


Process


FIG. 2 is a flowchart illustrating how an aggregated file system operates in response to a file open request for a small-size user file according to some embodiments of the present invention.


Upon receipt of a file open request for a user file from a client (210), a file switch visits a metadata server to retrieve metadata associated with the user file (220). The metadata includes information about the location of user data associated with the user file and the size of the user data. In some embodiments, if the size of the user data for a particular user file is smaller than a predetermined threshold (e.g., 8KB), the user data is stored in the same metadata server where the metadata is found. Otherwise, the user data is stored in one or more of the storage servers.


Therefore, in the case that the size of the user data is smaller than the threshold, the metadata server returns the user data to the file switch (235). In some embodiments, the user data is cached in the file switch to be processed according to subsequent client requests. Otherwise, the metadata server returns information identifying those storage servers hosting the user data (240). The file switch, in response to a subsequent file read/write request from the client, visits (i.e., sends requests to) the identified storage servers to retrieve one or more of the user data stripe fragments (243, 247).
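The file-open path described above (steps 210 through 247 of FIG. 2) can be sketched as follows. This is a minimal illustration, not the patent's implementation: the dictionary-based metadata store, the function and key names, and the use of an `inline_data` field to mark user data that resides on the metadata server are all assumptions made for the example.

```python
# Hypothetical sketch of the file-open path (steps 210-247 of FIG. 2).
# A file whose user data lives on the metadata server carries it in the
# "inline_data" field of its metadata record; larger files instead list
# their hosting storage servers. All names here are illustrative.

def open_user_file(metadata_store, file_name, switch_cache):
    """Return (metadata, user_data or None) for a file open request."""
    meta = metadata_store[file_name]        # transaction 1 (step 220)
    if meta.get("inline_data") is not None:
        # Small file: the same transaction yields the user data (235),
        # which the file switch caches for subsequent client requests.
        data = meta["inline_data"]
        switch_cache[file_name] = data
        return meta, data
    # Large file: the metadata only identifies the hosting storage
    # servers (step 240); the data is fetched from them on a later
    # file read/write request (steps 243, 247).
    return meta, None
```

A subsequent read of a small file is then served from the file switch's cache without visiting any storage server, which is the single-transaction behavior the embodiment aims for.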


In some embodiments, in response to at least some types of client requests the file switch processes the user data in accordance with the client request (250). In other embodiments, or in response to other types of client requests, the file switch delivers the user data to the requesting client computer through the network, waits for the client computer to apply operations to the user data, and then receives the processed user data from the client computer. The processing of the user data at the client or file switch, or both, may modify, replace or append data to the user data.


Depending on the size of the processed user data, it may or may not be desirable to store it in the metadata server. Therefore, the file switch needs to identify an appropriate location in the aggregated file system to store the processed user data.


In some embodiments, the file switch checks whether a predetermined condition is met (260). If the user data was previously retrieved from a metadata server and the size of the processed (i.e., new or modified) user data is still below the predefined threshold, the processed user data is sent back to the same metadata server, which overwrites the old copy with the processed user data (265). In other words, a user file that remains small after processing stays in the metadata server to facilitate efficient access.


Otherwise, the processed user data is stored in a storage server (270). Note that this scenario includes three sub-scenarios:

    • the user data is retrieved from a metadata server, and after the user data has been processed (e.g., by the file switch or client), the size of the processed user data is now above the predefined threshold;
    • the user data is retrieved from storage servers, and after the user data has been processed, the size of the processed user data is still above the predefined threshold; and
    • the user data is retrieved from storage servers, and after the user data has been processed the size of the processed user data is below the predefined threshold.


System operations in response to the first two sub-scenarios are straightforward. As long as the file size of a user file is above the predetermined threshold, a distributed storage scheme is employed to store the user data and the metadata separately. Note that in the first sub-scenario, the metadata server is responsible for updating the user file metadata with information about its newly designated hosting storage servers (at which the user data is now stored) so that a subsequent file switch operation will be able to determine where to retrieve the updated user data.


In contrast, the last sub-scenario requires special treatment. This sub-scenario occurs when the user data size of a user file that was above the threshold level drops below that level, e.g., due to operations performed at a client or requested by a client. In some embodiments, since the user file has demonstrated that it can grow beyond the predetermined threshold associated with small-size files, the file is not treated as a small-size file despite its current small size, and its user data remains in the storage servers.


In an alternative embodiment, the user data is stored in a metadata server whenever its current size is below the predetermined threshold and is stored in the storage servers otherwise. This scheme may improve the throughput of the file system. However, if the user data size frequently moves above and below the threshold level, the benefit of a higher throughput may be outweighed by the cost of managing the transitions between the two user data storage regimes (i.e., transitions between a metadata server and the storage servers). In some embodiments, a system administrator is given an option of choosing a storage scheme for a user file based on its client access characteristics, e.g., how often a client updates the user data and the typical magnitude of user data update.
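The store-location rule just described (steps 260 through 270) can be sketched as a small decision function. This is an illustrative reading of the two embodiments, not the patent's code: the `sticky` parameter models the embodiment in which user data that once lived on the storage servers stays there after shrinking, while `sticky=False` models the alternative embodiment that relocates strictly by current size; all names are assumptions.

```python
# Hypothetical sketch of the predefined rule for placing processed user
# data (steps 260-270). The 8 KB threshold and the "sticky" policy
# parameter are illustrative assumptions.

SMALL_FILE_THRESHOLD = 8 * 1024  # e.g., 8 KB

def choose_store_location(new_size, came_from_metadata_server, sticky=True):
    """Return "metadata" or "storage" for the processed user data."""
    if new_size < SMALL_FILE_THRESHOLD:
        if came_from_metadata_server:
            return "metadata"   # step 265: stays small, stays put
        # Third sub-scenario: the file shrank below the threshold.
        # Sticky policy keeps it on the storage servers; the
        # alternative embodiment moves it to the metadata server.
        return "storage" if sticky else "metadata"
    return "storage"            # step 270: at or above the threshold
```

Under the sticky policy a file avoids repeated migrations when its size oscillates around the threshold, at the cost of forgoing single-transaction access while it happens to be small.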


In each scenario, the metadata server updates the metadata associated with the user file to reference the user data at its current location (280). Information about the size of the user data may also be updated if the size of the file's user data has changed. Finally, the file switch sends a response to the client computer, notifying it that its requested operation has been completed (285).


In some embodiments, the predetermined threshold is the same for all the user files in the aggregated file system. In some embodiments, the threshold is configurable by a system administrator. In some other embodiments, different types of user files are associated with different thresholds. These thresholds may be determined in accordance with the client access characteristics associated with the different types of user files. For example, a user file (or user files of a particular type) which has a high client access rate (e.g., above a predefined access rate threshold) should be assigned a threshold value higher than that associated with a user file with a lower client access rate. As a result, the user data of a user file having a high client access rate is kept in a metadata server (along with its metadata) unless its size exceeds a second, higher predefined threshold, thereby improving the system's throughput.
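The per-type threshold selection can be sketched as follows. The specific numbers are made-up examples: the source states only that a user file with a high client access rate should receive a higher size threshold than a file with a lower access rate, so the 50-accesses-per-hour cutoff and both size values here are assumptions.

```python
# Hypothetical sketch of access-rate-dependent thresholds. The cutoff
# and both size thresholds are illustrative assumptions; only the
# ordering (hot files get the higher threshold) comes from the text.

HIGH_ACCESS_RATE = 50            # accesses per hour (assumed cutoff)
BASE_THRESHOLD = 8 * 1024        # default small-file threshold
HOT_FILE_THRESHOLD = 64 * 1024   # second, higher threshold for hot files

def threshold_for(access_rate_per_hour):
    """Pick the small-file threshold for a file's client access rate."""
    if access_rate_per_hour > HIGH_ACCESS_RATE:
        return HOT_FILE_THRESHOLD
    return BASE_THRESHOLD
```

A frequently accessed file thus remains co-located with its metadata up to a larger size, keeping its accesses to a single transaction.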


After updating the user data of a user file and sending it back to the aggregated file system, a client computer may require a completion response from the system in order to proceed to its next operation. In some embodiments, since different user files may have different data integrity requirements, the file system may respond at different points of a client access transaction in accordance with a predetermined write policy. For example, if the client computer submits a file write request that indicates, or is associated with, a high data integrity requirement, a write-through I/O completion response is signaled only after the user data and metadata have been completely stored in the file system. On the other hand, if the client computer submits a file write request that indicates, or is associated with, a lower data integrity requirement (which may be designated as the normal or default data integrity requirement in some embodiments), a write-back I/O completion response is signaled when the file switch receives the user data from the client computer. In the context of the process represented by FIG. 2, the latter option requires that the file switch notify the client computer of a completion of processing the user data before storing it in a metadata or storage server. In other words, step 285 of FIG. 2 would occur after step 250 but ahead of step 260.
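The two completion policies can be sketched by tracking the order of events in a write. This is an illustrative model, not the patent's implementation: the function, the event log, and the keyword parameter are assumptions; only the ordering (write-back acknowledges before persisting, write-through after) comes from the text.

```python
# Hypothetical sketch of write-back vs. write-through I/O completion.
# write-back: acknowledge the client as soon as the file switch holds
# the data (after step 250); write-through: acknowledge only after the
# data has been durably stored (after steps 260-280).

def handle_write(data, store, *, write_through):
    log = []
    log.append("received")           # step 250: data is at the file switch
    if not write_through:
        log.append("ack_client")     # write-back: respond immediately
    store.append(data)               # steps 260-280: persist data + metadata
    log.append("persisted")
    if write_through:
        log.append("ack_client")     # write-through: respond only now
    return log
```

The event log makes the trade-off visible: in the write-back ordering a failure between "ack_client" and "persisted" loses data the client believes is stored, which is exactly the risk discussed below for write-back completion.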


A risk associated with the write-back I/O completion is that the metadata and/or user data of a user file may be lost when a system failure occurs before the data is completely written into a metadata or storage server, resulting in a corrupted file system. In contrast, the risk associated with the write-through I/O completion is significantly lower because the data has already been completely stored in a server upon the invocation of the option.


System Architecture

In some embodiments, a file switch 160 of the aggregated file system is implemented using a computer system schematically shown in FIG. 3. The file switch 160 includes one or more processing units (CPUs) 300, memory 309, one or more communication interfaces 305 for coupling the file switch to one or more communication networks 350, and one or more system buses 301 that interconnect these components. In one embodiment, the one or more communication interfaces 305 include network interface circuits (NIC) 304 for coupling the file switch to a network switch 303, with each of the network interface circuits 304 coupled to a respective communication network 350.


The file switch 160 may optionally have a user interface 302, although in some embodiments the file switch 160 is managed using a workstation connected to the file switch 160 via communications interface 305. In alternate embodiments, much of the functionality of the file switch may be implemented in one or more application specific integrated circuits (ASICs), thereby either eliminating the need for the CPU, or reducing the role of the CPU in the handling of file access requests initiated by clients 120. The file switch 160 may be interconnected to a plurality of clients 120, storage servers 180, and one or more metadata servers 170, by the one or more communications interfaces 305.


The memory 309 may include high-speed random access memory and may also include non-volatile memory, such as one or more magnetic disk storage devices. The memory 309 may include mass storage that is remotely located from the CPU(s) 300. The memory 309 stores the following elements, or a subset or superset of such elements:

    • an operating system 310 that includes procedures for handling various basic system services and for performing hardware dependent tasks;
    • a network communication module (or set of instructions) 311 that is used for controlling communication between the system and clients 120, storage servers 180 and metadata servers 170 via the network or communication interface circuit 304 and one or more communication networks (represented by network switch 303), such as the Internet, other wide area networks, local area networks, metropolitan area networks, or combinations of two or more of these networks;
    • a file switch module (or set of instructions) 312, for implementing many of the main aspects of the aggregated file system, the file switch module 312 including a file read module 313 and a file write module 314;
    • file state information 330, including transaction state information 331, open file state information 332 and locking state information 333; and
    • cached information 340 for caching metadata information of one or more user files being processed by the file switch.


The file switch module 312, the state information 330 and the cached information 340 may include executable procedures, sub-modules, tables or other data structures. In other embodiments, additional or different modules and data structures may be used, and some of the modules and/or data structures listed above may not be used. More detailed descriptions of the file read module 313 and the file write module 314 have been provided above in connection with FIG. 2. For example, when handling a small-size user file, the file read module 313 and the file write module 314 need only access a metadata server to retrieve or store both the metadata and user data.


Illustratively, one of the metadata servers 170 includes information about a plurality of user files. In particular, the metadata server 170 includes metadata and user data location information for user file A. To retrieve user file A, the file switch performs two transactions, one with the metadata server in response to a file open request and the other with the one or more storage servers designated by the user data location information in response to a subsequent file read/write request. In contrast, both metadata and user data of user file B are stored in the metadata server 170. A file switch only needs to perform one transaction, with a single metadata server, to retrieve user file B in response to a file open request.


Even though the aforementioned embodiments are discussed in connection with a file switch in an aggregated file system, it will be apparent to one skilled in the art that the present invention is equally applicable to any metadata-based data storage architecture that requires a software implementation.


The foregoing description, for purposes of explanation, has been described with reference to specific embodiments. However, the illustrative discussions above are not intended to be exhaustive or to limit the invention to the precise forms disclosed. Many modifications and variations are possible in view of the above teachings. The embodiments were chosen and described in order to best explain the principles of the invention and its practical applications, to thereby enable others skilled in the art to best utilize the invention and various embodiments with various modifications as are suited to the particular use contemplated.

Claims
  • 1. A method of processing a user file by a file switch in an aggregated file system having at least one storage server and at least one metadata server including at least one user file, comprising: retrieving, by the file switch, metadata and user data associated with the user file from a metadata server; processing, by the file switch, the user data and the metadata associated with the user file in accordance with a request from a client; storing, by the file switch, the processed metadata associated with the user file in the metadata server; storing, by the file switch, the processed user data associated with the user file along with the stored processed metadata at a location within the metadata server designated by a predefined rule when a size of the processed user data is smaller than a predefined threshold; storing, by the file switch, the processed user data at a location within a storage server separate from the metadata server designated by the predefined rule when the size of the processed user data is greater than or equal to the predefined threshold; and updating, by the file switch, the metadata in the metadata server to reference the user data at the designated location.
  • 2. The method of claim 1, wherein the retrieving includes retrieving, by the file switch, the metadata and user data in response to a file open request associated with the user file.
  • 3. The method of claim 1, further comprising, prior to said storing and updating, notifying, by the file switch, the client of a completion of processing the user data in response to the client request.
  • 4. The method of claim 1, wherein the metadata includes a size of the user data.
  • 5. The method of claim 4, wherein updating the metadata includes updating, by the file switch, the size of the user data in accordance with the processed user data.
  • 6. The method of claim 1, wherein the predefined threshold associated with the user file is different from a predefined threshold associated with a different user file.
  • 7. The method of claim 1, wherein the user data of the user file remains in the separate storage server irrespective of whether the size of the user data is smaller than the predefined threshold or not.
  • 8. The method of claim 1 further comprising retrieving, by the file switch, the processed user data and the metadata in a single transaction in response to a file open request from the client when the processed user data and the metadata are both stored in the metadata server.
  • 9. A file switch for use in a computer network having one or more metadata servers including at least one user file, one or more storage servers, and one or more client computers, the file switch comprising: at least one interface for exchanging information with the one or more metadata servers, the one or more storage servers and the one or more client computers; one or more processors; and a memory coupled to the one or more processors, the one or more processors configured to execute programmed instructions stored in the memory, the programmed instructions comprising: retrieving metadata and user data associated with a user file from a metadata server; processing the user data and the metadata associated with the user file in accordance with a request from a client computer; storing the processed metadata associated with the user file in the metadata server; storing the processed user data associated with the user file along with the stored processed metadata at a location within the metadata server designated by a predefined rule when a size of the processed user data is smaller than a predefined threshold; storing the processed user data at a location within a storage server separate from the metadata server designated by the predefined rule when the size of the processed user data is greater than or equal to the predefined threshold; and updating the metadata in the metadata server to reference the user data at the designated location.
  • 10. The file switch of claim 9, wherein the retrieving includes retrieving the metadata and user data in response to a file open request associated with the user file.
  • 11. The file switch of claim 9, further comprising, prior to said storing and updating, notifying the client computer of a completion of processing the user file in response to the request.
  • 12. The file switch of claim 9, wherein the metadata includes a size of the user data.
  • 13. The file switch of claim 9, wherein the predefined threshold of the user file is different from a predefined threshold of a different user file.
  • 14. The file switch of claim 9, wherein the user data of the user file remains in the separate storage server irrespective of whether the size of the user data is smaller than the predefined threshold or not.
  • 15. The file switch of claim 9 further comprising retrieving the processed user data and the metadata in a single transaction in response to a file open request from the client computer when the processed user data and the metadata are both stored in the metadata server.
  • 16. A non-transitory computer readable medium having stored thereon instructions for processing a user file by a file switch in an aggregated file system having at least one storage server and at least one metadata server including at least one user file, comprising machine executable code which when executed by at least one processor, causes the processor to perform steps comprising: retrieving metadata and user data associated with the user file from a metadata server; processing the user data and the metadata associated with the user file in accordance with a request from a client; storing the processed metadata associated with the user file in the metadata server; storing the processed user data associated with the user file along with the stored processed metadata at a location within the metadata server designated by a predefined rule when a size of the processed user data is smaller than a predefined threshold; storing the processed user data at a location within a storage server separate from the metadata server designated by the predefined rule when the size of the processed user data is greater than or equal to the predefined threshold; and updating the metadata in the metadata server to reference the user data at the designated location.
  • 17. The medium as set forth in claim 16 wherein the retrieving includes retrieving, by the file switch, the metadata and user data in response to a file open request associated with the user file.
  • 18. The medium as set forth in claim 16 further comprising, prior to said storing and updating, notifying, by the file switch, the client of a completion of processing the user data in response to the client request.
  • 19. The medium as set forth in claim 16 wherein the metadata includes a size of the user data.
  • 20. The medium as set forth in claim 19 wherein updating the metadata includes updating, by the file switch, the size of the user data in accordance with the processed user data.
  • 21. The medium as set forth in claim 16 wherein the predefined threshold associated with the user file is different from a predefined threshold associated with a different user file.
  • 22. The medium as set forth in claim 16 wherein the user data of the user file remains in the separate storage server irrespective of whether the size of the user data is smaller than the predefined threshold or not.
6996841 Kadyk et al. Feb 2006 B2
7010553 Chen et al. Mar 2006 B2
7013379 Testardi Mar 2006 B1
7051112 Dawson May 2006 B2
7072917 Wong et al. Jul 2006 B2
7089286 Malik Aug 2006 B1
7111115 Peters et al. Sep 2006 B2
7113962 Kee et al. Sep 2006 B1
7120746 Campbell et al. Oct 2006 B2
7127556 Blumenau et al. Oct 2006 B2
7133967 Fujie et al. Nov 2006 B2
7146524 Patel et al. Dec 2006 B2
7152184 Maeda et al. Dec 2006 B2
7155466 Rodriguez et al. Dec 2006 B2
7165095 Sim Jan 2007 B2
7167821 Hardwick et al. Jan 2007 B2
7173929 Testardi Feb 2007 B1
7194579 Robinson et al. Mar 2007 B2
7234074 Cohn et al. Jun 2007 B2
7280536 Testardi Oct 2007 B2
7284150 Ma et al. Oct 2007 B2
7293097 Borr Nov 2007 B2
7293099 Kalajan Nov 2007 B1
7293133 Colgrove et al. Nov 2007 B1
7346664 Wong et al. Mar 2008 B2
7383288 Miloushev et al. Jun 2008 B2
7401220 Bolosky et al. Jul 2008 B2
7406484 Srinivasan et al. Jul 2008 B1
7415488 Muth et al. Aug 2008 B1
7415608 Bolosky et al. Aug 2008 B2
7440982 Lu et al. Oct 2008 B2
7475241 Patel et al. Jan 2009 B2
7477796 Sasaki et al. Jan 2009 B2
7509322 Miloushev et al. Mar 2009 B2
7512673 Miloushev et al. Mar 2009 B2
7519813 Cox et al. Apr 2009 B1
7562110 Miloushev et al. Jul 2009 B2
7571168 Bahar et al. Aug 2009 B2
7574433 Engel Aug 2009 B2
7599941 Bahar et al. Oct 2009 B2
7610307 Havewala et al. Oct 2009 B2
7624109 Testardi Nov 2009 B2
7639883 Gill Dec 2009 B2
7653699 Colgrove et al. Jan 2010 B1
7734603 McManis Jun 2010 B1
7788335 Miloushev et al. Aug 2010 B2
7822939 Veprinsky et al. Oct 2010 B1
7831639 Panchbudhe et al. Nov 2010 B1
7870154 Shitomi et al. Jan 2011 B2
7877511 Berger et al. Jan 2011 B1
7885970 Lacapra Feb 2011 B2
7913053 Newland Mar 2011 B1
7953701 Okitsu et al. May 2011 B2
7958347 Ferguson Jun 2011 B1
8005953 Miloushev et al. Aug 2011 B2
20010014891 Hoffert et al. Aug 2001 A1
20010047293 Waller et al. Nov 2001 A1
20010051955 Wong Dec 2001 A1
20020035537 Waller et al. Mar 2002 A1
20020059263 Shima et al. May 2002 A1
20020065810 Bradley May 2002 A1
20020073105 Noguchi et al. Jun 2002 A1
20020083118 Sim Jun 2002 A1
20020120763 Miloushev et al. Aug 2002 A1
20020133330 Loisey et al. Sep 2002 A1
20020133491 Sim et al. Sep 2002 A1
20020138502 Gupta Sep 2002 A1
20020147630 Rose et al. Oct 2002 A1
20020150253 Brezak et al. Oct 2002 A1
20020161911 Pinckney et al. Oct 2002 A1
20020188667 Kirnos Dec 2002 A1
20030009429 Jameson Jan 2003 A1
20030028514 Lord et al. Feb 2003 A1
20030033308 Patel et al. Feb 2003 A1
20030033535 Fisher et al. Feb 2003 A1
20030061240 McCann et al. Mar 2003 A1
20030115218 Bobbitt et al. Jun 2003 A1
20030115439 Mahalingam et al. Jun 2003 A1
20030135514 Patel et al. Jul 2003 A1
20030149781 Yared et al. Aug 2003 A1
20030159072 Bellinger et al. Aug 2003 A1
20030171978 Jenkins et al. Sep 2003 A1
20030177388 Botz et al. Sep 2003 A1
20030204635 Ko et al. Oct 2003 A1
20040003266 Moshir et al. Jan 2004 A1
20040006575 Visharam et al. Jan 2004 A1
20040010654 Yasuda et al. Jan 2004 A1
20040025013 Parker et al. Feb 2004 A1
20040028043 Maveli et al. Feb 2004 A1
20040028063 Roy et al. Feb 2004 A1
20040030857 Krakirian et al. Feb 2004 A1
20040054777 Ackaouy et al. Mar 2004 A1
20040093474 Lin et al. May 2004 A1
20040098383 Tabellion et al. May 2004 A1
20040133573 Miloushev et al. Jul 2004 A1
20040133577 Miloushev et al. Jul 2004 A1
20040133606 Miloushev et al. Jul 2004 A1
20040133607 Miloushev et al. Jul 2004 A1
20040133650 Miloushev et al. Jul 2004 A1
20040139355 Axel et al. Jul 2004 A1
20040148380 Meyer et al. Jul 2004 A1
20040153479 Mikesell et al. Aug 2004 A1
20040181605 Nakatani et al. Sep 2004 A1
20040199547 Winter et al. Oct 2004 A1
20040236798 Srinivasan et al. Nov 2004 A1
20040267830 Wong et al. Dec 2004 A1
20050021615 Arnott et al. Jan 2005 A1
20050050107 Mane et al. Mar 2005 A1
20050091214 Probert et al. Apr 2005 A1
20050108575 Yung May 2005 A1
20050114291 Becker-Szendy et al. May 2005 A1
20050187866 Lee Aug 2005 A1
20050246393 Coates et al. Nov 2005 A1
20050289109 Arrouye et al. Dec 2005 A1
20050289111 Tribble et al. Dec 2005 A1
20060010502 Mimatsu et al. Jan 2006 A1
20060075475 Boulos et al. Apr 2006 A1
20060080353 Miloushev et al. Apr 2006 A1
20060106882 Douceur et al. May 2006 A1
20060112151 Manley et al. May 2006 A1
20060123062 Bobbitt et al. Jun 2006 A1
20060161518 Lacapra Jul 2006 A1
20060167838 Lacapra Jul 2006 A1
20060179261 Rajan Aug 2006 A1
20060184589 Lees et al. Aug 2006 A1
20060190496 Tsunoda Aug 2006 A1
20060200470 Lacapra et al. Sep 2006 A1
20060212746 Amegadzie et al. Sep 2006 A1
20060224687 Popkin et al. Oct 2006 A1
20060230265 Krishna Oct 2006 A1
20060242179 Chen et al. Oct 2006 A1
20060259949 Schaefer et al. Nov 2006 A1
20060271598 Wong et al. Nov 2006 A1
20060277225 Mark et al. Dec 2006 A1
20060282461 Marinescu Dec 2006 A1
20060282471 Mark et al. Dec 2006 A1
20070022121 Bahar et al. Jan 2007 A1
20070024919 Wong et al. Feb 2007 A1
20070027929 Whelan Feb 2007 A1
20070027935 Haselton et al. Feb 2007 A1
20070028068 Golding et al. Feb 2007 A1
20070088702 Fridella et al. Apr 2007 A1
20070098284 Sasaki et al. May 2007 A1
20070136308 Tsirigotis et al. Jun 2007 A1
20070208748 Li Sep 2007 A1
20070209075 Coffman Sep 2007 A1
20070226331 Srinivasan et al. Sep 2007 A1
20080046432 Anderson et al. Feb 2008 A1
20080070575 Claussen et al. Mar 2008 A1
20080104443 Akutsu et al. May 2008 A1
20080208933 Lyon Aug 2008 A1
20080209073 Tang Aug 2008 A1
20080222223 Srinivasan et al. Sep 2008 A1
20080243769 Arbour et al. Oct 2008 A1
20080282047 Arakawa et al. Nov 2008 A1
20090007162 Sheehan Jan 2009 A1
20090037975 Ishikawa et al. Feb 2009 A1
20090041230 Williams Feb 2009 A1
20090055607 Schack et al. Feb 2009 A1
20090077097 Lacapra et al. Mar 2009 A1
20090089344 Brown et al. Apr 2009 A1
20090094252 Wong et al. Apr 2009 A1
20090106255 Lacapra et al. Apr 2009 A1
20090106263 Khalid et al. Apr 2009 A1
20090132616 Winter et al. May 2009 A1
20090204649 Wong et al. Aug 2009 A1
20090204650 Wong et al. Aug 2009 A1
20090204705 Marinov et al. Aug 2009 A1
20090210431 Marinkovic et al. Aug 2009 A1
20090254592 Marinov et al. Oct 2009 A1
20100077294 Watson Mar 2010 A1
20100211547 Kamei et al. Aug 2010 A1
20110087696 Lacapra Apr 2011 A1
Foreign Referenced Citations (13)
Number Date Country
2003300350 Jul 2004 AU
2512312 Jul 2004 CA
0 738 970 Oct 1996 EP
63010250 Jan 1988 JP
06-332782 Dec 1994 JP
08-328760 Dec 1996 JP
08-339355 Dec 1996 JP
9016510 Jan 1997 JP
11282741 Oct 1999 JP
WO 02056181 Jul 2002 WO
WO 2004061605 Jul 2004 WO
WO 2008130983 Oct 2008 WO
WO 2008147973 Dec 2008 WO
Related Publications (1)
Number Date Country
20060200470 A1 Sep 2006 US