1. Field of the Invention
This invention relates generally to storage networks and, more specifically, to a network device on a storage network that coordinates the segmentation of a file larger than the native processing capability of a commodity file server into several data chunks within that capability, and the reconstruction of the file from those data chunks.
2. Description of the Related Art
In a computer network, NAS (Network Attached Storage) file servers connected directly to the network provide an inexpensive and easily configurable solution for a storage network. These NAS file servers are self-sufficient because they contain file systems that allow interoperability with clients running any operating system and communication using open protocols. For example, a Unix-based client can use the NFS (Network File System) protocol by Sun Microsystems, Inc. of Santa Clara, Calif. and a Windows-based client can use CIFS (Common Internet File System) by Microsoft Corp. of Redmond, Wash. to access the same files on a NAS file server. Thus, NAS file servers provide true universal file access.
By contrast, more expensive and powerful SAN (Storage Area Network) file servers use resources connected by Fibre Channel on a back-end, or a dedicated network. A SAN file system is part of the operating system or an application running on the client. But heterogeneous client operating systems may require additional copies of each file to be stored on the storage network to ensure compatibility on the SAN file server. Additionally, communication between clients and file servers on a SAN uses proprietary protocols, so the clients and file servers are typically provided by a common vendor. As a result, NAS file servers are preferred when price and ease of use are major considerations. However, NAS storage networks also have drawbacks relative to SAN storage networks.
One drawback with NAS file servers is that there is no centralized control. Accordingly, each client must separately maintain a communication channel with each of the NAS file servers. When NAS file servers are either added to or removed from the storage network, each client must mount or unmount directories for the associated storage resources as appropriate. This is particularly inefficient when there are changes in hardware, but not in the particular files available on the network, such as when a failing NAS file server is swapped out for an identically configured back-up NAS file server.
Another drawback of NAS file servers is that, as commodity devices, they are typically outfitted with 32-bit legacy software and/or hardware that is not capable of processing files greater than 2-GB. For example, 32-bit processors and the NFS version 2 protocol are limited to 32-bit data capability. Problematically, many data files, such as video files and back-up files, are larger than 2-GB. Other files start out less than 2-GB, but subsequently grow beyond this size. Additionally, a heterogeneous NAS storage network can contain both enterprise file servers that support 64-bit data and commodity file servers that support only 32-bit data, in which case large data files cannot be replicated or migrated between the two.
Therefore, what is needed is a robust network device to provide transparency for clients of decentralized file servers such as NAS file servers. Furthermore, the network device should transparently coordinate large file storage on a commodity file server and process access requests to the large file.
The present invention meets these needs by providing large file support to a file server in a decentralized storage network. A NAS (Network Attached Storage) switch in the data path of a client and a NAS file server on the storage network coordinates storage and reconstruction of files larger than the NAS file server can natively process as several data chunks of a size that the NAS file server can process. Advantageously, the client transparently stores large files, such as files greater than 2-GB, on the NAS file server, such as a NAS file server that is natively limited to processing 32-bit data.
An embodiment of a system configured according to the present invention comprises the NAS switch in communication with the client on a front-end of the storage network, and both an enterprise file server and a commodity file server on a back-end. The NAS switch associates NAS file handles (e.g., CIFS file handles or NFS file handles), indicative of an object location on the storage network, with switch file handles that are independent of the object location. The NAS switch then exports the switch file handles to the client. In response to subsequent object access requests from the client, the NAS switch substitutes switch file handles with appropriate NAS file handles for submission to the appropriate NAS file server location.
A segmentation module in the NAS switch stores large files as separate data chunks in the commodity file server. To do so, the segmentation module stores a directory file handle, which points to a directory containing the data chunks, in place of the large file. The segmentation module determines data chunk names by performing calculations that use a file offset and a data chunk size. In one embodiment, the segmentation module stores the directory file handle in a large file handle table that correlates directory file handles for large files with switch file handles. The segmentation module can also store the directory file handle in a file handle migration table that correlates directory file handles and migrated file handles with switch file handles.
A reconstruction module processes client requests concerning large files by issuing requests to specific data chunks. The reconstruction module performs calculations to determine which data chunk to process. For example, in a read operation, the reconstruction module calculates a chunk number by dividing a file offset by a preset data chunk size.
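By way of illustration, the following minimal Python sketch shows this calculation under an assumed 1-GB preset data chunk size; the names are hypothetical and not part of the claimed design:

```python
# Minimal sketch of the chunk-number calculation; the 1-GB preset data
# chunk size is an assumption for illustration.
PRESET_CHUNK_SIZE = 1 << 30  # 1-GB

def chunk_number(file_offset: int) -> int:
    """Return the number of the data chunk holding the byte at file_offset."""
    return file_offset // PRESET_CHUNK_SIZE

# A read starting at offset 2.5-GB falls in the third data chunk (number 2).
assert chunk_number((5 << 30) // 2) == 2
```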
The present invention provides large file support to a file server in a decentralized storage network. A NAS switch in the data path of a client and a NAS file server on the storage network coordinates storage of files larger than the NAS file server can natively process as several data chunks of a size that the NAS file server can process. Some embodiments of a system are described with respect to the accompanying figures.
The accompanying description is for the purpose of providing a thorough explanation with numerous specific details. Of course, the field of storage networking is such that many different variations of the illustrated and described features of the invention are possible. Those skilled in the art will thus undoubtedly appreciate that the invention can be practiced without some specific details described below, and indeed will see that many other variations and embodiments of the invention can be practiced while still satisfying its teachings and spirit. For example, although the present invention is described with reference to storage networks operating under the NAS protocol, it can similarly be embodied in future protocols for decentralized storage networks other than NAS, or in mixed protocol networks. Accordingly, the present invention should not be understood as being limited to the specific implementations described below, but only by the claims that follow.
The processes, features, or functions of the present invention can be implemented by program instructions that execute in an appropriate computing device. Example computing devices include enterprise servers, application servers, workstations, personal computers, network computers, network appliances, personal digital assistants, game consoles, televisions, set-top boxes, premises automation equipment, point-of-sale terminals, automobiles, and personal communications devices. The program instructions can be distributed on a computer readable medium, storage volume, or the Internet. Program instructions can be in any appropriate form, such as source code, object code, or scripts.
The NAS switch 110 provides continuous transparency to the client 140 with respect to object management. Specifically, the NAS switch 110 can off-load tasks related to physical configurations, object management, object migration, object replication, efficient storage and/or other services on the storage network 175. Preferably, the NAS switch 110 emulates file server processes to the client 140 and emulates client processes to the file servers 120, 130. Accordingly, the client 140 is unaware of the NAS switch 110 since the NAS switch 110 is able to redirect NAS requests intended for a large file object on the commodity file server 130 to an appropriate directory on the commodity file server 130 containing data chunks. Thus, the client 140 submits object requests, such as file writes and directory reads, directly to the NAS switch 110. Likewise, the file servers 120, 130 are unaware of the NAS switch 110 since the NAS switch 110 is able to resubmit requests, contained in server file handles, as if they originated from the client 140. To do so, the NAS switch 110 can use mapping, translating, bridging, packet forwarding, other network interface functionality, and other control processes to perform file handle switching, thereby relieving the client 140 of the need to track changes in a file's physical location.
In one embodiment, the NAS switch 110 comprises a client module 112 and a file server module 114 to facilitate communications and file handle switching. The client module 112 receives exported file system directories, containing NAS file handles, from the file servers 120, 130. To create compatibility between the client 140 and the NAS switch 110, the client module 112 maps the file system directories to internal switch file systems which it sends to the client 140. To request an object, the client 140 traverses an exported switch file system and selects a switch file handle which it sends to the NAS switch 110 along with a requested operation.
The file server module 114 coordinates segmentation of large files. The file server module 114 initiates tasks that are passively performed by the commodity file server 130. The file server module 114 segments large files into smaller data chunks and organizes their storage such that the large files can later be retrieved from just the switch file handle received from the client 140. In one embodiment, the file server module 114 migrates and/or replicates 64-bit objects 125 on the enterprise file server 120 to 32-bit objects 135 on the commodity file server 130. During retrieval, the file server module 114 redirects a large file request by the client 140 into several requests to individual data chunks on the commodity file server 130. Thus, large files remain transparently available to the client 140 even when the commodity file server 130 does not natively support software and/or hardware data processing above 32-bits.
In one embodiment, the file server module 114 also tracks reconfigurations resulting from migration, replication and other processes (e.g., adding or removing file server capacity) with a nested system of tables, or information otherwise linked to the switch file systems. The switch file handles are static as they are persistent through replications, but the associated NAS file handles can be dynamic as they are selected depending upon which particular copy is being accessed. To track various copies of an object, the file server module 114 maintains a file handle replication table, corresponding to each file system, that maps NAS file handles of replicated objects to locations on the storage network 175 and to status information about the replication locations. Further embodiments of the file server module 114 are described with respect to the accompanying figures.
In general, NAS file handles uniquely identify objects on the file servers 120, 130, such as a directory or file, as long as that object exists. NAS file handles are file server specific, and are valid only to the file servers 120, 130 that issued the file handles. The process of obtaining a NAS file handle from a file name is called a look-up. A NAS file handle, which identifies a directory or file object by location, may be formatted according to protocols such as NFS or CIFS as discussed in further detail below, e.g., with reference to Tables 1A and 1B. By contrast, a switch file handle identifies a directory or file object independent of location, making it persistent through file replications, migrations, and other data transfers. The switch file handle can be a modified NAS file handle that refers to an internal system within the NAS switch 110 rather than the commodity file server 130. A stored file handle is stored in place of a migrated or to-be-replicated object as a pointer to an alternate location.
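This indirection can be pictured as a location-independent key mapped to whichever location-specific handle is currently valid. The following is a minimal sketch only; the names and dictionary layout are illustrative assumptions, not the patent's design:

```python
# Sketch of location-independent file handle indirection. The switch file
# handle stays static; only its mapping to a NAS file handle changes.
nas_handles: dict[bytes, bytes] = {}  # switch file handle -> NAS file handle

def export_handle(switch_fh: bytes, nas_fh: bytes) -> None:
    """Associate a switch file handle with the NAS file handle it stands for."""
    nas_handles[switch_fh] = nas_fh

def on_relocation(switch_fh: bytes, new_nas_fh: bytes) -> None:
    """After a migration or replication, update the mapping in place."""
    nas_handles[switch_fh] = new_nas_fh

def resolve(switch_fh: bytes) -> bytes:
    """Substitute the switch file handle with the current NAS file handle."""
    return nas_handles[switch_fh]
```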
Object access requests handled by the NAS switch 110 include, for example, directory and/or file reads, writes, creation, deletion, moving, and copying. As used herein, various terms are used synonymously to refer to a location of an object prior to replication or migration (e.g., “primary”; “source”; “original”; and “first”) and various terms are used to refer to a location of the same object after replication or migration (e.g., “replica”; “destination”; “substitute”; and “second”). Further embodiments of the NAS switch 110 and methods operating therein are described below.
The client 140 accesses resources on the file servers 120, 130 by using a switch file handle submitted to the NAS switch 110. To access an object, the client 140 first mounts an exported file system preferably containing switch file handles. The client 140 looks-up an object to obtain its file handle and submits an associated request. From the perspective of the client 140, transactions are carried out by a file server 120, 130 having object locations that do not change. Thus, the client 140 interacts with the NAS switch 110 before and after a large file segmentation in the same manner. A user of the client 140 can submit operations through a command line interface, a windows environment, a software application, or otherwise. In one embodiment, the client 140 provides access to a storage network 175 other than a NAS storage network.
The enterprise file server 120 further comprises 64-bit objects 125 including data such as enterprise data, records, database information, applications, and the like. The enterprise file server 120 comprises the software and hardware resources necessary for 64-bit data support of files greater than 2-GB. For example, the enterprise file server 120 can comprise a 64-bit processor and a NAS file server version suitable for 64-bit support. In one embodiment, the NAS switch 110 stores large files in smaller data chunks to improve data manageability even though the large files are within the processing capability of the enterprise file server 120.
The commodity file server 130 further comprises 32-bit objects 135 including 32-bit data. The commodity file server 130 is limited to native 32-bit data support by one or more software or hardware resources such as a 32-bit processor or NFS version 2. Both the enterprise and commodity file servers 120, 130 also preferably comprise a file system compatible with NAS protocols. In one embodiment, the file servers 120, 130 comprise decentralized file servers, or file servers that otherwise do not natively support coordinated services.
The network 195 facilitates data transfers between connected hosts (e.g., 110, 140). The connections to the network 195 may be wired and/or wireless, packet and/or circuit switched, and use network protocols such as TCP/IP (Transmission Control Protocol/Internet Protocol), IEEE (Institute of Electrical and Electronics Engineers) 802.11, IEEE 802.3 (i.e., Ethernet), ATM (Asynchronous Transfer Mode), or the like. The network 195 comprises, for example, a LAN (Local Area Network), a WAN (Wide Area Network), the Internet, and the like. In one embodiment, the NAS switch 110 acts as a gateway between the client 140, connected to the Internet, and the enterprise file server 120 and the commodity file server 130, connected to a LAN. The sub-network 196 is preferably a local area network providing optimal response time to the NAS switch 110. In one embodiment, the sub-network 196 is integrated into the network 195.
The file server interface 210 receives a switch file handle with a request from the client 140 which it uses to form a NAS file handle with a request to the commodity file server 130. If the request involves a large file, the file server interface 210 receives a directory file handle and one or more data chunk file handles from the segmentation module 220. If the request does not involve a large file handle, the file server interface 210 can use an original NAS file handle to access the data.
The segmentation module 220 receives an input request from the file server interface 210 from which it can form requests to the directory containing related data chunks. The segmentation module 220 separates large files into several data chunks. The segmentation module 220 tracks a current file offset and uses a preset data chunk size to coordinate between the incoming large file and outgoing data chunks. For example, because dividing the file offset by the preset data chunk size yields a current data chunk number, the segmentation module 220 can calculate the next data chunk to create. The segmentation module 220 updates, for example, a data chunk number, a file offset, and the like, while segmenting the large file. Additionally, the segmentation module 220 stores file handles based on an emulated storage location of the large file and a storage location of the directory in the commodity file server 130 containing the data chunks. In one embodiment, these file handles are stored in an entry of a file handle migration table that also stores migrated file handles. Thus, the large file service can be easily integrated into the NAS switch 110 if it already supports file migration services. In another embodiment, these file handles are stored in an entry of a dedicated large file table.
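The state tracked during segmentation can be sketched as follows; the class and table names are hypothetical, and the 1-GB preset mirrors the example given later in the text:

```python
# Sketch of segmentation state tracking; names and layout are assumptions.
PRESET_CHUNK_SIZE = 1 << 30  # assumed 1-GB preset data chunk size

class SegmentationState:
    """Tracks the current file offset and data chunk while segmenting."""
    def __init__(self) -> None:
        self.file_offset = 0
        self.chunk_number = 0

    def advance(self, bytes_written: int) -> None:
        self.file_offset += bytes_written
        # Dividing the file offset by the preset data chunk size yields
        # the data chunk to which the next byte belongs.
        self.chunk_number = self.file_offset // PRESET_CHUNK_SIZE

# The directory file handle is stored in place of the large file, in an
# entry keyed by the switch file handle (here, alongside migrated handles).
file_handle_migration_table: dict[bytes, bytes] = {}
```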
In one embodiment, the segmentation module 220 also assists the file server module 114 in migrating or replicating 64-bit objects 125 on the enterprise file server 120 to 32-bit objects 135 on the commodity file server 130. In file migration, the file server module 114 maintains a file handle migration table in which it correlates source (or original) NAS file handles on the enterprise file server 120 with destination (or new) file locations on the commodity file server 130. The segmentation module 220 breaks up the large file into data chunks stored in a directory. Thus, the destination file handle refers to the directory containing related data chunks. In file replication, the file server module 114 maintains a file handle replication table in which it correlates primary (or original) file handles on the enterprise file server 120 with replica (or new) file handles on the commodity file server 130. Thus, the replica file handle refers to the directory containing related data chunks. Given the above disclosure, note that the segmentation module 220 can assist additional services of the NAS switch 110.
The reconstruction module 230 receives an output request from the client 140 through the file server interface 210, from which it forms one or more output requests for data chunks on the commodity file server 130. Example requests include read, write, truncate, and the like. In one embodiment, the reconstruction module 230 can recognize large file accesses by using the switch file handle received from the client 140 to perform a look-up in the large file table for an associated directory file handle. In another embodiment, the reconstruction module 230 looks up the switch file handle in the file handle migration table for a destination file handle. If there is a hit in the file handle migration table, the reconstruction module 230 next checks the size of the object referred to by the destination file handle. If the object matches the size of a stored directory file handle, the switch file handle very likely corresponds to a large file. The reconstruction module 230 can take additional steps to verify that the directory file handle points to a large file directory by confirming that the directory contains data chunks.
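A minimal sketch of this detection logic follows; the fixed stored-handle size and the helper names are assumptions, since the text only requires that stored file handles share a predetermined size:

```python
# Sketch of large-file detection via the file handle migration table.
STORED_HANDLE_SIZE = 256  # hypothetical predetermined stored-handle size

def is_large_file(switch_fh, migration_table, object_size_of) -> bool:
    dest_fh = migration_table.get(switch_fh)
    if dest_fh is None:
        return False              # no hit: not migrated and not a large file
    if object_size_of(dest_fh) != STORED_HANDLE_SIZE:
        return False              # wrong size: an ordinary migrated file
    # Very likely a directory file handle; confirming that the directory
    # actually contains data chunks provides additional verification.
    return True
```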
Before forming requests to the commodity file server 130, the reconstruction module 230 determines which data chunk to access. In one embodiment, the reconstruction module 230 uses the same or reverse calculations used by the segmentation module 220 so that given the same inputs (e.g., file name, file offset, preset data chunk size), the reconstruction module can calculate which data chunk to access. For example, the reconstruction module 230 finds a data chunk to read or write to by dividing a file offset by a preset data chunk size. The reconstruction module 230 updates, for example, a data chunk number, a file offset, a number of bytes read, a number of bytes remaining, and the like, while processing a large file request. Additional embodiments of the file server module 114 and methods operating therein are described below.
The segmentation module 220 segments 320 large files into data chunks as described below with respect to the accompanying figures.
The reconstruction module 230 processes 330 requests to access large files as described below with respect to the accompanying figures.
The client module 112 generates 420 switch file handles independent of object locations in the commodity file server 130. The client module 112 organizes exported file systems from the file server 130 by replacing file system or tree identifiers with a switch file system number as shown below in Tables 2A and 2B. The client module 112 exports 430 the switch file system to the client 140 to use in requesting operations. In the reverse process, the NAS switch 110 receives the NAS request and looks up the switch file handle to obtain an appropriate NAS file handle. Accordingly, the reconstruction module 230 returns a directory file handle if the switch file handle refers to a large file. The file server interface 210 generates a NAS file handle from the switch file handle based on an object location. Examples of the contents of NFS and CIFS file handles are shown in Tables 1A and 1B, while examples of switch file handles, or modified NFS and CIFS file handles, are shown in Tables 2A and 2B:
As discussed below, after objects have been segmented, the NAS switch 110 can access objects at new locations using an updated NAS file handle.
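The handle rewriting described above can be sketched as follows; the two-field tuple layout is an illustrative assumption, as real NFS and CIFS handles carry the fields shown in Tables 1A through 2B:

```python
# Sketch of switch file handle generation: the file-server-specific file
# system identifier is swapped for a location-independent switch file
# system number; the remaining handle fields pass through unchanged.
def to_switch_handle(nas_fh: tuple, switch_fs_number: int) -> tuple:
    _fs_id, object_id = nas_fh
    return (switch_fs_number, object_id)

def to_nas_handle(switch_fh: tuple, fs_table: dict) -> tuple:
    """Reverse process: restore the current physical file system identifier."""
    switch_fs_number, object_id = switch_fh
    return (fs_table[switch_fs_number], object_id)
```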
The segmentation module 220 creates 520 data chunks in the directory based on a chunk number and a file offset. Preferably, the data chunks have a common preset size such as 1-GB. According to the 1-GB example, the first data chunk contains the data at an offset 0, the second data chunk contains the data starting at an offset of 1-GB, the third data chunk contains the data starting at an offset of 2-GB, and so on. In one embodiment, the data chunk number determines a data chunk name. Thus, the file name for the first data chunk is 0, the name for the second data chunk is 1, the name for the third data chunk is 2, and so on. Of course, there can be variations of the data chunk naming convention. After creating a first file, if there is more data 530, the segmentation module 220 increments the data chunk number 532, increments the data chunk offset 534, and continues creating 520 data chunks until all the data has been stored 530.
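Under the 1-GB example and naming convention above, chunk creation can be sketched as follows (an in-memory simplification; a real implementation would stream data rather than hold it in a buffer):

```python
# Sketch of data chunk creation: chunk "0" holds bytes [0, 1-GB),
# chunk "1" holds [1-GB, 2-GB), and so on, in a single directory.
import os

CHUNK_SIZE = 1 << 30  # preset 1-GB data chunk size

def create_chunks(directory: str, data: bytes) -> None:
    chunk_number, offset = 0, 0
    while offset < len(data):                       # more data to store?
        name = os.path.join(directory, str(chunk_number))
        with open(name, "wb") as chunk:
            chunk.write(data[offset:offset + CHUNK_SIZE])
        chunk_number += 1                           # next data chunk number
        offset += CHUNK_SIZE                        # next data chunk offset
```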
In one embodiment, a data chunk can be missing. The segmentation module 220 recognizes missing data chunks as holey files. A holey file comprises holes, or ranges of the file for which no data chunk exists. When the segmentation module 220 identifies a missing data chunk, it substitutes zeroes for the corresponding range to fill the hole during output.
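A sketch of this zero-fill behavior, reusing the chunk naming from the previous sketch:

```python
# Sketch of hole filling: a missing data chunk reads back as zeroes.
import os

def read_chunk(directory: str, chunk_number: int, length: int) -> bytes:
    path = os.path.join(directory, str(chunk_number))
    try:
        with open(path, "rb") as chunk:
            data = chunk.read(length)
        # Pad a short range with zeroes so holes within a chunk also fill.
        return data.ljust(length, b"\x00")
    except FileNotFoundError:
        return b"\x00" * length  # substitute zeroes for the missing chunk
```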
In one embodiment, the stored file handle is similar to a non-stored file handle. Preferably, the stored file handle is a predetermined size so that the NAS switch 110 can determine whether a file contains a stored file handle merely by inspecting the file's size. An exemplary stored file format is shown in Table 3:
Note, however, that there can be variations of the stored file format.
If a matching entry is found, the reconstruction module 230 next determines whether the destination file handle refers to a directory file handle 620. Since stored file handles preferably have a common file size, the reconstruction module 230 can quickly make this determination. If the destination file handle is not a stored file handle, the client request does not concern a large file.
If the request concerns a large file, the reconstruction module 230 determines 630 the data chunk number to process. Generally, the reconstruction module 230 determines 630 the chunk number by dividing a file offset by the preset data chunk size. The reconstruction module 230 determines 640 a chunk offset by subtracting a product of the data chunk number and the preset size of a data chunk from the file offset. The reconstruction module 230 processes 650 the data chunk with requests such as read, write, and the like. The process continues through each data chunk 660.
For example, to read a large file, the reconstruction module 230 receives the switch file handle, an input offset indicating where to start the read operation, and an input number of bytes to read. The reconstruction module 230 sets a number of bytes read to zero, sets a number of remaining bytes to be read to the input number of bytes to be read, and sets the file offset to the input offset. From this information, the reconstruction module 230 computes the data chunk number, the data chunk offset, and the size of data in the current data chunk (if different from the preset data chunk size). As discussed, the chunk number is the input offset divided by the preset size of the data chunk. The chunk offset is the product of the chunk number and the preset size of the data chunk subtracted from the input offset. Some data chunks, such as the last data chunk, may be smaller than the preset data chunk size. If the number of remaining bytes to be read is less than or equal to the preset chunk size, the chunk size is set to the remaining bytes to be read. In one embodiment, the chunk sizes are kept consistent by padding the end of a data chunk that is not full with data. The reconstruction module 230 continues the read process by incrementing the number of bytes read by the data chunk size, decrementing the remaining bytes to be read by the data chunk size, incrementing the current chunk number by 1, and incrementing the file offset by the data chunk size. The process of writing a file is similar.
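The read loop can be sketched end to end as follows, reusing the hypothetical read_chunk() helper from the hole-filling sketch above; the 1-GB preset and the names remain assumptions:

```python
# Sketch of the large file read loop described above.
PRESET = 1 << 30  # assumed 1-GB preset data chunk size

def read_large_file(directory: str, input_offset: int, input_bytes: int) -> bytes:
    bytes_read, remaining, offset = 0, input_bytes, input_offset
    pieces = []
    while remaining > 0:
        chunk_number = offset // PRESET                # which data chunk
        chunk_offset = offset - chunk_number * PRESET  # where within it
        # Read no more than what remains in this chunk or in the request.
        size = min(remaining, PRESET - chunk_offset)
        data = read_chunk(directory, chunk_number, chunk_offset + size)
        pieces.append(data[chunk_offset:chunk_offset + size])
        bytes_read += size   # increment the number of bytes read
        remaining -= size    # decrement the remaining bytes to be read
        offset += size       # advance the file offset
    return b"".join(pieces)
```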
To delete a large file, the reconstruction module 230 removes the destination file handle, the data chunks, the directory, and the file handle migration table entry.
To move a large file, the reconstruction module 230 can update the directory file handle.
To truncate a large file, the reconstruction module 230 computes a data chunk number and a data chunk length. The reconstruction module 230 sets the data chunk number to the new length divided by the preset size of data chunks. The chunk length is the product of the data chunk number and the preset data chunk size subtracted from the new length. The reconstruction module 230 creates a new file to store the data chunk if the data chunk number does not exist. The data chunk file length is the preset data chunk size, or smaller for the last data chunk.
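A worked sketch of the truncate calculation, under the same assumed 1-GB preset:

```python
# Sketch of the truncate calculation: which chunk the new length falls in,
# and how many bytes of that chunk survive.
PRESET = 1 << 30  # assumed 1-GB preset data chunk size

def truncate_point(new_length: int) -> tuple[int, int]:
    chunk_number = new_length // PRESET                 # last surviving chunk
    chunk_length = new_length - chunk_number * PRESET   # its truncated length
    return chunk_number, chunk_length

# Truncating to 2.5-GB keeps chunks 0 and 1 whole, leaves chunk 2 holding
# 0.5-GB, and removes any chunks beyond number 2.
assert truncate_point((5 << 30) // 2) == (2, 1 << 29)
```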
To set a new large file size, the reconstruction module 230 calculates a number of necessary data chunks. If the last data chunk number is smaller than the calculation, the reconstruction module 230 creates the necessary data chunks in the directory.
To determine a large file size attribute, the reconstruction module 230 identifies the last data chunk and multiplies the number of the last data chunk by the preset data chunk size, accounting for the full data chunks that precede it. The actual size of the last data chunk, which may be smaller than the preset size, is then added to the total.
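With the zero-based chunk naming from above, this is a short calculation; the sketch assumes the directory contains only the numbered data chunk files:

```python
# Sketch of the file size attribute: all full chunks before the last one,
# plus the actual size of the last chunk.
import os

PRESET = 1 << 30  # assumed 1-GB preset data chunk size

def large_file_size(directory: str) -> int:
    last = max(int(name) for name in os.listdir(directory))
    last_size = os.path.getsize(os.path.join(directory, str(last)))
    return last * PRESET + last_size
```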
Other data operations not described herein will be apparent given the above disclosure.
This application claims priority under 35 U.S.C. §119(e) to: U.S. Provisional Patent Application No. 60/478,154, filed on Apr. 24, 2003, entitled “Method and Apparatus to Provide Large File Support to a Network File Server,” by Thomas K. Wong et al.; U.S. Provisional Patent Application No. 60/465,579, filed on Apr. 24, 2003, entitled “Method and Apparatus for Transparent File Migration Using the Technique of Namespace Replication,” by Thomas K. Wong et al.; and U.S. Provisional Patent Application No. 60/465,578, filed on Apr. 24, 2003, entitled “Method and Apparatus for Transparent File Replication Using the Technique of Namespace Replication,” by Thomas K. Wong et al.; and is related to U.S. patent application Ser. No. 10/831,376, filed on Apr. 23, 2004, entitled “Transparent File Migration Using Namespace Replication,” by Thomas K. Wong et al., and U.S. patent application Ser. No. 10/831,701, filed on Apr. 23, 2004, entitled “Transparent File Replication Using Namespace Replication,” by Thomas K. Wong et al., each of which is herein incorporated by reference in its entirety.
Provisional Applications:

Number | Date | Country
---|---|---
60/478,154 | Apr. 2003 | US
60/465,579 | Apr. 2003 | US
60/465,578 | Apr. 2003 | US