1. Field of the Invention
This invention relates generally to storage networks and, more specifically, to a network device that uses mirroring when servicing file servers in a decentralized storage network.
2. Description of Related Art
In a computer network, NAS (Network Attached Storage) file servers connected directly to the network provide an inexpensive and easily configurable solution for a storage network. These NAS file servers are self-sufficient because they contain file systems that allow interoperability with clients running any operating system and communication using open protocols. For example, a Unix-based client can use the NFS (Network File System) protocol by Sun Microsystems, Inc. of Santa Clara, Calif. and a Windows-based client can use CIFS (Common Internet File System) by Microsoft Corp. of Redmond, Wash. to access files on a NAS file server. However, the operating system does not affect communication between the client and file server. Thus, NAS file servers provide true universal file access.
By contrast, more expensive and powerful SAN (Storage Area Network) file servers use resources connected by Fibre Channel on a back-end, or dedicated, network. A SAN file system is part of the operating system or an application running on the client. But heterogeneous client operating systems may require additional copies of each file to be stored on the storage network to ensure compatibility on the SAN file server. Additionally, communication between file servers on a SAN uses proprietary protocols, so the file servers are typically provided by a common vendor. As a result, NAS file servers are preferred when price and ease of use are major considerations. However, the benefits of NAS storage networks over SAN storage networks come with drawbacks.
One drawback with NAS file servers is that there is no centralized control. Accordingly, each client must maintain communication channels with each of the NAS file servers separately. When NAS file servers are either added or removed from the storage network, each client must mount or unmount directories for the associated storage resources as appropriate. This is particularly inefficient when there are changes in hardware, but not in the particular files available on the network, such as when a failing NAS file server is swapped out for an identically configured back-up NAS file server.
A related drawback is that a client must be reconfigured each time a file is relocated within the storage network, such as during file migration or file replication. The client generates a NAS file handle that identifies a physical location of the directory or file object on the file server. To access the object, the client sends an object access request directly to the NAS file server. When the file is relocated to a different NAS file server, subsequent requests for access to the file require a new look-up to locate the file and generate a new NAS file handle.
An additional drawback is that NAS file servers are inaccessible during large data transfer operations such as file migrations and replications. Such data transfers typically occur during non-business hours to reduce consequential downtime. However, ever-larger storage capacities increase the amount of time necessary for data transfers. Additionally, many enterprises and applications have a need for data that is always available.
Therefore, what is needed is a network device to provide transparency to clients of file servers such as NAS file servers. Furthermore, there is a need for the network device to allow file migration and replications to occur without the need for client reconfiguration. Moreover, there is a need for the network device to provide data integrity during file migration and replications.
The present invention meets these needs by providing mirroring in a decentralized storage network that is transparent to a client. A NAS switch, in the data path of a client and NAS file servers, reliably coordinates file migration from a source file server to a destination file server, using namespace replication to track new file locations, and supports file replications, snapshot services, and the like. Additionally, the NAS switch maintains data availability during time-consuming data transfers.
An embodiment of a system configured according to the present invention comprises the NAS switch in communication with the client on a front-end of the storage network, and both a source file server and a destination file server on a back-end. The NAS switch associates NAS file handles (e.g., CIFS file handles or NFS file handles) received from the source and destination file servers with switch file handles that are independent of a location. The NAS switch then exports switch file handles to the client. In response to subsequent object access requests from the client, the NAS switch substitutes switch file handles with appropriate NAS file handles for submission to the appropriate NAS file server.
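The association between location-independent switch file handles and location-dependent NAS file handles can be sketched as a simple map. The class and method names below (`HandleMapper`, `export`, `resolve`, `relocate`) are illustrative assumptions, not the patent's implementation:

```python
import uuid

class HandleMapper:
    """Minimal sketch of switch-file-handle association (hypothetical names).

    The switch hands the client a stable, location-independent handle
    and substitutes the current NAS file handle on each request.
    """

    def __init__(self):
        self._nas_by_switch = {}  # switch handle -> current NAS file handle

    def export(self, nas_handle):
        """Issue a location-independent switch handle for a NAS handle."""
        switch_handle = uuid.uuid4().hex  # opaque, stable identifier
        self._nas_by_switch[switch_handle] = nas_handle
        return switch_handle

    def resolve(self, switch_handle):
        """Substitute the switch handle with the current NAS file handle."""
        return self._nas_by_switch[switch_handle]

    def relocate(self, switch_handle, new_nas_handle):
        """Point an existing switch handle at a new location; the
        client-visible handle does not change."""
        self._nas_by_switch[switch_handle] = new_nas_handle

mapper = HandleMapper()
h = mapper.export(("source-server", "fsid-7", "inode-42"))
assert mapper.resolve(h) == ("source-server", "fsid-7", "inode-42")
# After migration, only the mapping changes; the client keeps using h.
mapper.relocate(h, ("dest-server", "fsid-3", "inode-99"))
assert mapper.resolve(h) == ("dest-server", "fsid-3", "inode-99")
```

Because the client only ever sees `h`, relocation is invisible to it, which is the transparency property the summary describes.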
In another embodiment, the NAS switch further comprises a migration module to coordinate the migration of source files at locations on the source file server to destination files at locations on the destination file server. The migration module separately performs namespace replication and data replication. Namespace replication copies the namespace of the directory hierarchy on the source file server to the destination file server. Namespace replication can also include the use of stored file handles as pointers from the source file server to files migrated to the destination file server, or as pointers from the destination file server to files yet to be migrated from the source file server. In one embodiment, the migration module mirrors the namespace during migration to preserve data integrity. Next, the migration module migrates the data and swaps stored file handles.
In yet another embodiment, the migration module updates a file migration table upon successful migration of an object. Accordingly, the migration module enters the location of the object on the source file server and the location of the object on the destination file server. When an object access request is received, the NAS switch searches the file migration table according to the switch file handle. If there is a match, the NAS switch sends the object access request to the location on the destination file server. Otherwise, the NAS switch sends the object access request to the location on the source file server. Advantageously, the migration module provides migration services to decentralized file servers and file servers that do not otherwise natively support migration.
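The table lookup above amounts to a single conditional. A minimal sketch, assuming the table is a mapping from switch file handle to destination location (names are hypothetical):

```python
def route_request(switch_handle, migration_table,
                  source_server="source", destination_server="destination"):
    """Route an object access request per the file migration table.

    If the switch handle matches a migrated object, the request goes to
    the recorded location on the destination file server; otherwise it
    goes unchanged to the source file server.
    """
    if switch_handle in migration_table:
        return destination_server, migration_table[switch_handle]
    return source_server, switch_handle

table = {}
# Before migration: requests go to the source file server.
assert route_request("fh-1", table) == ("source", "fh-1")
# After successful migration, the module records the destination location.
table["fh-1"] = "dest-loc-1"
assert route_request("fh-1", table) == ("destination", "dest-loc-1")
```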
In still another embodiment, during migration, the migration module duplicates requests (e.g., modification requests) to both the namespace on the source file server and the replicated namespace on the destination file server. In another embodiment, during replication, a replication module duplicates requests after the copying in order to maintain namespace mirroring.
The present invention provides mirroring in a storage network that is transparent to the clients. A NAS (Network Attached Storage) switch, in the data path of a client and NAS file servers on the storage network, uses namespace replication to coordinate, e.g., file migration and file replication between decentralized servers, and snapshots, while maintaining data availability to a client. Mirroring can ensure file integrity during namespace replication and data replication. Some embodiments of a system are described below.
The accompanying description is for the purpose of providing a thorough explanation with numerous specific details. Of course, the field of storage networking is such that many different variations of the illustrated and described features of the invention are possible. Those skilled in the art will thus undoubtedly appreciate that the invention can be practiced without some specific details described below, and indeed will see that many other variations and embodiments of the invention can be practiced while still satisfying its teachings and spirit. For example, although the present invention is described with reference to storage networks operating under the NAS protocol, it can similarly be embodied in future protocols for decentralized storage networks other than NAS, or in mixed protocol networks. Accordingly, the present invention should not be understood as being limited to the specific implementations described below, but only by the claims that follow.
The processes, features, or functions of the present invention can be implemented by program instructions that execute in an appropriate computing device. Example computing devices include enterprise servers, application servers, workstations, personal computers, network computers, network appliances, personal digital assistants, game consoles, televisions, set-top boxes, premises automation equipment, point-of-sale terminals, automobiles, and personal communications devices. The program instructions can be distributed on a computer readable medium, storage volume, or the Internet. Program instructions can be in any appropriate form, such as source code, object code, or scripts.
The NAS switch 110 provides continuous transparency to the client 140 with respect to object management. Specifically, the NAS switch can off-load tasks related to physical configurations, object management, object migration, object replication, efficient storage and/or other services on the storage network 175. Preferably, the NAS switch 110 emulates file server processes to the client 140 and emulates client processes to the file servers 120, 130. Accordingly, the client 140 is unaware of the NAS switch 110 since the NAS switch 110 is able to redirect NAS requests intended for the source file server 120 to appropriate locations on the destination file server 130. Thus, the client 140 submits object requests, such as file writes and directory reads, directly to the NAS switch 110. Likewise, the file servers 120, 130 are unaware of the NAS switch 110 since the NAS switch 110 is able to resubmit requests containing server file handles as if they originated from the client 140. To do so, the NAS switch 110 can use mapping, translating, bridging, packet forwarding, other network interface functionality, and other control processes to perform file handle switching, thereby relieving the client 140 of the need to track changes in a file's physical location.
In one embodiment, the NAS switch 110 comprises a client module 112 and a file server module 114 to facilitate communications and file handle switching. The client module 112 receives exported file system directories from the file servers 120, 130 containing NAS file handles. To create compatibility between the client 140 and the NAS switch 110, the client module 112 maps the file system directories to internal switch file systems, which it sends to the client 140. To request an object, the client 140 traverses an exported switch file system and selects a switch file handle, which it sends to the NAS switch 110 along with a requested operation.
The file server module 114 coordinates migration processes. The file server module 114 initiates tasks that are passively performed by the source and destination file servers 120, 130, which may not have native migration capabilities. The file server module 114 replicates a namespace containing the data to be migrated from the source file server 120 to the destination file server 130, and then replicates the associated data. During migration and afterwards, the file server module 114 redirects namespace and file object access requests by the client 140 to appropriate locations. Thus, data remains available to the client 140 during transfers.
In one embodiment, the file server module 114 also tracks reconfigurations resulting from migration, replication, and other object relocation processes (e.g., adding or removing file server capacity) with a nested system of tables, or information otherwise linked to the switch file systems. The switch file handles are static, as they are persistent through the relocation processes, but the associated NAS file handles can be dynamic, as they are selected depending upon an object's current location. To track various copies of an object, the file server module 114 maintains a file handle migration table and a file handle replication table, corresponding to each file system, that map NAS file handles of migrated and replicated objects to locations on the storage network 175. Further embodiments of the file server module 114 are described below.
The client module 112 associates 310 a NAS file handle with a switch file handle as described below.
In general, NAS file handles uniquely identify objects on the file servers 120, 130, such as a directory or file, for as long as that object exists. NAS file handles are file server specific, and are valid only to the file servers 120, 130 that issued the file handles. The process of obtaining a file handle from a file name is called a look-up. The NAS file handle may be formatted according to protocols such as NFS or CIFS as discussed in further detail below, e.g., with reference to Tables 1A and 1B. By contrast, a switch file handle identifies a directory or file object independent of location, making it persistent through file replications, migrations, and other data transfers. The switch file handle can be a modified NAS file handle that refers to an internal system within the NAS switch 110 rather than the source file server 120. This enables the NAS switch 110 to map persistent file handles to a choice of alternative NAS file handles. An original NAS file handle refers to an initial object location on the source file server 120. A stored NAS file handle refers to a NAS file handle, stored as an object on the file servers 120, 130, which points to an alternative file location.
Object access requests handled by the NAS switch 110 include, for example, directory and/or file reads, writes, creation, deletion, moving, and copying. A namespace access refers to an operation accessing or modifying the namespace such as look-up, rename, delete, or create. A file access refers to an operation accessing or modifying files such as read or write. An object can refer to a directory object or a file object. Directory objects can further comprise sub-directories and file objects within the directory. As used herein, various terms are used synonymously to refer to a location of an object prior to migration (e.g., “primary”; “source”; “original”; and “first”) and various terms are used to refer to a location of the same object after migration (e.g., “replica”; “destination”; “substitute”; and “second”). Further embodiments of the NAS switch 110 and methods operating therein are described below.
The client 140 accesses resources on the file servers 120, 130 by submitting a switch file handle to the NAS switch 110, intended for the source file server 120. To find the switch file handle, the client 140 first mounts an exported switch file system containing switch file handles. The client 140 looks up an object to obtain its file handle and submits an associated request. From the perspective of the client 140, transactions are carried out by the file servers 120, 130 having object locations that do not change. Thus, the client 140 interacts with the NAS switch 110 before and after a file replication in the same manner. A user of the client 140 can submit operations through a command line interface, a windows environment, a software application, or otherwise. In one embodiment, the NAS switch 110 further provides access to a storage network 175 other than a NAS storage network.
The source file server 120 is the default or original network file server for the client 140 before file migration. The source file server 120 further comprises source objects 125, which include namespace directories and files such as enterprise data, records, database information, applications, and the like. The source file server 120 can store a table of migrated directories maintained by the NAS switch 110 that correlate results from namespace migration. Moreover, the source file server 120 can store a file handle migration table, maintained by the NAS switch 110, denoting each migrated directory and file object. The source file server 120 comprises, for example, a personal computer using an x86-type processor with an operating system and/or an application, a workstation, a specialized NAS device with an optimized operating system and/or application, a modified server blade, etc.
The destination file server 130 becomes the primary network file server used by the NAS switch 110 after file migration. The destination file server 130 further comprises destination objects 135, which include the replicated namespace directories and source files. The destination file server 130 can comprise the same hardware and/or software as described with reference to the source file server 120. The source and destination file servers 120, 130 are preferably NAS file servers, but can also be file servers using other decentralized protocols that do not inherently support file migration. Further embodiments of the source and destination file servers 120, 130 and related methods are described below.
The network 195 facilitates data transfers between connected hosts (e.g., 110, 140). The connections to the network 195 may be wired and/or wireless, packet and/or circuit switched, and use network protocols such as TCP/IP (Transmission Control Protocol/Internet Protocol), IEEE (Institute of Electrical and Electronics Engineers) 802.11, IEEE 802.3 (i.e., Ethernet), ATM (Asynchronous Transfer Mode), or the like. The network 195 comprises, for example, a LAN (Local Area Network), WAN (Wide Area Network), the Internet, and the like. In one embodiment, the NAS switch 110 acts as a gateway between the client 140, connected to the Internet, and the directory file server 120, and the shadow file servers 130, connected to a LAN. The sub-network 196 is preferably a local area network providing optimal response time to the NAS switch 110. In one embodiment, the sub-network 196 is integrated into the network 195.
Prior to file migration, the file server interface 210 receives a switch file handle with a request from the client 140, which it uses to find an original NAS file handle. The file server interface 210 submits the original NAS file handle with the request to the source file server 120. If the object has yet to change locations in the storage network 175, the file server interface 210 uses the original NAS file handle. The file server interface 210 can submit the switch file handle to the migration module 220 to determine if the object is part of a data migration. Also, the file server interface 210 can submit the switch file handle to the redirection module 230 to determine if the object has completed data migration. In either case, an appropriate NAS file handle is returned for the file server interface 210 to use in forwarding the client request to the appropriate file server 120, 130.
During file migration, a migration module 220 in the NAS switch 110 coordinates migration from the source file server 120 to the destination file server 130 using namespace replication. Namespace replication copies directory metadata of the source file server 120 separately from the data itself. Because the namespace replication is many times faster than the data migration, directory services remain available even while the data migration occurs. The migration module 220, in one embodiment, mirrors the original and replicated namespace to maintain integrity during migration. The migration module 220 can use a file handle migration table (or a file location table) to track mirrored objects by changing a state to “mirrored.” Once the migration of namespace and data has completed, the migration module 220 updates the file handle migration table by changing the state “mirrored” to “migrated.”
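The per-object state transition described above (“mirrored” while migration is in flight, “migrated” once namespace and data have both completed) can be sketched as a small state table. Class and method names are hypothetical:

```python
class FileHandleMigrationTable:
    """Sketch of per-object migration state tracking (illustrative names).

    During migration an object is 'mirrored' (modifications applied to
    both namespaces); once namespace and data migration complete, the
    state is changed to 'migrated'.
    """

    def __init__(self):
        self._state = {}  # switch file handle -> state string

    def mirror(self, switch_handle):
        self._state[switch_handle] = "mirrored"

    def commit(self, switch_handle):
        """Complete migration: only a mirrored object can be committed."""
        if self._state.get(switch_handle) != "mirrored":
            raise ValueError("object was not being mirrored")
        self._state[switch_handle] = "migrated"

    def state(self, switch_handle):
        return self._state.get(switch_handle)

t = FileHandleMigrationTable()
t.mirror("fh-9")
assert t.state("fh-9") == "mirrored"
t.commit("fh-9")
assert t.state("fh-9") == "migrated"
```

A replication variant could commit to a “replicated” state instead, as the embodiment later in this description suggests.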
After file migration, the redirection module 230 looks up switch file handles received from the client 140 in the file handle migration table. If an object has been migrated, the redirection module 230 outputs a destination NAS file handle corresponding to a location on the destination file server 130.
The migration module 220 performs 320 file migration using namespace replication as described below.
The redirection module 230 redirects 330 NAS requests concerning migrated files as described below.
The client module 112 generates 420 switch file handles independent of object locations in the primary file server 120. The client module 112 organizes exported file systems from the file server 120 by replacing file system or tree identifiers with a switch file system number as shown below in Tables 2A and 2B. The client module 112 exports 430 the switch file system to the client 140 to use in requesting operations. In the reverse process, the NAS switch 110 receives the NAS request and searches replicated file handles and/or replicated namespaces using the NAS file handle. Accordingly, the file server interface 210 checks entries of nested tables maintained by the redirection module 230. The file server interface 210 generates a NAS file handle from the switch file handle based on an object location. Examples of the contents of NFS and CIFS file handles are shown in Tables 1A and 1B, while examples of switch file handles (modified NFS and CIFS file handles) are shown in Tables 2A and 2B:
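The substitution the tables illustrate, replacing the server's file-system or tree identifier with a switch file system number while preserving the object identifier, can be sketched as follows. The field names (`fs_id`, `file_id`) are illustrative assumptions, not the actual NFS or CIFS handle layout:

```python
def to_switch_handle(nas_handle, switch_fs_number):
    """Derive a switch file handle from a NAS file handle by replacing
    the location-bearing file system/tree identifier with a switch
    file system number (sketch with hypothetical field names)."""
    switch_handle = dict(nas_handle)      # copy; NAS handle is unchanged
    switch_handle["fs_id"] = switch_fs_number
    return switch_handle

nas = {"fs_id": "server1-fs0", "file_id": 4711}
sw = to_switch_handle(nas, 12)
assert sw == {"fs_id": 12, "file_id": 4711}
# The object identifier survives; only the location-bearing field changes,
# which is what makes the switch handle persistent across relocations.
assert sw["file_id"] == nas["file_id"]
```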
As discussed below, after objects have been migrated, the NAS switch 110 can access objects at their new locations using updated NAS file handles.
In one embodiment, if a critical directory request is issued to the source file server 120 during file migration 510, the migration module 220 resubmits 530 the request to update the replicated namespace. In other embodiments, the copied object can be deleted and then recopied so that the copied object reflects any modifications. Preferably, the replicated namespace is stored on the destination file server 130. As a result, when critical operations such as create directory, create file, delete directory, delete file, and the like affect the source namespace, the same modification is made to the replicated namespace.
In one embodiment, the migration module 220 serializes critical directory requests in order to maintain the mirror. If the source file server 120 executes a series of modifications in a different order, an object and its replicated object can arrive at different states, and the mirror will be invalid. Therefore, the migration module 220 ensures that the requests are executed in the same order. For example, the NAS switch 110 can receive two requests at the same time, such as Request A to write the text “rainy” at the beginning of the file “report.txt”, and Request B to write the text “sunny” at the beginning of the file “report.txt.” Also, a single request can be broken up into two separate requests (e.g., a move request involves a delete request and a create request). The requests can be queued into separate queues corresponding to the source and destination servers 120, 130, which operate under a set of rules. More specifically, only requests at the front of the queues (e.g., first-in-first-out queues) can be issued. A request and its mirrored counterpart are removed from the front of the queues only when replies have been received by the NAS switch 110. One of the replies is forwarded to the client 140 and the other reply can be discarded after being examined. Then, the next request can be issued from the queues. In some cases, one server may have successfully executed the request while the other failed (e.g., due to lack of disk space). Differences in the replies can signal that the mirror has an error. In response, the migration module 220 can break (or abort) the mirror to clean up the destination export.
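The queue discipline above can be sketched with paired FIFO queues: each request is issued to both servers in order, the replies are compared, and a divergence flags a broken mirror. This is a minimal sketch; the executor callables and the simple synchronous loop are assumptions standing in for real server round-trips:

```python
from collections import deque

def mirror_requests(requests, source_exec, dest_exec):
    """Serialize modifying requests to source and replica in the same
    order; compare replies and flag a broken mirror on divergence."""
    src_q, dst_q = deque(requests), deque(requests)
    replies, mirror_ok = [], True
    while src_q and mirror_ok:
        # Only the requests at the front of the FIFO queues are issued.
        req = src_q.popleft()
        mirrored = dst_q.popleft()
        src_reply = source_exec(req)
        dst_reply = dest_exec(mirrored)
        if src_reply != dst_reply:   # e.g., replica out of disk space
            mirror_ok = False        # break/abort the mirror
        replies.append(src_reply)    # one reply is forwarded to the client
    return replies, mirror_ok

ok = lambda req: ("ok", req)
replies, intact = mirror_requests(["create a", "delete b"], ok, ok)
assert intact and replies == [("ok", "create a"), ("ok", "delete b")]

# A failure on the replica makes the replies differ, invalidating the mirror.
fail_on_b = lambda req: ("err", req) if req == "delete b" else ("ok", req)
_, intact = mirror_requests(["create a", "delete b"], ok, fail_on_b)
assert intact is False
```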
Once the directory replication is complete 540, critical directory operations can be submitted directly to the replicated namespace. In a separate process, the migration module 220 copies 550 data from the source file server 120 to the destination file server 130. The objects involved in data copying can also be mirrored during migration.
As used during migration, mirroring provides duplicate sets of migrated namespace and data in case of failure. In another example, mirroring during and after replication provides full functionality from original and replicated data. In still another example, mirroring prior to migration or replication provides a snapshot of a file system at a particular instance.
If the current object is a directory 640, the migration module 220 creates 650 a directory in the destination file server 130 with the same name as the current directory in the primary file server 120 (e.g., using the MkMirror function). On the other hand, if the current object is a file 640, the migration module 220 creates 645 a file in the current destination directory containing a stored file handle for the object. In one embodiment, the stored file handle is similar to the switch file handle. Preferably, the stored file handle is a predetermined size so that the NAS switch 110 can determine whether a file contains a stored file handle merely by inspecting the file's size. An exemplary stored file format is shown in Table 3:
Note, however, that there can be variations of the stored file format. The migration module 220 adds 655 a mapping entry in a replicated file list with source and destination switch file handles.
If all objects have been processed 660, no errors occurred in the process 670, and there are no more directories to replicate 680, the migration module 220 commits 690 the namespace replication. However, if there are more objects to be processed 660, the migration module 220 continues the process from selecting 630 objects. If there was an error in the directory or file creation 670, the migration module 220 deletes 675 the destination directory, and repeats the process from adding 620 mapping entries. Also, if there are more directories to process 680, the process returns to selecting the next source directory.
The migration module 220 commits 690 the namespace as shown in FIG. 7.
If no error occurs during the data transfer 830, the destination file server 130 commits 840 the data migration.
Note that in a file replication process, the state in the file handle migration table can be changed from “mirrored” to “replicated.” In the replicated state, the NAS switch 110 still serializes and mirrors the modifying requests to both the source and replica file servers 120, 130. If both copies are equally up-to-date, then the NAS switch 110 can issue the request based on the lowest load. In a snapshot process, the “mirror” state can be dropped to preserve the current snapshot in the source server 120 while requests are forwarded to the replica server 130.
In one embodiment, the migration module 220 reconstructs the file handle migration table after, for example, a device crash or data corruption. To do so, the migration module 220 walks through the namespace of the source file server 120. Since the stored file handles have a consistent size, the migration module 220 can quickly recognize stored file handles and retrieve pointer information. Each association is added as an entry in a reconstructed file handle migration table.
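The reconstruction walk can be sketched as a scan over the source namespace that keeps only files of the consistent stored-handle size. The in-memory `namespace` mapping, the 32-byte size, and the null-padded encoding are assumptions for illustration:

```python
def reconstruct_migration_table(namespace, stored_handle_size=32):
    """Rebuild a file handle migration table by walking a namespace and
    recognizing stored file handles by their consistent size.

    `namespace` maps path -> file contents (a stand-in for walking a
    real source file server's directory tree).
    """
    table = {}
    for path, contents in namespace.items():
        if len(contents) == stored_handle_size:
            # Stored handle: contents point to the object's migrated location.
            table[path] = contents.rstrip(b"\0").decode()
    return table

ns = {
    "/a/report.txt": b"dest:fs3:inode99".ljust(32, b"\0"),  # migrated object
    "/a/notes.txt": b"plain data",                          # not migrated
}
t = reconstruct_migration_table(ns)
assert t == {"/a/report.txt": "dest:fs3:inode99"}
```

Only the pointer information survives the crash in this scheme, which is why the consistent size matters: no per-file metadata read is needed to classify an object.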
This application claims the benefit of U.S. Provisional Application No. 60/641,217, filed on Dec. 31, 2004, entitled “Methods and Apparatus for Directory and File Mirroring with Applications in Migration, Replication and Snapshot”; and claims priority as a continuation-in-part to both U.S. patent application Ser. No. 10/831,701, filed on Apr. 23, 2004 now U.S. Pat. No. 7,587,422, entitled “Transparent File Replication Using Namespace Replication,” by Thomas K. Wong et al., and to U.S. patent application Ser. No. 10/831,376, filed on Apr. 23, 2004 now U.S. Pat. No. 7,346,664, entitled “Transparent File Migration Using Namespace Replication,” each of which applications are herein incorporated by reference in their entirety.
Number | Name | Date | Kind |
---|---|---|---|
5689701 | Ault et al. | Nov 1997 | A |
5774715 | Madany et al. | Jun 1998 | A |
5890169 | Wong et al. | Mar 1999 | A |
5933825 | McClaughry et al. | Aug 1999 | A |
6101508 | Wolff | Aug 2000 | A |
6192408 | Vahalia et al. | Feb 2001 | B1 |
6314460 | Knight et al. | Nov 2001 | B1 |
6353837 | Blumenau | Mar 2002 | B1 |
6389427 | Faulkner | May 2002 | B1 |
6408298 | Van et al. | Jun 2002 | B1 |
6442548 | Balabine et al. | Aug 2002 | B1 |
6453354 | Jiang et al. | Sep 2002 | B1 |
6473401 | Kong et al. | Oct 2002 | B1 |
6606690 | Padovano | Aug 2003 | B2 |
6615365 | Jenevein et al. | Sep 2003 | B1 |
6633887 | Suzuki et al. | Oct 2003 | B2 |
6697846 | Soltis | Feb 2004 | B1 |
6711625 | Simpson | Mar 2004 | B1 |
6738883 | March et al. | May 2004 | B2 |
6931410 | Anderson et al. | Aug 2005 | B2 |
6938039 | Bober et al. | Aug 2005 | B1 |
6983379 | Spalink et al. | Jan 2006 | B1 |
6985956 | Luke et al. | Jan 2006 | B2 |
6996714 | Halasz et al. | Feb 2006 | B1 |
7054927 | Ulrich et al. | May 2006 | B2 |
7072917 | Wong et al. | Jul 2006 | B2 |
7089293 | Grosner et al. | Aug 2006 | B2 |
7096253 | Vinson et al. | Aug 2006 | B2 |
7120666 | McCanne et al. | Oct 2006 | B2 |
7127477 | Duncombe et al. | Oct 2006 | B2 |
7272613 | Sim et al. | Sep 2007 | B2 |
7272654 | Brendel | Sep 2007 | B1 |
7308709 | Brezak, Jr. et al. | Dec 2007 | B1 |
7313579 | Murotani | Dec 2007 | B2 |
7346664 | Wong et al. | Mar 2008 | B2 |
7441011 | Lin et al. | Oct 2008 | B2 |
7475142 | Sharma et al. | Jan 2009 | B2 |
20020013832 | Hubbard | Jan 2002 | A1 |
20020111929 | Pudipeddi et al. | Aug 2002 | A1 |
20020120763 | Miloushev et al. | Aug 2002 | A1 |
20020133491 | Sim et al. | Sep 2002 | A1 |
20020161855 | Manczak et al. | Oct 2002 | A1 |
20020199060 | Peters et al. | Dec 2002 | A1 |
20030037061 | Sastri et al. | Feb 2003 | A1 |
20030046270 | Leung et al. | Mar 2003 | A1 |
20030046335 | Doyle et al. | Mar 2003 | A1 |
20030056112 | Vinson et al. | Mar 2003 | A1 |
20030110263 | Shillo | Jun 2003 | A1 |
20030126247 | Strasser et al. | Jul 2003 | A1 |
20030140051 | Fujiwara et al. | Jul 2003 | A1 |
20030154236 | Dar et al. | Aug 2003 | A1 |
20030177178 | Jones et al. | Sep 2003 | A1 |
20030182313 | Federwisch et al. | Sep 2003 | A1 |
20030195903 | Manley et al. | Oct 2003 | A1 |
20030204613 | Hudson et al. | Oct 2003 | A1 |
20030204670 | Holt et al. | Oct 2003 | A1 |
20030220985 | Kawamoto et al. | Nov 2003 | A1 |
20040010714 | Stewart | Jan 2004 | A1 |
20040024963 | Talagala et al. | Feb 2004 | A1 |
20040054748 | Ackaouy et al. | Mar 2004 | A1 |
20040078465 | Coates et al. | Apr 2004 | A1 |
20040088297 | Coates et al. | May 2004 | A1 |
20040103104 | Hara et al. | May 2004 | A1 |
20040117438 | Considine et al. | Jun 2004 | A1 |
20040133606 | Miloushev et al. | Jul 2004 | A1 |
20040133652 | Miloushev et al. | Jul 2004 | A1 |
20040139167 | Edsall et al. | Jul 2004 | A1 |
20040153481 | Talluri | Aug 2004 | A1 |
20040267752 | Wong et al. | Dec 2004 | A1 |
20040267831 | Wong et al. | Dec 2004 | A1 |
20050033932 | Pudipeddi et al. | Feb 2005 | A1 |
20050055402 | Sato | Mar 2005 | A1 |
20050125503 | Iyengar | Jun 2005 | A1 |
20050188211 | Scott et al. | Aug 2005 | A1 |
20050198062 | Shapiro | Sep 2005 | A1 |
20050262102 | Anderson et al. | Nov 2005 | A1 |
20060080371 | Wong et al. | Apr 2006 | A1 |
20060161746 | Wong et al. | Jul 2006 | A1 |
20060271598 | Wong et al. | Nov 2006 | A1 |
20070024919 | Wong et al. | Feb 2007 | A1 |
20070136308 | Tsirigotis et al. | Jun 2007 | A1 |
20080114854 | Wong et al. | May 2008 | A1 |
Number | Date | Country |
---|---|---|
0 926 585 | Jun 1999 | EP |
1 209 556 | May 2002 | EP |
2004097686 | Apr 2004 | WO |
2004097571 | Nov 2004 | WO |
2004097572 | Nov 2004 | WO |
2004097624 | Nov 2004 | WO |
2005029251 | Mar 2005 | WO |
2006039689 | Apr 2006 | WO |
2007041456 | Oct 2006 | WO |
2007002855 | Jan 2007 | WO |
Number | Date | Country | |
---|---|---|---|
20060161746 A1 | Jul 2006 | US |
Number | Date | Country | |
---|---|---|---|
60641217 | Dec 2004 | US |
Number | Date | Country | |
---|---|---|---|
Parent | 10831701 | Apr 2004 | US |
Child | 11324845 | US | |
Parent | 10831376 | Apr 2004 | US |
Child | 10831701 | US |