Claims
- 1. A data path accelerator comprising:
a network interface for communicating with one or more clients;
a storage interface for communicating with one or more disk drives;
a metadata processor configured to queue network transaction requests to said network interface and storage transaction requests to said storage interface, said metadata processor further configured to manage file system metadata information, said file system metadata information comprising disk locations of one or more distributed parity groups on said one or more disk drives, each distributed parity group comprising one or more data blocks and a parity block, said file system metadata information further comprising information regarding a length of each distributed parity group; and
a data engine configured to communicate with said storage interface to receive data from or write data to said one or more disk drives in satisfaction of said storage transaction requests, said data engine further configured to communicate with said network interface to receive data from or send data to said one or more clients in satisfaction of said network transaction requests, said data engine comprising at least one data cache and one or more parity engines to perform parity calculations for cached distributed parity groups.
- 2. The data path accelerator of claim 1, wherein said network interface comprises one or more Fibre Channel interfaces.
- 3. The data path accelerator of claim 1, wherein said storage interface comprises one or more Fibre Channel interfaces.
- 4. The data path accelerator of claim 1, wherein said storage interface comprises one or more SCSI interfaces.
- 5. The data path accelerator of claim 1, wherein said storage interface comprises one or more IDE interfaces.
- 6. The data path accelerator of claim 1, wherein at least one of said storage interface and said network interface comprises an InfiniBand interface.
- 7. The data path accelerator of claim 1, wherein said one or more parity engines compute exclusive- or parity.
- 8. The data path accelerator of claim 1, wherein at least one of said one or more parity engines is configured to regenerate lost data at least in part from parity data.
- 9. The data path accelerator of claim 1, wherein each of said storage transaction requests comprises an opcode and a parity index.
- 10. The data path accelerator of claim 1, wherein each of said network transaction requests comprises an opcode.
- 11. The data path accelerator of claim 1, wherein said network transaction requests and said storage transaction requests each comprise an opcode to specify at least one of a read from cache, a write to cache, an XOR write to cache, a write to a first cache with an XOR write to a second cache, and a write to said first cache and said second cache.
- 12. The data path accelerator of claim 1, wherein said metadata processor communicates with said storage interface using a PCI bus.
- 13. The data path accelerator of claim 12, wherein said metadata processor queues transactions to said storage interface.
- 14. The data path accelerator of claim 1, wherein said metadata processor communicates with said storage interface using a first memory-mapped bus and said metadata processor communicates with said network interface using a second memory-mapped bus.
- 15. The data path accelerator of claim 1, wherein said file system metadata information further comprises directory information that describes a directory structure of at least a portion of a network file system.
- 16. The data path accelerator of claim 1, wherein said file system metadata information further comprises a directory structure of a portion of a distributed file system that spans multiple file servers.
- 17. The data path accelerator of claim 16, wherein said file system metadata information further comprises server identifiers that identify respective servers for accessing files catalogued in a directory structure that spans a plurality of servers.
- 18. The data path accelerator of claim 1, wherein a size of a first distributed parity group is independent of a size of a second distributed parity group.
- 19. A method of providing file services, comprising:
receiving a file request from a client, said file request received by a first processing module;
accessing metadata to locate file data corresponding to said file request, said metadata stored in a metadata cache provided to said first processing module;
queuing one or more storage transaction requests to a storage interface, said storage transaction requests queued by said first processing module;
caching a distributed parity group as a cached distributed parity group, said cached distributed parity group retrieved as a result of said storage transaction requests in a data cache operably connected to a data engine, said data engine operating asynchronously with respect to said first processing module;
using a parity engine in said data engine to compute parity for said cached distributed parity group;
queuing one or more network transaction requests to a network interface, said network transaction requests queued by said first processing module upon completion of at least one of said storage transaction requests; and
sending at least a portion of said cached distributed parity group to said client according to said network transaction requests, wherein said sending operation is performed asynchronously, with respect to said first processing module, by said data engine and said network interface.
- 20. The method of claim 19, wherein said network interface comprises one or more Fibre Channel interfaces.
- 21. The method of claim 19, wherein said storage interface comprises one or more Fibre Channel interfaces.
- 22. The method of claim 19, wherein said storage interface comprises one or more SCSI interfaces.
- 23. The method of claim 19, wherein said storage interface comprises one or more IDE interfaces.
- 24. The method of claim 19, wherein at least one of said storage interface and said network interface comprises an InfiniBand interface.
- 25. The method of claim 19, wherein said parity engine regenerates corrupted data in said cached distributed parity group by using uncorrupted portions of said distributed parity group.
- 26. The method of claim 19, wherein said storage transaction request specifies at least one of a read from cache, a write to cache, an XOR write to cache, a write to a first cache with an XOR write to a second cache, and a write to said first cache and said second cache.
- 27. The method of claim 19, wherein said first processing module communicates with said storage interface using a memory-mapped bus.
- 28. The method of claim 19, wherein said first processing module communicates with said storage interface using a first memory-mapped bus and said first processing module communicates with said network interface using a second memory-mapped bus.
- 29. The method of claim 19, further comprising modifying metadata stored in said metadata cache, said metadata comprising directory information that describes a directory structure of at least a portion of a network file system.
- 30. The method of claim 19, further comprising modifying metadata stored in said metadata cache, said metadata comprising information for locating data stored in files on one or more storage devices.
- 31. The method of claim 30, wherein said information for locating data comprises one or more server identifiers.
- 32. The method of claim 19, wherein said metadata identifies disk addresses of blocks in said cached distributed parity group.
- 33. The method of claim 19, wherein said metadata identifies said cached distributed parity group, said cached distributed parity group comprising a plurality of information blocks, said information blocks comprising one or more data blocks, said information blocks further comprising a parity block, each of said information blocks stored on a different disk drive, said first processing module assigning disk locations for said information blocks, said data engine accessing said data blocks and generating said parity block.
- 34. The method of claim 19, wherein a size of said cached distributed parity group from a first file is independent of a size of a second distributed parity group in said first file.
- 35. An apparatus, comprising:
means for receiving a file request from a client, accessing metadata to locate disk addresses of data blocks of a distributed parity group and a parity block of said distributed parity group corresponding to said file request, and queuing one or more storage transaction requests, each storage transaction request providing information regarding said disk addresses and cache addresses;
means for caching said distributed parity group as a cached distributed parity group in response to said storage transaction requests; and
means for computing parity for said cached distributed parity group.
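The parity operations recited in the claims above (computing a parity block for a distributed parity group by exclusive-or, and regenerating lost data at least in part from parity data) can be illustrated with a minimal sketch. This is not the patent's implementation — the claims describe hardware parity engines in a data path ASIC — but the underlying XOR arithmetic is the same; the function names and block sizes here are illustrative assumptions.

```python
# Illustrative sketch only: byte-wise XOR parity over a "distributed
# parity group" of equal-length data blocks, and regeneration of a
# single lost block from the surviving blocks plus the parity block.
# (The patent claims a hardware parity engine; names here are invented.)

def xor_blocks(blocks):
    """Return the byte-wise XOR of equal-length blocks (the parity block)."""
    parity = bytearray(len(blocks[0]))
    for block in blocks:
        for i, byte in enumerate(block):
            parity[i] ^= byte
    return bytes(parity)

def regenerate(surviving_blocks, parity_block):
    """Recover one lost data block: XOR the survivors with the parity.

    Works because XOR-ing all blocks of a group (data + parity) yields
    zero, so the missing block equals the XOR of everything else.
    """
    return xor_blocks(list(surviving_blocks) + [parity_block])

data = [b"\x01\x02", b"\x03\x04", b"\x05\x06"]
parity = xor_blocks(data)                      # b"\x07\x00"
lost = regenerate([data[0], data[2]], parity)  # recovers data[1]
```

Because the parity is a plain XOR, the same routine serves both roles: generating the parity block when a group is written, and reconstructing a missing member when one disk's block is unavailable, which is the regeneration behavior recited in claims 8 and 25.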
REFERENCE TO RELATED APPLICATIONS
[0001] The present application claims priority benefit under 35 U.S.C. § 119(e) from all of the following U.S. Provisional Applications, the contents of which are hereby incorporated by reference in their entirety:
[0002] U.S. Provisional Application No. 60/264,671, filed Jan. 29, 2001, titled “DYNAMICALLY DISTRIBUTED FILE SYSTEM”;
[0003] U.S. Provisional Application No. 60/264,694, filed Jan. 29, 2001, titled “A DATA PATH ACCELERATOR ASIC FOR HIGH PERFORMANCE STORAGE SYSTEMS”;
[0004] U.S. Provisional Application No. 60/264,672, filed Jan. 29, 2001, titled “INTEGRATED FILE SYSTEM/PARITY DATA PROTECTION”;
[0005] U.S. Provisional Application No. 60/264,673, filed Jan. 29, 2001, titled “DISTRIBUTED PARITY DATA PROTECTION”;
[0006] U.S. Provisional Application No. 60/264,670, filed Jan. 29, 2001, titled “AUTOMATIC IDENTIFICATION AND UTILIZATION OF RESOURCES IN A DISTRIBUTED FILE SERVER”;
[0007] U.S. Provisional Application No. 60/264,669, filed Jan. 29, 2001, titled “DATA FLOW CONTROLLER ARCHITECTURE FOR HIGH PERFORMANCE STORAGE SYSTEMS”;
[0008] U.S. Provisional Application No. 60/264,668, filed Jan. 29, 2001, titled “ADAPTIVE LOAD BALANCING FOR A DISTRIBUTED FILE SERVER”; and
[0009] U.S. Provisional Application No. 60/302,424, filed Jun. 29, 2001, titled “DYNAMICALLY DISTRIBUTED FILE SYSTEM”.