Claims
- 1. A file server comprising:
  a network interface for communicating with one or more clients, said network interface comprising a network transaction queue;
  a storage interface for communicating with one or more disk drives, said storage interface comprising a storage transaction queue;
  a metadata processor configured to communicate with said network interface across a first memory-mapped bus and configured to communicate with said storage interface across a second memory-mapped bus, said metadata processor configured to queue network transaction requests to said network interface in response to file access requests from said clients, said metadata processor configured to queue storage transaction requests in response to file access requests from said clients, said network transaction requests and said storage transaction requests comprising address information and opcode information; and
  a data engine configured to communicate with said network interface across said first memory-mapped bus, said data engine configured to communicate with said storage interface across said second memory-mapped bus, said data engine configured to receive first address words and third address words from said network interface and to receive second address words and fourth address words from said storage interface, said first address words comprising first address bits and first opcode bits, said second address words comprising second address bits and second opcode bits, said third address words comprising third address bits and third opcode bits, said fourth address words comprising fourth address bits and fourth opcode bits, said data engine receiving first data from said first memory-mapped bus and storing said first data in a data cache according to said first address bits and said first opcode bits, said data engine receiving second data from said second memory-mapped bus and storing said second data in said data cache according to said second address bits and said second opcode bits, said data engine providing third data to said first memory-mapped bus from said data cache according to said third address bits and said third opcode bits, said data engine providing fourth data to said second memory-mapped bus from said data cache according to said fourth address bits and said fourth opcode bits.
- 2. The file server of claim 1, wherein said network interface comprises a Fibre Channel interface.
- 3. The file server of claim 1, wherein said storage interface comprises a Fibre Channel interface.
- 4. The file server of claim 1, wherein said storage interface comprises a SCSI interface.
- 5. The file server of claim 1, wherein said storage interface comprises an IDE interface.
- 6. The file server of claim 1, wherein at least one of said storage interface and said network interface comprises an InfiniBand interface.
- 7. The file server of claim 1, said data engine further configured to generate a parity block from one or more data blocks.
- 8. The file server of claim 1, wherein said data engine is further configured to generate an exclusive-or parity block from one or more data blocks.
- 9. The file server of claim 1, wherein said data engine is further configured to regenerate lost data at least in part from parity data.
- 10. The file server of claim 1, wherein said first opcode bits comprise a code to specify at least one of a read from cache, a write to cache, an XOR write to cache, a write to a first cache with an XOR write to a second cache, and a write to said first cache and said second cache.
- 11. The file server of claim 1, wherein said first memory-mapped bus comprises a PCI bus.
- 12. The file server of claim 1, wherein said second memory-mapped bus comprises a PCI bus.
- 13. The file server of claim 1, further comprising a metadata cache operably connected to said metadata processor.
- 14. The file server of claim 1, wherein said metadata processor manages file system metadata, said file system metadata comprising directory information that describes a directory structure of at least a portion of a network file system.
- 15. The file server of claim 1, wherein said metadata processor manages file system metadata, said file system metadata configured to describe a directory structure of a portion of a distributed file system that aggregates files across a plurality of servers, said metadata comprising location information for files catalogued in said directory structure, said location information comprising server identifiers, disk identifiers, and logical block identifiers.
- 16. The file server of claim 1, wherein said metadata processor manages file system metadata, wherein said file system metadata identifies data blocks stored on one or more disk drives and corresponding parity blocks stored on one or more of said disk drives.
- 17. The file server of claim 1, wherein said metadata processor manages file system metadata, and wherein said file system metadata identifies parity groups, said parity groups comprising a plurality of information blocks, said information blocks comprising one or more data blocks, said information blocks further comprising a parity block, each of said information blocks stored on a different disk drive, said file system metadata identifying a disk drive and a logical block location of each information block.
- 18. The file server of claim 17, wherein a number of information blocks in a first parity group in a file is independent of a number of information blocks of a second parity group in said file.
- 19. A method of providing file services, comprising:
  receiving a file request from a client, said request received by a first processing module;
  accessing metadata to locate file data corresponding to said file request, said metadata stored in a metadata cache provided to said first processing module;
  queuing a storage transaction request to a storage interface, said storage transaction request queued by said first processing module, said storage transaction request comprising address information and command information;
  storing disk data retrieved as a result of said storage transaction request in a data cache operably connected to a data engine, said data engine operating asynchronously with respect to said first processing module, said disk data stored in said data cache according to address information and command information provided in said storage transaction request;
  notifying said first processing module of a completion of said storage transaction request;
  queuing a network transaction request to a network interface, said network transaction request queued by said first processing module upon completion of said storage transaction request, said network transaction request comprising address information and command information; and
  sending file data from said data cache to said client according to address information and command information in said network transaction request.
- 20. The method of claim 19, wherein said network interface comprises at least one Fibre Channel interface.
- 21. The method of claim 19, wherein said storage interface comprises at least one Fibre Channel interface.
- 22. The method of claim 19, wherein said storage interface comprises one or more SCSI interfaces.
- 23. The method of claim 19, wherein said storage interface comprises one or more IDE interfaces.
- 24. The method of claim 19, wherein at least one of said storage interface and said network interface comprises an InfiniBand interface.
- 25. The method of claim 19, further comprising computing a parity block for one or more data blocks stored in said data cache.
- 26. The method of claim 19, further comprising said data engine regenerating a lost data block using data from one or more parity blocks and one or more data blocks in a parity group corresponding to said lost data block.
- 27. The method of claim 19, wherein said storage transaction request further comprises a parity index.
- 28. The method of claim 19, wherein an opcode comprises bits to specify at least one of a read from cache, a write to cache, an XOR write to cache, a write to a first cache with an XOR write to a second cache, and a write to said first cache and said second cache.
- 29. The method of claim 28, wherein said opcode is a portion of a PCI address.
- 30. The method of claim 19, wherein said storage transaction request is coded into a PCI address.
- 31. The method of claim 19, wherein said storage transaction comprises at least one parity operation performed by said data engine.
- 32. The method of claim 19, further comprising modifying metadata stored in said metadata cache, said metadata comprising directory information that describes a directory structure of at least a portion of a network file system.
- 33. The method of claim 19, further comprising modifying metadata stored in said metadata cache, said metadata comprising information for locating data stored in files on one or more storage devices.
- 34. The method of claim 33, wherein said information for locating data comprises one or more server identifiers.
- 35. The method of claim 19, wherein said metadata identifies parity groups, said parity groups comprising data blocks stored on one or more disk drives and corresponding parity blocks stored on another disk drive, said metadata comprising information regarding a disk location for each of said data blocks and for each of said parity blocks.
- 36. The method of claim 19, wherein said metadata identifies parity groups, said parity groups comprising a plurality of information blocks, said information blocks comprising one or more data blocks, said information blocks further comprising a parity block, each of said information blocks stored on a different disk drive, said first processing module assigning disk locations for said information blocks, said data engine accessing said data blocks and generating said parity block.
- 37. The method of claim 36, wherein a size of a first parity group is independent of a size of a second parity group.
- 38. An apparatus comprising:
  a network interface for communicating with one or more clients;
  a storage interface for communicating with one or more disk drives;
  a processor for receiving file requests from said clients, managing file system metadata, queuing network transaction requests to said network interface, and queuing storage transaction requests to said storage interface;
  a data engine comprising a network-side interface and a storage-side interface, said data engine configured to accept received data from said storage interface and store at least a portion of said received data in at least one data cache according to a first address word containing a first opcode, said data engine configured to send file data from said at least one data cache to said network interface according to a second address word containing a second opcode;
  means for communicating between said network interface, said processor, and said network-side interface; and
  means for communicating between said storage interface, said processor, and said storage-side interface.
- 39. The apparatus of claim 38, said data engine comprising means for computing parity data for a distributed parity group.
- 40. The apparatus of claim 38, said data engine comprising means for recovering lost data in a distributed parity group.
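The claims above describe a data engine that interprets memory-mapped address words as a combination of cache-address bits and opcode bits (claims 1, 10, and 28), with XOR writes used to build and regenerate parity (claims 7-9 and 25-26). The following sketch models that behavior in software. It is illustrative only, not the patented implementation: the bit widths, opcode values, and all class and function names are assumptions chosen for clarity.

```python
# Illustrative model (not from the patent): an address word carries both a
# cache address and an opcode, and XOR writes accumulate parity over the
# data blocks of a parity group. Field widths and opcode values are
# hypothetical.

ADDR_BITS = 29           # assumed: low bits select a cache block
OP_READ = 0              # read from cache
OP_WRITE = 1             # write to cache
OP_XOR_WRITE = 2         # XOR incoming data into the cached block (parity)

def split_address_word(word: int) -> tuple[int, int]:
    """Split a memory-mapped address word into (opcode, cache address)."""
    opcode = word >> ADDR_BITS
    address = word & ((1 << ADDR_BITS) - 1)
    return opcode, address

class DataEngine:
    """Toy data engine: a block cache driven by opcode-bearing addresses."""

    def __init__(self) -> None:
        self.cache: dict[int, bytes] = {}

    def store(self, word: int, data: bytes) -> None:
        opcode, addr = split_address_word(word)
        if opcode == OP_WRITE:
            self.cache[addr] = data
        elif opcode == OP_XOR_WRITE:
            # XOR write: fold this block into the running parity at addr.
            old = self.cache.get(addr, bytes(len(data)))
            self.cache[addr] = bytes(a ^ b for a, b in zip(old, data))

    def load(self, word: int) -> bytes:
        _, addr = split_address_word(word)
        return self.cache[addr]

# Building parity for a two-block group, then regenerating a lost block:
engine = DataEngine()
d1, d2 = b"\x0f\x0f", b"\xf0\x01"
parity_word = (OP_XOR_WRITE << ADDR_BITS) | 0x10
engine.store(parity_word, d1)
engine.store(parity_word, d2)
parity = engine.load((OP_READ << ADDR_BITS) | 0x10)
recovered_d1 = bytes(a ^ b for a, b in zip(parity, d2))
```

Because XOR is its own inverse, the same XOR-write opcode serves both parity generation and data regeneration: XORing the parity block with the surviving data blocks of the group yields the lost block.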
REFERENCE TO RELATED APPLICATIONS
[0001] The present application claims priority benefit under 35 U.S.C. §119(e) from all of the following U.S. Provisional Applications, the contents of which are hereby incorporated by reference in their entirety:
[0002] U.S. Provisional Application No. 60/264671, filed Jan. 29, 2001, titled “DYNAMICALLY DISTRIBUTED FILE SYSTEM”;
[0003] U.S. Provisional Application No. 60/264694, filed Jan. 29, 2001, titled “A DATA PATH ACCELERATOR ASIC FOR HIGH PERFORMANCE STORAGE SYSTEMS”;
[0004] U.S. Provisional Application No. 60/264672, filed Jan. 29, 2001, titled “INTEGRATED FILE SYSTEM/PARITY DATA PROTECTION”;
[0005] U.S. Provisional Application No. 60/264673, filed Jan. 29, 2001, titled “DISTRIBUTED PARITY DATA PROTECTION”;
[0006] U.S. Provisional Application No. 60/264670, filed Jan. 29, 2001, titled “AUTOMATIC IDENTIFICATION AND UTILIZATION OF RESOURCES IN A DISTRIBUTED FILE SERVER”;
[0007] U.S. Provisional Application No. 60/264669, filed Jan. 29, 2001, titled “DATA FLOW CONTROLLER ARCHITECTURE FOR HIGH PERFORMANCE STORAGE SYSTEMS”;
[0008] U.S. Provisional Application No. 60/264668, filed Jan. 29, 2001, titled “ADAPTIVE LOAD BALANCING FOR A DISTRIBUTED FILE SERVER”; and
[0009] U.S. Provisional Application No. 60/302424, filed Jun. 29, 2001, titled “DYNAMICALLY DISTRIBUTED FILE SYSTEM”.
Provisional Applications (8)

| Number   | Date     | Country |
| -------- | -------- | ------- |
| 60264671 | Jan 2001 | US      |
| 60264694 | Jan 2001 | US      |
| 60264672 | Jan 2001 | US      |
| 60264673 | Jan 2001 | US      |
| 60264670 | Jan 2001 | US      |
| 60264669 | Jan 2001 | US      |
| 60264668 | Jan 2001 | US      |
| 60302424 | Jun 2001 | US      |