Claims
- 1. A file server comprising:
a network interface for communicating with one or more clients;
a storage interface for communicating with one or more disk drives;
a data engine configured to communicate with said storage interface to receive file data from said one or more disk drives, said data engine further configured to communicate with said network interface to send file data to said one or more clients; and
a CPU configured to queue transaction requests for said data engine in response to file requests from said one or more clients, said data engine configured to receive file data in response to at least a portion of said transaction requests, said data engine configured to send file data to said one or more clients in response to at least a portion of said transaction requests.
- 2. The file server of claim 1, wherein said network interface comprises one or more Fibre Channel interfaces.
- 3. The file server of claim 1, wherein said storage interface comprises one or more Fibre Channel interfaces.
- 4. The file server of claim 1, wherein said storage interface comprises one or more SCSI interfaces.
- 5. The file server of claim 1, wherein said storage interface comprises one or more IDE interfaces.
- 6. The file server of claim 1, wherein at least one of said storage interface and said network interface comprises an InfiniBand interface.
- 7. The file server of claim 1, further comprising one or more data caches operably connected to said data engine.
- 8. The file server of claim 1, said data engine further configured to generate parity.
- 9. The file server of claim 1, wherein said data engine is further configured to generate exclusive-or parity.
- 10. The file server of claim 1, wherein said data engine is further configured to compute exclusive-or parity for distributed parity groups.
- 11. The file server of claim 1, wherein said data engine is further configured to regenerate lost data at least in part from parity data.
- 12. The file server of claim 1, wherein said transaction requests comprise storage transaction requests and network transaction requests, said storage transaction requests queued to said storage interface and said network transaction requests queued to said network interface.
- 13. The file server of claim 1, wherein each of said transaction requests comprises an opcode.
- 14. The file server of claim 13, wherein said opcode comprises a code to specify at least one of a read from cache, a write to cache, an XOR write to cache, a write to a first cache with an XOR write to a second cache, and a write to said first cache and said second cache.
- 15. The file server of claim 1, wherein said CPU communicates with said storage interface using a PCI bus.
- 16. The file server of claim 15, wherein said CPU queues transactions to said storage interface.
- 17. The file server of claim 1, wherein said CPU communicates with said storage interface using a first PCI bus and said CPU communicates with said network interface using a second PCI bus.
- 18. The file server of claim 17, wherein said CPU queues network transactions to said network interface, said network transactions comprising data flow between at least one of said clients and at least one data cache operably connected to said data engine.
- 19. The file server of claim 17, wherein said CPU queues storage transactions to said storage interface, said storage transactions comprising data flow between at least one of said disk drives and at least one data cache operably connected to said data engine.
- 20. The file server of claim 19, wherein said storage transaction further comprises at least one parity operation.
- 21. The file server of claim 1, further comprising a metadata cache operably connected to said CPU.
- 22. The file server of claim 21, wherein said CPU manages metadata stored in said metadata cache, said metadata comprising directory information that describes a directory structure of at least a portion of a network file system.
- 23. The file server of claim 21, wherein said CPU manages metadata, said metadata configured to describe a directory structure of a portion of a distributed file system, said metadata comprising location information for files catalogued in said directory structure.
- 24. The file server of claim 23, wherein said location information comprises server identifiers that identify respective servers for accessing files catalogued in said directory structure.
- 25. The file server of claim 1, further comprising a metadata cache, said CPU managing metadata stored in said metadata cache, wherein said metadata identifies data blocks stored on one or more of said disk drives and corresponding parity blocks stored on one or more of said disk drives.
- 26. The file server of claim 25, wherein said metadata identifies parity groups, said parity groups comprising a plurality of information blocks, said information blocks comprising one or more data blocks, said information blocks further comprising a parity block, each of said information blocks stored on a different disk drive.
- 27. The file server of claim 26, wherein a size of a first parity group is independent of a size of a second parity group.
- 28. A method of providing file services, comprising:
receiving a file request from a client, said request received by a first processing module;
accessing metadata to locate file data corresponding to said file request, said metadata stored in a metadata cache provided to said first processing module;
queuing at least one storage transaction request to a storage interface, said storage transaction request queued by said first processing module;
storing disk data retrieved as a result of said storage transaction request in a data cache operably connected to a data engine, said data engine operating asynchronously with respect to said first processing module;
queuing one or more network transaction requests to a network interface, said network transaction requests queued by said first processing module upon completion of said at least one storage transaction request; and
sending file data from said data cache to said client according to said network transaction requests, wherein said sending operation is performed asynchronously, with respect to said first processing module, by said data engine and said network interface.
- 29. The method of claim 28, wherein said network interface comprises one or more Fibre Channel interfaces.
- 30. The method of claim 28, wherein said storage interface comprises one or more Fibre Channel interfaces.
- 31. The method of claim 28, wherein said storage interface comprises one or more SCSI interfaces.
- 32. The method of claim 28, wherein said storage interface comprises one or more IDE interfaces.
- 33. The method of claim 28, wherein at least one of said storage interface and said network interface comprises an InfiniBand interface.
- 34. The method of claim 28, wherein said data engine computes parity.
- 35. The method of claim 28, further comprising re-generating lost data using parity processing in said data engine.
- 36. The method of claim 35, wherein said parity processing comprises exclusive-or parity processing.
- 37. The method of claim 28, wherein said storage transaction request comprises an opcode.
- 38. The method of claim 37, wherein said opcode comprises a code to specify at least one of a read from cache, a write to cache, an XOR write to cache, a write to a first cache with an XOR write to a second cache, and a write to said first cache and said second cache.
- 39. The method of claim 28, wherein said first processing module communicates with said storage interface using a PCI bus.
- 40. The method of claim 28, wherein said first processing module communicates with said storage interface using a first PCI bus and said first processing module communicates with said network interface using a second PCI bus.
- 41. The method of claim 28, wherein a storage transaction comprises at least one parity operation.
- 42. The method of claim 28, further comprising modifying metadata stored in said metadata cache, said metadata comprising directory information that describes a directory structure of at least a portion of a network file system.
- 43. The method of claim 28, further comprising modifying metadata stored in said metadata cache, said metadata comprising information for locating data stored in files on one or more storage devices.
- 44. The method of claim 43, wherein said information for locating data comprises one or more server identifiers.
- 45. The method of claim 28, wherein said metadata identifies parity groups, said parity groups comprising data blocks stored on one or more disk drives and corresponding parity blocks stored on another disk drive, said metadata comprising information regarding a disk location for each of said data blocks and for each of said parity blocks.
- 46. The method of claim 28, wherein said metadata identifies parity groups, said parity groups comprising a plurality of information blocks, said information blocks comprising one or more data blocks, said information blocks further comprising a parity block, each of said information blocks stored on a different disk drive, said first processing module assigning disk locations for said information blocks, said data engine accessing said data blocks and generating said parity block.
- 47. The method of claim 46, wherein a size of a first parity group is independent of a size of a second parity group.
- 48. An apparatus comprising:
a network interface for communicating with one or more clients;
a storage interface for communicating with one or more disk drives;
means for receiving file requests from said clients, managing file system metadata, queuing network transaction requests to said network interface, and queuing storage transaction requests to said storage interface; and
means for receiving received data from said storage interface and storing at least a portion of said received data in a data cache according to a first address word containing a first opcode provided from at least one of said queued storage transaction requests, and for sending file data from said data cache to said network interface according to a second address word containing a second opcode provided from at least one of said queued network transaction requests.
- 49. The apparatus of claim 48, said means for receiving received data further computing parity data from said received data.
- 50. A method of providing file services, comprising:
receiving a write request from a client, said write request received by a first processing module;
accessing metadata to locate a file corresponding to said write request, said metadata stored in a metadata cache provided to said first processing module;
queuing a network transaction request to a network interface, said network transaction request queued by said first processing module to retrieve, from said client, data to be written to said file;
storing data retrieved as a result of said network transaction request in a data cache operably connected to a data engine, said data engine operating asynchronously with respect to said first processing module;
queuing at least one storage transaction request to a storage interface, said storage transaction request queued by said first processing module upon completion of said network transaction request; and
sending file data from said data cache to said storage interface according to said storage transaction request, wherein said sending operation is performed by said data engine and said storage interface asynchronously with respect to said first processing module.
- 51. An apparatus comprising:
a network interface for communicating with one or more clients;
a storage interface for communicating with one or more disk drives;
means for receiving write requests from said clients, managing file system metadata, queuing network transaction requests to said network interface, and queuing storage transaction requests to said storage interface; and
means for receiving write data from said network interface and storing at least a portion of said write data in a data cache according to a first address word containing a first opcode provided from at least one of said queued network transaction requests, and for sending file data from said data cache to said storage interface according to a second address word containing a second opcode provided from at least one of said queued storage transaction requests.
- 52. The apparatus of claim 51, said means for receiving write data computing parity data from said write data.
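Claims 13-14, 37-38, 48, and 51 recite transaction requests whose address words carry an opcode selecting among cache operations (read from cache, write to cache, XOR write, and split or dual writes). The claims fix no encoding; the following Python sketch uses hypothetical field widths and opcode values purely to illustrate how such an address word could be packed and decoded.

```python
# Illustrative sketch of an address word embedding an opcode
# (claims 13-14, 37-38, 48, 51). Field widths and opcode values
# are hypothetical; the claims do not specify an encoding.
from enum import IntEnum

class Opcode(IntEnum):
    READ_CACHE = 0x0       # read from cache
    WRITE_CACHE = 0x1      # write to cache
    XOR_WRITE_CACHE = 0x2  # XOR write to cache
    WRITE_XOR_SPLIT = 0x3  # write to first cache, XOR write to second
    WRITE_BOTH = 0x4       # write to first cache and to second cache

OPCODE_BITS = 4            # hypothetical: top 4 bits of a 32-bit word
ADDR_BITS = 32 - OPCODE_BITS
ADDR_MASK = (1 << ADDR_BITS) - 1

def make_address_word(opcode, cache_address):
    """Pack an opcode into the high bits of a 32-bit address word."""
    return (int(opcode) << ADDR_BITS) | (cache_address & ADDR_MASK)

def decode_address_word(word):
    """Split an address word back into (opcode, cache_address)."""
    return Opcode(word >> ADDR_BITS), word & ADDR_MASK

word = make_address_word(Opcode.XOR_WRITE_CACHE, 0x0001_2340)
op, addr = decode_address_word(word)
assert op is Opcode.XOR_WRITE_CACHE and addr == 0x0001_2340
```

Packing the opcode into the address word lets a single queued word tell the data engine both where to touch the cache and which of the five cache operations to perform.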
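Claims 8-11, 34-36, and 45-47 recite a data engine that generates exclusive-or parity over parity groups and regenerates lost data from that parity. As an illustrative sketch only (the claims specify no implementation, and all function names below are hypothetical), XOR parity over a group of equal-size blocks can be computed and used for recovery as follows.

```python
# Illustrative sketch of XOR parity generation and regeneration
# for a parity group (claims 8-11, 34-36, 45-47). All names are
# hypothetical; the claims do not specify an implementation.

def generate_parity(data_blocks):
    """XOR all data blocks together to produce the parity block."""
    parity = bytearray(len(data_blocks[0]))
    for block in data_blocks:
        for i, byte in enumerate(block):
            parity[i] ^= byte
    return bytes(parity)

def regenerate_lost_block(surviving_blocks, parity_block):
    """Recover a single lost block by XOR-ing the survivors with parity."""
    lost = bytearray(parity_block)
    for block in surviving_blocks:
        for i, byte in enumerate(block):
            lost[i] ^= byte
    return bytes(lost)

# A parity group per claim 26: data blocks plus one parity block,
# each stored on a different disk drive.
blocks = [b"\x01\x02", b"\x10\x20", b"\x0f\x0f"]
parity = generate_parity(blocks)

# Simulate losing the second block and rebuilding it from the rest.
recovered = regenerate_lost_block([blocks[0], blocks[2]], parity)
assert recovered == blocks[1]
```

Because XOR is its own inverse, the same operation serves both generation and regeneration, which is why a single XOR-write cache opcode can support both paths; and since each group carries its own parity block, one group's size can differ from another's (claims 27 and 47).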
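Claims 28 and 50 recite an ordering in which the first processing module queues one transaction, waits for its completion, then queues the follow-on transaction, while the data engine moves the file data asynchronously through a data cache. The toy Python sketch below (queues, transaction dictionaries, and cache slots are hypothetical simplifications, not the patented design) illustrates that split between the control path and the data path for the read case of claim 28.

```python
# Toy sketch of the claim-28 read path: the first processing module
# queues a storage transaction, waits for completion, then queues a
# network transaction; the "data engine" threads move data via a cache.
# All structures here are hypothetical simplifications.
import queue
import threading

storage_queue = queue.Queue()   # transactions queued to the storage interface
network_queue = queue.Queue()   # transactions queued to the network interface
data_cache = {}                 # data cache operably connected to the data engine
sent_to_client = []

def storage_engine():
    """Data engine + storage interface: fetch disk data into the cache."""
    txn = storage_queue.get()
    data_cache[txn["cache_slot"]] = f"data-from-disk-{txn['block']}"
    txn["done"].set()           # signal completion back to the CPU

def network_engine():
    """Data engine + network interface: send cached data to the client."""
    txn = network_queue.get()
    sent_to_client.append(data_cache[txn["cache_slot"]])
    txn["done"].set()

def cpu_handle_file_request(block):
    """First processing module: orchestrates, never touches file data."""
    storage_done = threading.Event()
    storage_queue.put({"block": block, "cache_slot": 0, "done": storage_done})
    storage_done.wait()         # network txn queued only after storage completes
    network_done = threading.Event()
    network_queue.put({"cache_slot": 0, "done": network_done})
    network_done.wait()

threading.Thread(target=storage_engine).start()
threading.Thread(target=network_engine).start()
cpu_handle_file_request(block=7)
assert sent_to_client == ["data-from-disk-7"]
```

The write path of claim 50 is the mirror image: the network transaction (pulling data from the client into the cache) completes first, and only then is the storage transaction queued to flush the cached data to disk.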
REFERENCE TO RELATED APPLICATIONS
[0001] The present application claims priority benefit under 35 U.S.C. §119(e) from all of the following U.S. Provisional Applications, the contents of which are hereby incorporated by reference in their entirety:
[0002] U.S. Provisional Application No. 60/264671, filed Jan. 29, 2001, titled “DYNAMICALLY DISTRIBUTED FILE SYSTEM”;
[0003] U.S. Provisional Application No. 60/264694, filed Jan. 29, 2001, titled “A DATA PATH ACCELERATOR ASIC FOR HIGH PERFORMANCE STORAGE SYSTEMS”;
[0004] U.S. Provisional Application No. 60/264672, filed Jan. 29, 2001, titled “INTEGRATED FILE SYSTEM/PARITY DATA PROTECTION”;
[0005] U.S. Provisional Application No. 60/264673, filed Jan. 29, 2001, titled “DISTRIBUTED PARITY DATA PROTECTION”;
[0006] U.S. Provisional Application No. 60/264670, filed Jan. 29, 2001, titled “AUTOMATIC IDENTIFICATION AND UTILIZATION OF RESOURCES IN A DISTRIBUTED FILE SERVER”;
[0007] U.S. Provisional Application No. 60/264669, filed Jan. 29, 2001, titled “DATA FLOW CONTROLLER ARCHITECTURE FOR HIGH PERFORMANCE STORAGE SYSTEMS”;
[0008] U.S. Provisional Application No. 60/264668, filed Jan. 29, 2001, titled “ADAPTIVE LOAD BALANCING FOR A DISTRIBUTED FILE SERVER”; and
[0009] U.S. Provisional Application No. 60/302424, filed Jun. 29, 2001, titled “DYNAMICALLY DISTRIBUTED FILE SYSTEM”.
Provisional Applications (8)

| Number | Date | Country |
| --- | --- | --- |
| 60/264671 | Jan 2001 | US |
| 60/264694 | Jan 2001 | US |
| 60/264672 | Jan 2001 | US |
| 60/264673 | Jan 2001 | US |
| 60/264670 | Jan 2001 | US |
| 60/264669 | Jan 2001 | US |
| 60/264668 | Jan 2001 | US |
| 60/302424 | Jun 2001 | US |