Claims
- 1. A data interface comprising:
  a network-side interface for communicating with a first bus;
  a storage-side interface for communicating with a second bus;
  a first data cache;
  a second data cache;
  a first parity engine configured to perform parity operations for data transactions between said network-side interface and said first data cache;
  a second parity engine configured to perform parity operations for data transactions between said storage-side interface and said first data cache;
  a third parity engine configured to perform parity operations for data transactions between said network-side interface and said second data cache;
  a fourth parity engine configured to perform parity operations for data transactions between said storage-side interface and said second data cache; and
  control logic configured to manage said data transactions between said network-side interface and said first data cache, data transactions between said network-side interface and said second data cache, data transactions between said storage-side interface and said first data cache, and data transactions between said storage-side interface and said second data cache.
- 2. The data interface of claim 1, wherein said first bus comprises a PCI bus.
- 3. The data interface of claim 1, wherein said second bus comprises a PCI bus.
- 4. The data interface of claim 1, wherein said first bus comprises a PCI bus and said second bus comprises a PCI bus.
- 5. The data interface of claim 1, wherein said control logic interprets various bits in a PCI address as opcode bits.
- 6. The data interface of claim 1, wherein said control logic interprets various bits in a PCI address as parity index bits.
- 7. The data interface of claim 1, wherein said control logic interprets various bits in a PCI address as block size bits.
- 8. The data interface of claim 1, wherein said control logic interprets a first set of bits in a PCI address as opcode bits and a second set of bits in said PCI address as cache address bits.
- 9. A data interface comprising:
  a means for communicating with a first bus;
  a means for communicating with a second bus;
  a first data cache;
  a second data cache;
  a first parity engine configured to perform parity operations for data transactions between said first bus and said first data cache;
  a second parity engine configured to perform parity operations for data transactions between said second bus and said first data cache;
  a third parity engine configured to perform parity operations for data transactions between said first bus and said second data cache;
  a fourth parity engine configured to perform parity operations for data transactions between said second bus and said second data cache; and
  control logic configured to manage said data transactions between said first bus and said first data cache, data transactions between said first bus and said second data cache, data transactions between said second bus and said first data cache, and data transactions between said second bus and said second data cache.
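
The claims above recite structure, not an implementation. As a non-authoritative sketch only, the following C fragment models the claim 1 topology (two bus-side interfaces, two data caches, and one parity engine on each of the four interface-to-cache paths) and one possible way control logic could interpret PCI address bits as opcode, parity index, block size, and cache address fields (claims 5-8). All type names, field positions, and widths are assumptions introduced for illustration; the claims do not specify any particular layout.

```c
#include <stdint.h>
#include <stdio.h>

/* Illustrative model of the claim 1 topology.  Names are assumptions,
 * not claim language. */
struct parity_engine { uint64_t accumulator; };

struct data_engine {
    struct parity_engine net_to_cache0;      /* first parity engine   */
    struct parity_engine storage_to_cache0;  /* second parity engine  */
    struct parity_engine net_to_cache1;      /* third parity engine   */
    struct parity_engine storage_to_cache1;  /* fourth parity engine  */
    uint8_t *cache0;                         /* first data cache      */
    uint8_t *cache1;                         /* second data cache     */
};

/* Hypothetical PCI-address field layout for claims 5-8: 4 opcode bits,
 * 4 parity-index bits, 4 block-size bits, 20 cache-address bits. */
struct decoded_pci_address {
    unsigned opcode;        /* operation requested of the data engine */
    unsigned parity_index;  /* which parity buffer/stripe to use      */
    unsigned block_size;    /* encoded transfer block size            */
    uint32_t cache_addr;    /* offset within the selected data cache  */
};

/* Decode a 32-bit PCI address into the fields the control logic
 * would act on under the assumed layout. */
static struct decoded_pci_address decode_pci_address(uint32_t addr)
{
    struct decoded_pci_address d;
    d.opcode       = (addr >> 28) & 0xFu;
    d.parity_index = (addr >> 24) & 0xFu;
    d.block_size   = (addr >> 20) & 0xFu;
    d.cache_addr   =  addr        & 0x000FFFFFu;
    return d;
}

int main(void)
{
    struct decoded_pci_address d = decode_pci_address(0x21F00040u);
    printf("opcode=%u parity_index=%u block_size=%u cache_addr=0x%05X\n",
           d.opcode, d.parity_index, d.block_size, (unsigned)d.cache_addr);
    return 0;
}
```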
REFERENCE TO RELATED APPLICATIONS
[0001] The present application claims priority benefit under 35 U.S.C. §119(e) from all of the following U.S. Provisional Applications, the contents of which are hereby incorporated by reference in their entirety:
[0002] U.S. Provisional Application No. 60/264671, filed Jan. 29, 2001, titled “DYNAMICALLY DISTRIBUTED FILE SYSTEM”;
[0003] U.S. Provisional Application No. 60/264694, filed Jan. 29, 2001, titled “A DATA PATH ACCELERATOR ASIC FOR HIGH PERFORMANCE STORAGE SYSTEMS”;
[0004] U.S. Provisional Application No. 60/264672, filed Jan. 29, 2001, titled “INTEGRATED FILE SYSTEM/PARITY DATA PROTECTION”;
[0005] U.S. Provisional Application No. 60/264673, filed Jan. 29, 2001, titled “DISTRIBUTED PARITY DATA PROTECTION”;
[0006] U.S. Provisional Application No. 60/264670, filed Jan. 29, 2001, titled “AUTOMATIC IDENTIFICATION AND UTILIZATION OF RESOURCES IN A DISTRIBUTED FILE SERVER”;
[0007] U.S. Provisional Application No. 60/264669, filed Jan. 29, 2001, titled “DATA FLOW CONTROLLER ARCHITECTURE FOR HIGH PERFORMANCE STORAGE SYSTEMS”;
[0008] U.S. Provisional Application No. 60/264668, filed Jan. 29, 2001, titled “ADAPTIVE LOAD BALANCING FOR A DISTRIBUTED FILE SERVER”; and
[0009] U.S. Provisional Application No. 60/302424, filed Jun. 29, 2001, titled “DYNAMICALLY DISTRIBUTED FILE SYSTEM”.
Provisional Applications (8)

| Number   | Date      | Country |
|----------|-----------|---------|
| 60264671 | Jan. 2001 | US      |
| 60264694 | Jan. 2001 | US      |
| 60264672 | Jan. 2001 | US      |
| 60264673 | Jan. 2001 | US      |
| 60264670 | Jan. 2001 | US      |
| 60264669 | Jan. 2001 | US      |
| 60264668 | Jan. 2001 | US      |
| 60302424 | Jun. 2001 | US      |