Claims
- 1. A processor, coupled to a host, comprising:
a first interface configured to receive a network message sent to said host, wherein said network message has already been processed in OSI levels 1-4;
an engine configured to perform at least some processing of said network message above OSI level four; and
a second interface configured to provide results of said processing to said host.
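The apparatus of claim 1 can be sketched in a few lines: a processor whose first interface receives a message already terminated at OSI layers 1-4, whose engine does layer-5-and-above work, and whose second interface hands the result to the host. This is an illustrative Python sketch only; every class, method, and field name here is hypothetical and not part of the claims.

```python
# Hypothetical sketch of the claim-1 architecture. Layers 1-4 (e.g. TCP/IP)
# are assumed already terminated before first_interface_receive() is called.

class OffloadProcessor:
    def __init__(self, engine, host):
        self.engine = engine   # performs processing above OSI level four
        self.host = host       # destination reached via the second interface

    def first_interface_receive(self, message):
        # first interface: accept a message whose layers 1-4 are done
        result = self.engine.process_above_l4(message)
        self.second_interface_provide(result)

    def second_interface_provide(self, result):
        # second interface: provide results of the processing to the host
        self.host.append(result)


class HeaderParseEngine:
    # stand-in "above layer four" processing: split a trivial header
    def process_above_l4(self, message):
        header, _, payload = message.partition(b"\r\n")
        return {"header": header, "payload": payload}


host_results = []
proc = OffloadProcessor(HeaderParseEngine(), host_results)
proc.first_interface_receive(b"GET /meta\r\ndata")
```

The host sees only the parsed result, never the raw layer-4 stream, which is the division of labor the claim recites.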
- 2. The processor of claim 1 further comprising:
a third interface configured to provide said results to a remote processor other than said host.
- 3. The processor of claim 1 wherein said processor is a pre-processor, said first interface is an interface to a TCP-IP offload engine, and said second interface is an interface to said host.
- 4. The processor of claim 1 wherein said processor is a co-processor and said first and second interfaces are part of a host interface.
- 5. The processor of claim 1 further comprising:
a cache memory interface; and wherein said processor is configured to access meta data in a cache memory.
- 6. The processor of claim 5 wherein said processor is further configured to access data in said cache memory.
- 7. The processor of claim 1 wherein said processor is further configured to look up meta data and pass said meta data to said host.
- 8. The processor of claim 1 wherein said processor is further configured to parse a header in said message.
- 9. The processor of claim 1 further comprising an interface to a co-processor.
- 10. The processor of claim 9 wherein said co-processor is a security processor.
- 11. The processor of claim 1 wherein said engine is configured to completely process and return certain messages without forwarding said certain messages to said host.
- 12. The processor of claim 1 wherein said engine is configured to communicate with said host by writing to and reading from a commonly accessible control and data buffer.
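The shared control-and-data buffer of claim 12 amounts to a memory region that both engine and host can read and write. A minimal sketch, assuming a simple FIFO discipline; the queue, the field names, and the two functions are hypothetical illustrations, not the claimed structure.

```python
# Hypothetical sketch of claim 12: engine and host exchanging work through
# a commonly accessible control-and-data buffer. A deque stands in for the
# shared memory region.
from collections import deque

shared_buffer = deque()  # commonly accessible control and data buffer


def engine_write(control, data):
    # engine side: post a control word plus its data payload
    shared_buffer.append({"control": control, "data": data})


def host_read():
    # host side: consume the oldest entry
    return shared_buffer.popleft()


engine_write("MSG_READY", b"payload")
entry = host_read()
```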
- 13. A processor, coupled to a host, comprising:
a first interface configured to receive network messages sent to said host;
an engine configured to perform all processing of certain of said network messages above OSI level four; and
wherein said engine is further configured to completely process and return said certain of said network messages without forwarding said certain messages to said host.
- 14. The processor of claim 13 wherein at least one of said certain of said network messages involves accessing of meta data, but not data pointed to by said meta data.
- 15. The processor of claim 13 wherein said processor is a pre-processor, said first interface is an interface to a TCP-IP offload engine, and further comprising a second interface to said host.
- 16. The processor of claim 13 wherein said processor is a co-processor and said first interface is part of a host interface.
- 17. The processor of claim 13 further comprising:
a cache memory interface; and wherein said processor is configured to access meta data in a cache memory.
- 18. The processor of claim 17 wherein said processor is further configured to access data in said cache memory.
- 19. The processor of claim 13 wherein said processor is configured to look up meta data and return said meta data to an originator of said network message.
- 20. The processor of claim 13 wherein said processor is further configured to parse a header in said message.
- 21. The processor of claim 13 further comprising an interface to a co-processor.
- 22. The processor of claim 21 wherein said co-processor is a security processor.
- 23. A pre-processor, coupled to a host, comprising:
a TCP-IP offload engine (TOE) interface configured to receive a network message sent to said host;
an engine configured to perform at least some processing of said network message above OSI level four, including accessing meta data in a cache memory and parsing a header in said message;
a host interface configured to provide results of said processing to said host; and
a cache memory interface.
- 24. The pre-processor of claim 23 wherein said engine is configured to pass certain messages between said TOE and said host without modification so that said pre-processor is transparent to said TOE.
- 25. A host comprising:
a network interface;
a processor configured to receive messages from a network including a processed header above OSI level four and meta data looked up by an external processor;
said processor being further configured to respond to said message using said processed header and said looked up meta data.
- 26. The host of claim 25 further comprising:
a first driver configured to communicate with said external processor; and
a second driver configured to communicate with a TCP/IP offload engine (TOE).
- 27. The host of claim 26 wherein said second driver is configured to communicate with said TOE through said external processor.
- 28. A method for processing, in an engine offloaded from a host, a network message sent to said host, comprising:
examining said network message to determine if it relates to a data access;
if said network message does not relate to a data access, passing said network message through to said host;
if said network message does relate to a data access, processing at least a portion of said message above OSI level four in said engine.
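The dispatch recited in claim 28 can be sketched as a two-way branch: messages unrelated to a data access pass through to the host untouched, while data-access messages are processed above layer four in the offload engine. This is an illustrative Python sketch; the `is_data_access` predicate, the `DATA_ACCESS_OPS` set, and all other names are hypothetical.

```python
# Hypothetical sketch of the claim-28 method: examine, then either pass
# through to the host or process above OSI level four in the engine.
DATA_ACCESS_OPS = {"READ", "WRITE", "LOOKUP"}


def is_data_access(message):
    # the examining step: does this message relate to a data access?
    return message["op"] in DATA_ACCESS_OPS


def handle_inbound(message, host_queue, engine_process):
    if not is_data_access(message):
        host_queue.append(message)      # pass through to the host
        return None
    return engine_process(message)      # process above OSI level four


host_queue = []
result = handle_inbound({"op": "READ", "path": "/a"},
                        host_queue,
                        lambda m: {"processed": m["op"]})
handle_inbound({"op": "PING"}, host_queue, lambda m: None)
```

Only the second message reaches the host queue; the first is consumed entirely in the engine.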
- 29. The method of claim 28 further comprising:
passing network messages not relating to a data access and partially processed network messages through to said host using DMA and an interrupt.
- 30. The method of claim 29 wherein said network message is received by said engine from a TOE, and further comprising:
generating, in said engine, an acknowledgment message for said TOE.
- 31. The method of claim 28 wherein said processing comprises one of
preprocessing, coprocessing, or completely processing said network message.
- 32. The method of claim 28 wherein said processing comprises looking up meta data in a cache memory.
- 33. The method of claim 28 wherein said processing comprises parsing a header in said message.
- 34. The method of claim 28 wherein said engine includes a firmware layer and an application layer, the operation of said firmware layer comprising:
allocating network receive buffers;
allocating PE acknowledge buffers;
allocating NAS transmit buffers; and
allocating socket transmit buffers.
- 35. The method of claim 28 wherein said engine includes a firmware layer and an application layer, the operation of said application layer comprising:
allocating socket receive buffers;
allocating HOST acknowledge buffers;
allocating HOST receive buffers;
allocating NAS response buffers;
allocating SRAM receive buffers;
allocating SRAM transmit buffers; and
providing FIFO pointers associated with said buffers.
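Claims 34 and 35 split buffer ownership between a firmware layer and an application layer, with the application layer also exposing FIFO pointers for its buffers. The sketch below illustrates that split; the pool names follow the claims, but the pool sizes, buffer sizes, and head/tail representation of a FIFO pointer are arbitrary illustrative choices.

```python
# Hypothetical sketch of claims 34-35: per-layer named buffer pools, plus
# FIFO pointers (modeled as head/tail indices) for the application layer.

def make_pools(names, pool_size=4, buf_size=64):
    # each named pool holds pool_size preallocated buffers
    return {name: [bytearray(buf_size) for _ in range(pool_size)]
            for name in names}


# firmware-layer allocations (claim 34)
firmware_pools = make_pools(
    ["network_receive", "pe_acknowledge", "nas_transmit", "socket_transmit"])

# application-layer allocations (claim 35)
app_pools = make_pools(
    ["socket_receive", "host_acknowledge", "host_receive",
     "nas_response", "sram_receive", "sram_transmit"])

# FIFO pointers associated with the application-layer buffers
fifo_pointers = {name: {"head": 0, "tail": 0} for name in app_pools}
```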
- 36. A method for processing, in an engine offloaded from a host, a network message response sent from said host to a network, comprising:
examining said network message response to determine if it relates to a data access;
if said network message response does not relate to a data access, passing said network message response through to said network;
if said network message response does relate to a data access, examining said response to determine if post-processing of said response is needed;
if post-processing is needed, post-processing at least a portion of said message above OSI level four in said engine, then passing said network message response through to said network.
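The outbound path of claim 36 mirrors the inbound method of claim 28, with one extra test: a data-access response is post-processed above layer four only when it needs it, and every response ultimately reaches the network. An illustrative Python sketch; the two predicates and the `postprocess` step are hypothetical placeholders.

```python
# Hypothetical sketch of the claim-36 method: examine an outbound response,
# optionally post-process it above OSI level four, then pass it to the
# network either way.

def handle_outbound(response, network_queue,
                    relates_to_data_access, needs_postprocessing, postprocess):
    if relates_to_data_access(response) and needs_postprocessing(response):
        # post-process at least a portion of the message above level four
        response = postprocess(response)
    network_queue.append(response)  # pass through to the network


network_queue = []
handle_outbound(
    {"op": "READ_RESP", "raw": True},
    network_queue,
    relates_to_data_access=lambda r: r["op"].endswith("_RESP"),
    needs_postprocessing=lambda r: r.get("raw", False),
    postprocess=lambda r: {**r, "raw": False},
)
```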
CROSS-REFERENCES TO RELATED APPLICATIONS
[0001] This application is a continuation-in-part of and claims priority from U.S. application Ser. No. 10/248,029, filed Dec. 12, 2002, and also claims priority from Provisional Application Nos. 60/437,809 and 60/437,944, both filed on Jan. 2, 2003, all of which are incorporated herein by reference.
Provisional Applications (2)
| Number | Date | Country |
| --- | --- | --- |
| 60437809 | Jan 2003 | US |
| 60437944 | Jan 2003 | US |
Continuation in Parts (1)
| Number | Date | Country |
| --- | --- | --- |
| Parent 10248029 | Dec 2002 | US |
| Child 10352800 | Jan 2003 | US |