Claims
- 1. An apparatus for optimizing path compression of at least one single entry trie table in a data structure in a pipelined hardware bitmapped multi-bit trie algorithmic network search engine wherein said data structure comprises at least one parent trie table entry for said at least one single entry trie table, said apparatus comprising:
at least one PC logic unit coupled to at least one pipeline logic stage of said network search engine, wherein said at least one PC logic unit is capable of encoding in said at least one parent trie table entry a path compression pattern that represents common prefix bits of a data packet; wherein said at least one PC logic unit is further capable of encoding in said at least one parent trie table entry a skip count that represents a length of said path compression pattern; wherein said at least one PC logic unit is further capable of eliminating said at least one single entry trie table from said data structure by utilizing said path compression pattern and said skip count.
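The data-structure idea in claim 1 can be sketched in software (a minimal sketch: the field names `pc_pattern`, `skip_count`, and `child_ptr`, the 4-bit stride width, and the 32-bit packing are illustrative assumptions, not taken from the specification). The parent entry carries the common prefix bits and their length in strides, so the chain of single entry trie tables that would otherwise match those bits one stride at a time can be eliminated.

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative parent-entry layout: the path compression (PC)
 * pattern holds the common prefix bits that the eliminated
 * single entry trie tables would have matched one stride at a
 * time; the skip count gives the pattern length in strides. */
#define STRIDE_BITS 4

typedef struct {
    uint32_t pc_pattern; /* packed strides, most significant first */
    uint8_t  skip_count; /* pattern length in strides */
    uint32_t child_ptr;  /* next trie table after the skipped path */
} parent_entry;

/* Match `key` against the PC pattern; on success the search jumps
 * straight to child_ptr instead of walking skip_count tables. */
static int pc_match(const parent_entry *e, uint32_t key)
{
    uint32_t bits = (uint32_t)e->skip_count * STRIDE_BITS;
    uint32_t mask = (bits >= 32) ? 0xFFFFFFFFu : ~(0xFFFFFFFFu >> bits);
    return (key & mask) == (e->pc_pattern & mask);
}
```

On a match the search proceeds directly past the compressed path, which is what allows the intermediate single entry tables to be removed from the data structure.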
- 2. The apparatus as claimed in claim 1 wherein said at least one parent trie table entry in said data structure comprises:
a first trie table entry that comprises a pattern data field, a pointer data field and a code data field for a path compression pattern having a skip count of n strides; and wherein said path compression pattern is embedded in said pattern data field of said first trie table entry.
- 3. The apparatus as claimed in claim 2 wherein said at least one parent trie table entry in said data structure comprises:
a skip count of n strides; and wherein said skip count of n strides is embedded in said pattern data field of said first trie table entry.
- 4. The apparatus as claimed in claim 2 wherein said at least one parent trie table entry in said data structure comprises:
a second trie table entry that comprises a pattern data field, a count data field, a pointer data field and a code data field for a path compression pattern having a skip count from one stride to (n−1) strides; and wherein said path compression pattern is embedded in said pattern data field of said second trie table entry.
- 5. The apparatus as claimed in claim 4 wherein said at least one parent trie table entry in said data structure comprises:
a skip count from one stride to (n−1) strides; and wherein said skip count from one stride to (n−1) strides is embedded in said count data field of said second trie table entry.
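Claims 2 through 5 describe two parent-entry formats, which can be sketched as C structs (field widths, field order, and the value of n are assumptions, not from the specification): the first format, for a skip count of exactly n strides, embeds both the pattern and the count in the pattern data field, while the second format carries a skip count of one to (n−1) strides in a separate count data field.

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative layouts for the two parent-entry formats of
 * claims 2-5; the code field distinguishes the entry types. */
#define N_STRIDES 6 /* assumed maximum skip count */

typedef struct {        /* skip count of exactly n strides */
    uint32_t pattern;   /* PC pattern; per claim 3 the n-stride
                           skip count is embedded here as well */
    uint32_t pointer;   /* child table pointer */
    uint8_t  code;      /* entry-type code */
} first_entry;

typedef struct {        /* skip count of 1 .. (n-1) strides */
    uint32_t pattern;   /* PC pattern */
    uint8_t  count;     /* skip count in its own count field */
    uint32_t pointer;   /* child table pointer */
    uint8_t  code;      /* entry-type code */
} second_entry;
```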
- 6. An apparatus for optimizing path compression of at least one single entry trie table in a data structure in a pipelined hardware bitmapped multi-bit trie algorithmic network search engine wherein said data structure comprises at least one parent trie table entry for said at least one single entry trie table, said at least one parent trie table entry comprising a path compression pattern that represents common prefix bits of a data packet and a skip count that represents a length of said path compression pattern, said apparatus comprising:
at least one PC logic unit coupled to at least one pipeline logic stage of said network search engine that is capable of detecting a path compression optimization; and wherein said at least one PC logic unit, in response to detecting said path compression optimization, is further capable of (1) suppressing a memory read operation to a memory bank associated with said at least one PC logic unit, (2) updating a value of said skip count, and (3) sending an unprocessed portion of said path compression pattern and an updated value of said skip count to a next stage of said pipeline network search engine.
- 7. The apparatus as claimed in claim 6 wherein:
said at least one PC logic unit coupled to at least one pipeline logic stage of said network search engine is capable of determining that a value of said skip count equals zero; wherein said at least one PC logic unit, in response to determining that a value of said skip count equals zero, is further capable of (1) reading memory data from a memory bank associated with said at least one PC logic unit for which said skip count equals zero, and (2) providing said memory data to a next stage of said pipeline network search engine.
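Claims 6 and 7 together describe per-stage behavior that can be sketched as a small state machine (a sketch under assumed names `stage_state` and `pc_stage`; the 4-bit stride is also an assumption): while the skip count is nonzero, a stage suppresses its memory read, consumes one stride of the pattern, and decrements the count; when the count reaches zero, the stage performs a normal memory read.

```c
#include <assert.h>
#include <stdint.h>

#define STRIDE_BITS 4

/* Per-stage state handed down the pipeline (names assumed). */
typedef struct {
    uint32_t pattern;    /* unprocessed PC pattern, MSB-aligned */
    uint8_t  skip_count; /* strides of pattern still to skip */
    int      did_read;   /* 1 if this stage issued a memory read */
} stage_state;

/* One pipeline stage, as sketched from claims 6-7: a nonzero
 * skip count suppresses the memory-bank read, consumes one
 * stride, and forwards the remainder; a zero count reads the
 * bank normally. */
static stage_state pc_stage(stage_state in)
{
    stage_state out = in;
    if (in.skip_count > 0) {
        out.did_read = 0;            /* suppress the memory read */
        out.pattern <<= STRIDE_BITS; /* consume one stride */
        out.skip_count = in.skip_count - 1;
    } else {
        out.did_read = 1;            /* skip count zero: read bank */
    }
    return out;
}
```

Chaining `pc_stage` calls models claim 11's plurality of PC logic units processing the unprocessed pattern one stride at a time in subsequent stages.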
- 8. The apparatus as claimed in claim 7 wherein said apparatus is capable of placing a trie table that represents a result of said path compression process into a memory bank into which an original trie table was to be placed before said path compression process was performed.
- 9. The apparatus as claimed in claim 1 wherein during said path compression process said apparatus is capable of minimizing utilization of one of: memory space, memory bandwidth and power consumption.
- 10. The apparatus as claimed in claim 6 wherein said at least one PC logic unit, in response to detecting said path compression optimization, is further capable of skipping said at least one stage of said pipeline network search engine associated with said at least one single entry trie table.
- 11. The apparatus as claimed in claim 6 comprising a plurality of PC logic units within said pipelined hardware bitmapped multi-bit trie algorithmic network search engine, wherein said plurality of PC logic units are capable of processing said unprocessed portion of said path compression pattern one stride at a time in subsequent pipeline stages of said network search engine.
- 12. A method for optimizing path compression of at least one single entry trie table in a data structure in a pipelined hardware bitmapped multi-bit trie algorithmic network search engine wherein said data structure comprises at least one parent trie table entry for said at least one single entry trie table, said method comprising the steps of:
embedding in said at least one parent trie table entry a path compression pattern that represents common prefix bits of a data packet; embedding in said at least one parent trie table entry a skip count that represents a length of said path compression pattern; and eliminating said at least one single entry trie table from said data structure utilizing said path compression pattern and said skip count.
- 13. The method as claimed in claim 12 wherein said step of embedding in said at least one parent trie table entry a path compression pattern that represents common prefix bits of a data packet comprises the steps of:
providing in said at least one parent trie table entry a first trie table entry that comprises a pattern data field, a pointer data field and a code data field for a path compression pattern having a skip count of n strides; and embedding said path compression pattern in said pattern data field of said first trie table entry.
- 14. The method as claimed in claim 13 further comprising the steps of:
providing in said at least one parent trie table entry a skip count of n strides; and embedding said skip count of n strides in said pattern data field of said first trie table entry.
- 15. The method as claimed in claim 13 further comprising the steps of:
providing in said at least one parent trie table entry a second trie table entry that comprises a pattern data field, a count data field, a pointer data field and a code data field for a path compression pattern having a skip count from one stride to (n−1) strides; and embedding said path compression pattern having a skip count from one stride to (n−1) strides in said pattern data field of said second trie table entry.
- 16. The method as claimed in claim 15 further comprising the steps of:
providing in said at least one parent trie table entry a skip count from one stride to (n−1) strides; and embedding said skip count from one stride to (n−1) strides in said count data field of said second trie table entry.
- 17. The method as claimed in claim 12 wherein said step of eliminating said at least one single entry trie table from said data structure utilizing said path compression pattern and said skip count comprises the steps of:
searching for a path compression optimization in a plurality of stages of said pipeline network search engine; detecting said path compression optimization in one stage of said pipeline network search engine; suppressing a memory read operation to a memory bank associated with said one stage of said pipeline network search engine; updating a value of said skip count; and sending an unprocessed portion of said path compression pattern and an updated value of said skip count to a next stage of said pipeline network search engine.
- 18. The method as claimed in claim 17 further comprising the steps of:
determining in a stage of said pipeline network search engine that a value of said skip count equals zero; reading memory data from a memory bank associated with said stage of said pipeline network search engine for which said skip count equals zero; and providing said memory data to a next stage of said pipeline network search engine.
- 19. The method as claimed in claim 18 further comprising the step of:
placing a trie table that represents a result of said path compression process into a memory bank into which an original trie table was to be placed before said path compression process was performed.
- 20. The method as claimed in claim 17 further comprising the steps of:
providing a plurality of PC logic units within said pipelined hardware bitmapped multi-bit trie algorithmic network search engine; and processing said unprocessed portion of said path compression pattern one stride at a time in subsequent pipeline stages of said network search engine.
- 21. The method as claimed in claim 17 further comprising the steps of:
detecting a path compression candidate in a stage of said pipeline network search engine; determining that a path compression optimization with a skip count of N is possible for said path compression candidate; and updating a prefix table in said pipeline network search engine by placing N plus one strides of pattern from a prefix into a current node at a current stage of said pipeline network search engine.
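The prefix-table update in claim 21 can be illustrated with a short helper (hypothetical function `place_strides`; the 4-bit stride and 32-bit prefix are assumptions): given a path compression optimization with skip count N, the first N + 1 strides of the prefix pattern are kept, MSB-aligned, for placement into the node at the current pipeline stage.

```c
#include <assert.h>
#include <stdint.h>

#define STRIDE_BITS 4

/* Keep (n_skip + 1) strides of the prefix, MSB-aligned, as the
 * pattern to place into the current node (claim 21 sketch). */
static uint32_t place_strides(uint32_t prefix, uint8_t n_skip)
{
    uint32_t bits = (uint32_t)(n_skip + 1) * STRIDE_BITS;
    uint32_t mask = (bits >= 32) ? 0xFFFFFFFFu : ~(0xFFFFFFFFu >> bits);
    return prefix & mask;
}
```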
- 22. A method for minimizing utilization of memory space and memory bandwidth in a pipelined hardware bitmapped multi-bit trie algorithmic network search engine, said method comprising the steps of:
optimizing path compression of at least one single entry trie table in a data structure in said network search engine wherein said data structure comprises at least one parent trie table entry for said at least one single entry trie table; embedding in said at least one parent trie table entry a path compression pattern that represents common prefix bits of a data packet; embedding in said at least one parent trie table entry a skip count that represents a length of said path compression pattern; and eliminating said at least one single entry trie table from said data structure utilizing said path compression pattern and said skip count.
- 23. The method as claimed in claim 22 wherein said step of eliminating said at least one single entry trie table from said data structure utilizing said path compression pattern and said skip count comprises the steps of:
searching for a path compression optimization in a plurality of stages of said pipeline network search engine; detecting said path compression optimization in at least one stage of said pipeline network search engine; and skipping said at least one stage of said pipeline network search engine associated with said at least one single entry trie table.
CROSS-REFERENCE TO RELATED APPLICATIONS
[0001] The present invention is related to those disclosed in the following U.S. non-provisional patent applications:
[0002] patent application Ser. No. [Docket No. 02-LJ-062], filed on Dec. 6, 2002, entitled “Apparatus And Method Of Using Fully Configurable Memory, Multi-Stage Pipeline Logic And An Embedded Processor to Implement Multi-bit Trie Algorithmic Network Search Engine”;
[0003] patent application Ser. No. [Docket No. 02-LJ-063], filed on Dec. 6, 2002, entitled “Method For Increasing Average Storage Capacity In A Bit-Mapped Tree-Based Storage Engine By Using Remappable Prefix Representations And A Run-Length Encoding Scheme That Defines Multi-Length Fields To Compactly Store IP Addresses”;
[0004] patent application Ser. No. [Docket No. 02-LJ-064], filed on Dec. 6, 2002, entitled “Method For Increasing Storage Capacity In A Multi-Bit Trie-Based Hardware Storage Engine By Compressing The Representation Of Single-Length Prefixes”; and
[0005] patent application Ser. No. [Docket No. 02-LJ-066], filed on Dec. 6, 2002, entitled “A Mechanism To Reduce Lookup Latency In A Pipelined Hardware Implementation Of A Trie-Based IP Lookup Algorithm”.