Claims
- 1. An architecture for a network search engine (NSE) comprising:
one or more levels of a splitting engine configured for dividing a database of prefix entries into multiple sub-databases, each bounded in size between a minimum value and a maximum value; and
an array of data processing units (DPUs) coupled to the splitting engine for storing the multiple sub-databases.
- 2. The architecture of claim 1, wherein the one or more levels of the splitting engine are further configured for forming a hierarchical tree structure of the database, wherein the hierarchical tree structure comprises a plurality of nodes extending between a root node and a plurality of leaf nodes, and wherein each of the plurality of leaf nodes corresponds to one of the multiple sub-databases of prefix entries.
- 3. The architecture of claim 2, wherein the one or more levels of the splitting engine form the hierarchical tree structure by assigning a pointer entry to each of the plurality of nodes within the hierarchical tree structure.
- 4. The architecture of claim 3, wherein the one or more levels of the splitting engine comprises at least one storage device for storing at least some of the pointer entries.
- 5. The architecture of claim 4, wherein the storage device is selected from a group comprising: logic gates and registers, Content Addressable Memory (CAM or TCAM) and Random Access Memory (SRAM or DRAM).
- 6. The architecture of claim 5, wherein if the splitting engine comprises only one level, the splitting engine is configured for storing only the pointer entries that are assigned to the plurality of leaf nodes within a single pointer table.
- 7. The architecture of claim 5, wherein if the splitting engine comprises more than one level, the splitting engine is configured for storing all of the pointer entries by forming a different pointer table for each set of pointer entries that reside at each level of the hierarchical tree structure.
- 8. The architecture of claim 7, wherein each level of the splitting engine comprises the same or a different type of storage device.
- 9. The architecture of claim 7, wherein a first portion of the pointer entries are stored within a first level of the splitting engine in a first storage device, wherein a second portion of the pointer entries are stored within a second level of the splitting engine in a second storage device, and wherein a third portion of the pointer entries, overlapping the first and second portions, are stored within the first and the second storage devices.
- 10. The architecture of claim 9, wherein each of the pointer entries residing within the first and third portions comprises a number of bits that is dependent on the level of the hierarchical tree structure at which the pointer entries respectively reside.
- 11. The architecture of claim 10, wherein each of the pointer entries residing solely within the second portion comprises a number of bits equal to the level of the hierarchical tree structure at which the pointer entries reside minus a number of bits contributed to a parent pointer entry residing within the third portion.
- 12. The architecture of claim 11, further comprising an interface manager configured for translating a search instruction into a search key and sending the search key to the splitting engine, which responds by sending the search key and a corresponding pointer entry to the array of data processing units.
- 13. The architecture of claim 12, wherein the array of data processing units (DPUs) comprises one or more DPU blocks, each of which includes:
a data storage sub-block configured for storing one or more of the multiple sub-databases of prefix entries, or at least a portion thereof;
a data extractor sub-block configured for receiving the pointer entry sent from the splitting engine, and if the received pointer entry points to a sub-database within the data storage sub-block, extracting the sub-database pointed to by the pointer entry; and
a data processor sub-block configured for determining whether a sub-database has been extracted from the data storage sub-block, and if so, whether the extracted sub-database contains a prefix entry matching the search key sent from the interface manager.
- 14. The architecture of claim 13, wherein the data storage sub-block comprises a storage device selected from a group comprising: Content Addressable Memory (CAM or TCAM) and Random Access Memory (SRAM or DRAM).
- 15. The architecture of claim 13, wherein each sub-database within the data storage sub-block is associated with a unique pointer entry, and wherein only the bits that follow the unique pointer entries are stored as prefix entries within the data storage sub-block.
- 16. The architecture of claim 13, wherein the data extractor sub-block and the data processor sub-block can be implemented as either hard-coded or programmable logic blocks.
- 17. The architecture of claim 16, wherein the data extractor sub-block is further configured for transforming the extracted sub-database into a format that can be read by the data processor sub-block.
- 18. A method for forming a pointer entry database, which can be used for locating a prefix entry within a forwarding database that has been split into multiple sub-databases of bounded size and number, the method comprising:
forming a hierarchical tree structure of the forwarding database, wherein the hierarchical tree structure comprises a plurality of nodes extending between a root node and a plurality of leaf nodes, and wherein each of the plurality of leaf nodes corresponds to one of the multiple sub-databases;
assigning a pointer entry to each of the plurality of nodes within the hierarchical tree structure; and
storing first and second sets of the pointer entries within first and second portions of the pointer entry database, respectively, wherein the first set of pointer entries is configured for locating a child pointer entry within the first or second portions of the pointer entry database, and wherein the second set of pointer entries is configured for locating (i) a child pointer entry within the second portion of the pointer entry database or (ii) the prefix entry within one of the multiple sub-databases.
- 19. The method of claim 18, wherein the first set of the pointer entries each comprise a number of bits that is dependent on the level of the hierarchical tree structure at which the pointer entries reside.
- 20. The method of claim 18, wherein the second set of the pointer entries each comprise a number of bits equal to the level of the hierarchical tree structure at which the pointer entries reside minus a number of bits contributed to a parent pointer entry residing within the first set of pointer entries.
- 21. A method for improving the performance of a network search engine (NSE), the method comprising using one or more levels of a splitting engine to narrow down a search space within the NSE by: (i) dividing a database of prefix entries into multiple sub-databases, each bounded in size between a minimum value and a maximum value, and (ii) after using a search key for searching through one level of the splitting engine, searching only a remaining portion of the search key in a lower level of the splitting engine, or in one of the multiple sub-databases, to reduce power consumption and search latency in the NSE.
- 22. The method of claim 21, wherein said using one or more levels of a splitting engine to narrow down a search space further comprises:
forming a hierarchical tree structure of the database, wherein the hierarchical tree structure comprises a plurality of nodes extending between a root node and a plurality of leaf nodes, and wherein each of the plurality of leaf nodes corresponds to one of the multiple sub-databases of prefix entries; and
forming a pointer entry database by assigning a pointer entry to each of the plurality of nodes within the hierarchical tree structure.
- 23. The method of claim 22, further comprising using different methods to search through the pointer entry database created by the splitting engine, wherein the different methods comprise a binary search, a trie search, a linear search or a parallel search of the pointer entry database.
- 24. The method of claim 22, further comprising achieving fixed latency searches for all search key widths through the use of fixed latency memory blocks for storing the pointer entry database and the multiple sub-databases.
- 25. The method of claim 21, further comprising separating data storage and data processing operations through use of a data extractor, wherein said separating improves the performance of the NSE by allowing compressed data to be stored in the database and the use of a hard-coded data processor.
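The sketches that follow are illustrative only and are not part of the claims or the specification. This first one is a minimal Python sketch of the splitting step described in claims 1-3, 6, 15 and 18: a database of binary prefix strings is divided into sub-databases of bounded size (for brevity, only the maximum bound is enforced), each sub-database is keyed by the pointer entry of its leaf node, and only the bits that follow that pointer entry are stored. The names (`split`, `pointer_table`, `dpu_storage`) and the example prefixes are assumptions chosen for illustration, not the patented implementation.

```python
def split(prefixes, node="", max_size=4):
    """Recursively partition `prefixes` (strings of '0'/'1') into
    sub-databases of at most `max_size` entries, keyed by the pointer
    entry (the root-to-leaf path) of the node that owns each one."""
    if len(prefixes) <= max_size:
        return {node: prefixes}                  # this node becomes a leaf
    sub_dbs = {}
    # A prefix no longer than the current node path cannot be pushed
    # further down, so it stays with this node's pointer entry.
    stay = [p for p in prefixes if len(p) <= len(node)]
    if stay:
        sub_dbs[node] = stay
    for bit in "01":
        child = node + bit
        below = [p for p in prefixes if p.startswith(child)]
        if below:
            sub_dbs.update(split(below, child, max_size))
    return sub_dbs

# Toy database of binary prefixes (illustrative values only).
prefixes = ["0", "00", "010", "0110", "0111", "10", "101", "1100", "1101", "111"]
sub_dbs = split(prefixes, max_size=3)

# Single-level pointer table (claim 6): pointer entry -> sub-database id.
pointer_table = {ptr: i for i, ptr in enumerate(sub_dbs)}

# Claim 15: each sub-database stores only the bits that follow its
# unique pointer entry.
dpu_storage = {pointer_table[ptr]: [p[len(ptr):] for p in db]
               for ptr, db in sub_dbs.items()}
```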
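As a worked example of the bit counts in claims 10-11 and 19-20 (the levels here are assumptions, not values from the specification, and it is assumed for illustration that a pointer entry in the first portion comprises one bit per tree level): a parent pointer entry residing at level 8 of the hierarchical tree and stored in the first and third portions comprises 8 bits, so a child pointer entry residing at level 13 but stored solely in the second portion needs only 13 - 8 = 5 bits, because its first 8 bits are already contributed by the parent pointer entry.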
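A companion lookup sketch for claims 12-13, 15, 21 and 25, reusing the `pointer_table` and `dpu_storage` built above: the splitting engine narrows the search to a single sub-database, and the data processor then searches only the remaining portion of the key against the stored suffix bits. The plain scans stand in for the CAM/TCAM, trie, binary, or parallel searches named in claims 5 and 23, and the fallback to shorter pointer entries simply keeps this simplified sketch correct; it is not meant to mirror how the claimed hardware resolves such cases.

```python
def lookup(key, pointer_table, dpu_storage):
    """Longest-prefix match for `key`, a string of '0'/'1' bits."""
    # Splitting engine: candidate pointer entries are those that are a
    # prefix of the key, tried longest first.
    for ptr in sorted((p for p in pointer_table if key.startswith(p)),
                      key=len, reverse=True):
        suffix = key[len(ptr):]          # remaining portion of the key (claim 21)
        # Data processor: match the remaining bits against the stored
        # suffixes of the selected sub-database.
        matches = [s for s in dpu_storage[pointer_table[ptr]]
                   if suffix.startswith(s)]
        if matches:
            return ptr + max(matches, key=len)
    return None

assert lookup("0111", pointer_table, dpu_storage) == "0111"
assert lookup("1011", pointer_table, dpu_storage) == "101"
```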
PRIORITY AND RELATED APPLICATIONS
[0001] This application claims the benefit of priority to provisional patent application Ser. No. 60/476,033, filed Jun. 5, 2003, which is hereby incorporated in its entirety. This invention also relates to co-pending application Ser. No. 10/402,887, entitled “System and Method for Efficiently Searching a Forwarding Database that is Split into a Bounded Number of Sub-Databases having a Bounded Size,” and Ser. No. 10/809,244, entitled “Network Device, Carrier Medium and Methods for Incrementally Updating a Forwarding Database that is Split into a Bounded Number of Sub-Databases having a Bounded Size,” both by common inventors Pankaj Gupta and Srinivasan Venkatachary and both hereby incorporated in their entirety.
Provisional Applications (1)
| Number | Date | Country |
| --- | --- | --- |
| 60476033 | Jun 2003 | US |