1. Field of the Invention
Embodiments relate to hash tables in processor-based devices.
2. Background Art
Hash tables are used in numerous applications, including applications such as network routing, access control, database access, and the like. In network routing and/or access control, for each packet that enters a network router or forwarding device, an input key is formed based upon one or more fields in the packet and that input key is compared to a hash table in order to determine an action to be taken with respect to that packet. As networks grow, the hash tables may grow larger and may consume relatively large amounts of power.
A “hash function” is used to convert input data into fixed size data. The input data may be referred to as the “key.” The hash function may convert the key into a value that maps to a location in a corresponding hash table at which desired data value(s) may be stored or accessed.
A location that is identified by a value produced by a hash function in a hash table may be referred to as a “bucket.” Consequently, the value produced by the hash function may be referred to as a “bucket identifier.” A bucket may store one or more entries. Although a bucket may hold multiple entries, eventually the hash function may associate more keys with a specific bucket identifier than there are entries contained within the corresponding bucket. In such a case, it may be impossible to store a subsequent data value within the bucket. Such a circumstance is referred to, for example, as a “miss.” Consequently, a metric known as the “first miss utilization (FMU)” is used to describe efficiency or other utility of a given hash table and associated hashing techniques. The FMU refers to the first such miss that occurs during population or other access of the hash table.
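The FMU concept above can be illustrated with a small simulation. The table geometry and the use of Python's built-in `hash` below are assumptions for illustration only, not the hardware implementation:

```python
# Illustrative sketch: populate a bucketed hash table until the first miss,
# then report the first miss utilization (FMU). Geometry is assumed.
import random

NUM_BUCKETS = 256          # assumed bucket-identifier range
ENTRIES_PER_BUCKET = 4     # assumed entries per bucket

def first_miss_utilization(keys):
    """Insert keys until some bucket overflows; return the fraction of
    total table capacity that was filled at that first miss."""
    counts = [0] * NUM_BUCKETS
    inserted = 0
    for key in keys:
        b = hash(key) % NUM_BUCKETS   # stand-in for a hardware hash function
        if counts[b] == ENTRIES_PER_BUCKET:
            break                     # first miss: target bucket already full
        counts[b] += 1
        inserted += 1
    return inserted / (NUM_BUCKETS * ENTRIES_PER_BUCKET)

random.seed(0)
fmu = first_miss_utilization(random.getrandbits(64) for _ in range(10_000))
```

With random keys, the first miss typically occurs well before the table is full, which is why wider buckets (or chained spare buckets, as described below) are used to raise the FMU.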
Hash table performance may be evaluated based upon metrics such as utilization and power efficiency. Utilization of a hash table can be benchmarked by the FMU. Power efficiency is effectively the power consumed in implementing a hash table. The power consumed has two parts to it: leakage power and dynamic power. Leakage power depends upon technology and hash table configuration, and increases with the width of the hash table. Dynamic power is mostly the read power. Writes are generally assumed to be less frequent for most hash table applications.
Conventional hash systems use wider buckets in order to improve the FMU. However, when the bucket width is increased, the power consumption (e.g., leakage power) is increased. The read power consumption in conventional hash systems can be high because of larger bucket sizes and because the entire wide bucket is read into memory when an entry in the bucket is being accessed.
In order to address the ongoing growth of search table size, requirements for reduced power consumption, and faster packet forwarding, systems and methods are desired for more efficient hash tables.
Reference will be made to the embodiments of the invention, examples of which may be illustrated in the accompanying figures. These figures are intended to be illustrative, not limiting. Although the invention is generally described in the context of these embodiments, it should be understood that it is not intended to limit the scope of the invention to these particular embodiments.
While the present disclosure is described herein with reference to illustrative embodiments for particular applications, it should be understood that the invention is not limited thereto. Those skilled in the art with access to the teachings herein will recognize additional modifications, applications, and embodiments within the scope thereof and additional fields in which the invention would be of significant utility.
Embodiments are directed to improving the utilization and power efficiency of hash tables in processing devices. Some embodiments provide for a hash table implementation where each hash table is configured with spare buckets, each of which can be logically chained to one or more buckets in the hash table. By providing, for each of the buckets within the bucket identifier range, one or more chained spare buckets, the number of hash entries that can map to a particular bucket identifier is increased. The increased number of hash entries associated with individual buckets leads to improved FMU.
Moreover, upon access to the hash table, some embodiments provide for reading a reduced set of entries when a bucket is selected. For example, instead of reading all entries of a bucket to which a hash function mapped, exactly one entry can be read using embodiments disclosed herein. Reading a single entry instead of the entire bucket results in substantial savings in dynamic power. Moreover, because access can be made to single entries, the hash table can be implemented with a narrower width (e.g. hash table width in physical memory set to the width of a single entry). Having narrower hash tables reduces the leakage power consumed. Thus, the embodiments disclosed herein provide for hash table implementations that result in improved utilization and power savings.
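One way to picture the single-entry access described above is the following sketch, in which the hash table is one physical entry wide so that a lookup reads exactly one row. The bucket size of four is an assumption for illustration:

```python
# Sketch of addressing a single entry in a narrow hash table (one entry per
# physical row), so a lookup reads one row rather than a whole wide bucket.
ENTRIES_PER_BUCKET = 4  # assumed entries per bucket

def entry_address(bucket_id: int, entry_id: int) -> int:
    """Physical row holding entry `entry_id` of bucket `bucket_id` when
    the table is one entry wide."""
    return bucket_id * ENTRIES_PER_BUCKET + entry_id
```

For example, entry 3 of bucket 2 resides at physical row 11; only that row need be read, which is the source of the dynamic power savings described above.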
Hash table 104 is configured to store tables of data entries, such as, but not limited to, one or both of forwarding table entries and ACL entries, or other types of data that may be looked up by one or more applications.
Processor 108 can be a central processing unit (CPU) or other processor, and memory 110 can include any of, but is not limited to, dynamic random access memory (DRAM), static random access memory (SRAM), and hardware registers. Processor 108 is configured to use hash table controller 102 for some or all of its search operations. For example, processor 108 may rely upon hash table controller 102 for all of its forwarding table lookups and ACL lookups. Upon receiving a packet for which a table lookup, such as a forwarding table lookup or ACL lookup, is required, processor 108 may submit the search to hash table controller 102. Processor 108 may form a search key (e.g., search expression or search bit string, also referred to as a lookup key) from the packet's header fields, which is then submitted to hash table controller 102.
Processor 108 transmits a search key to hash table controller 102 over interface 148 and receives an action or a lookup table entry (e.g. data) that matches the search key from hash table controller 102. Hash table controller 102 communicates with control table 106 and hash table 104 to obtain the lookup entry stored in hash table 104 corresponding to the search key received from processor 108. Processor 108 may also transmit data to be stored in hash table 104. Hash table controller 102 receives the data to be stored, and communicates with control table 106 and hash table 104 to store the data in hash table 104 and update one or both of hash table 104 and control table 106 as required.
In the hash table controller 102, the search key is first processed in hash function module 120. Hash function module 120 may include one or more hash functions that take a search key as input, and determine a corresponding bucket identifier. Examples of hash functions are well known in the art, and any conventional or new hash function may be used in hash functions 132 and 134. In the example embodiment shown in
Consequently, hash function module 120 may be configured to implement a dual hashing technique in which each hash function 132, 134 maps a received key to a different bucket identifier corresponding to a portion (e.g. bucket) of the hash table 104. The first hash function 132, for example, may be different from the second hash function 134. Consequently, each hash function 132, 134 may be operable to input a single key and output a corresponding bucket identifier value, thereby resulting in two different bucket identifier values.
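The dual hashing described above can be sketched as follows. The specific digests (SHA-256, MD5) and the bucket-identifier range are assumptions standing in for hash functions 132 and 134; the hardware functions would differ:

```python
# Illustrative sketch of dual hashing: two different hash functions map the
# same key to two candidate bucket identifiers.
import hashlib

NUM_BUCKETS = 2048  # assumed bucket-identifier range

def hash_1(key: bytes) -> int:
    """Stand-in for hash function 132."""
    return int.from_bytes(hashlib.sha256(key).digest()[:8], "big") % NUM_BUCKETS

def hash_2(key: bytes) -> int:
    """Stand-in for hash function 134."""
    return int.from_bytes(hashlib.md5(key).digest()[:8], "big") % NUM_BUCKETS

key = b"packet-header-fields"  # hypothetical key formed from a packet header
bucket_a, bucket_b = hash_1(key), hash_2(key)  # two candidate bucket ids
```

Because the two functions differ, a key that collides in one bucket may still find room in the bucket identified by the other function.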
Specific examples of the hash table 104 are illustrated in detail below with respect to
In the example of
In the example of
The hash resolution manager 124 operates to resolve the location in which a particular key is present. In an embodiment, the location of a particular key is resolved to a bucket and a particular entry in that bucket, represented respectively by a bucket identifier and a bucket entry identifier. As described below, in some embodiments, the control table 106 stores pivot information and entry identifying bit patterns (e.g. test bit positions or TBPs) for some buckets. The TBPs for a bucket are unique bit strings for each of the keys stored in that bucket. The hash resolution manager 124 can operate to determine the TBPs and control words associated with buckets.
In the example of
In an example embodiment, one or more of hash table controller 102, hash table 104, and control table 106 may be implemented as hardware components of the apparatus 100. For example, hash table 104 and control table 106 may be constructed using hardware memories (e.g., registers and associated control elements). That is, such control elements may be considered to be included within the hash table controller 102, e.g., hash table operations manager 122 and the hash resolution manager 124. Further, the hash function module 120 also may be implemented in hardware, using components for executing the hash functions 132, 134.
Thus, in some implementations, it is possible to configure hash table controller 102 entirely in hardware to perform the functions described herein with respect to hash table 104 and control table 106. Nonetheless, hash table controller 102, or portions thereof, may be configured in software. In addition to items 102-108, hash system 100 may include one or more other processors and/or logic blocks. In some embodiments, hash table controller 102 is formed on a single chip. System 100 may be part of a bridge, switch, router, gateway, server, proxy, load balancing device, network security device, database server, or other processing device.
According to some embodiments, any spare bucket can be chained to one or more buckets in first portion 204. Moreover, spare buckets can be chained to one another, providing a potentially large expansion of the number of entries that can be mapped to a bucket in the hash table, thereby substantially improving utilization.
As illustrated in the logical view of hash table 202 in
As described above in relation to the logical view, the physical view of hash table 220 can be viewed as a first portion 222 of buckets that map to a bucket identifier and a second portion 224 of spare buckets. Bucket X begins at 226 and spans four entries. Bucket Y, starting at 228, is chained 230 to bucket X.
Control table 304, in addition to the chaining information (e.g. bucket 600 chained to bucket 2048), may also include other information that facilitates the resolution of bucket identifiers. As shown in 318, a pivot value may be stored in the corresponding entry in the control table, where the pivot provides a quick and efficient technique to determine whether a key that maps to a particular bucket in the first portion 308 is actually stored in the first portion 308 or in a chained spare bucket. According to an embodiment, the pivot is configured such that all keys having a value less than the pivot are stored in the corresponding bucket in the first portion 308 and all keys having a value equal to or greater than the pivot are stored in the corresponding chained spare bucket. Moreover, control table 304 can also include TBPs and control words that provide for identifying the precise entry corresponding to a search key. TBPs and control words are further described in relation to
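The pivot rule above reduces to a single comparison. A minimal sketch, using the example bucket identifiers 600 and 2048 from the chaining description and a hypothetical pivot value:

```python
# Sketch of the pivot check: keys below the pivot resolve to the target
# bucket; keys at or above it resolve to the chained spare bucket.

def resolve_bucket(key_value: int, pivot: int,
                   target_bucket: int, spare_bucket: int) -> int:
    """Select the bucket that actually holds the key, per the pivot rule."""
    return target_bucket if key_value < pivot else spare_bucket
```

For example, with a pivot of 10, a key value of 5 resolves to bucket 600 while a key value of 10 resolves to the chained spare bucket 2048, so only one of the two buckets need be consulted.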
Table 602 in
Resolution tree 604 illustrates an organization of the keys A-D based upon the TBPs 602. The ovals represent bit positions and triangles represent the keys. As illustrated in resolution tree 604, each key is represented by a leaf node (i.e., a node with no children). The tree can be configured to be of any shape.
The root of resolution tree 604, corresponding to bit n, indicates that of the four entries (A-D), only A has a 0 in bit position n, and the rest have a 1 in that position. Similarly each node may have two child branches classifying entries based upon their respective values at the corresponding bit positions. The tree organizing the bit positions can be used to determine the layout of the control word (e.g. 804 shows the formation of a control word) that provides for locating individual entries in hash buckets.
It should also be noted that storing three bits (e.g., bits n, m, and l) is sufficient to uniquely identify a key in a bucket of 4 entries.
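The resolution of one of four entries by testing three bit positions can be sketched as follows. The tree shape (bit n separating key A from the rest, as in the description of resolution tree 604) and the bit positions are illustrative assumptions:

```python
# Sketch of resolving a key among four bucket entries by testing three
# bit positions (n, m, l), without reading the other entries.

def bit(key: int, pos: int) -> int:
    return (key >> pos) & 1

def resolve_entry(key: int, n: int, m: int, l: int) -> int:
    """Walk a small resolution tree: bit n separates key A from the rest,
    bit m then separates key B, and bit l separates key C from key D."""
    if bit(key, n) == 0:
        return 0   # key A's slot: only A has 0 at bit n
    if bit(key, m) == 0:
        return 1   # key B's slot
    if bit(key, l) == 0:
        return 2   # key C's slot
    return 3       # key D's slot
```

Three tested bits suffice because each test halves the remaining candidates, matching the note above that three bits uniquely identify a key in a four-entry bucket.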
Table 622 and corresponding logical tree 624 shown in
Resolution tree 702 includes 7 nodes representing 7 TBPs, and 8 leaf nodes representing the respective entries (e.g. stored keys). Tree 702 may be formed in the same or similar manner to that described with respect to
According to an embodiment, the bit positions and identifiers for the keys can be arranged in a control word (e.g. 804 illustrates the formation of a control word) in the sequence shown by the dotted arrows on tree 702. In the example shown, starting at the root, branches corresponding to a bit position value of 1 (alternatively, branches for bit position value 0) are traversed until a key at a leaf node is encountered. The leaves (or the identifiers for the keys represented by them) are selected for inclusion in the control word in the order in which they are encountered during the tree traversal. After the first leaf is encountered, the traversal proceeds, in sequence, to the leaf nodes or TBPs of the subtrees of greatest depth that share the longest common traversed path with the immediately preceding selected leaf.
For example, using tree 702 and following the path marked by the dotted arrows (the sequence of traversal is also indicated by the shaded numbers next to nodes and leafs), the following ordering of the entries may be obtained (entries identified by the shaded number adjacent to the tree leaves): 5, 6, 8, 9, 10, 13, 14, and 15.
When the bucket is empty (i.e. no entries in the bucket), the control word is empty and there is no tree, as shown in 812.
As shown in 814, key A is added to the hash bucket as the first entry, the corresponding resolution tree is formed based upon a selected bit position in A with a value of 0, and the control word is updated by storing the selected bit position (“first selected bit position”) indicator (e.g., TBP0) and an identifier for key A. Note that the keys A-D used for the example in
Next, as shown in 816, key B is added. Key B differs from key A at the first selected bit position, and therefore is simply added as the second branch of the current root of the tree. Accordingly, an identifier for B is added to the control word following the first selected bit position indicator, and the identifier for key A. The order in the control word, at 816, reflects the order of tree traversal: root, right branch (e.g. branch for bit value 0) to leaf A, and left branch to leaf B. In another embodiment, key A and B may not differ in the current first selected bit position, and therefore the current first selected bit position may be changed to a bit position in which keys A and B can be distinguished. If such a change to the first selected bit position is made, the order in the control word may be either first selected bit position, key A and key B, or first selected bit position, key B and key A, depending on which of key A and B have a value 0 at the first selected bit position.
Next, as shown in 818 and alternatively in 818′, key C is added to the bucket following A and B. 818 shows the tree and control word when C has a “11” in the first and second selected bit positions. Keys B and C differ in the second selected bit position. Thus, a subtree with the second selected bit position as root and keys B and C as child nodes is added to the root node (the first selected bit position) of the current tree. Accordingly, in the control word, the currently existing identifier for key B is removed or overwritten for the subtree with the second selected bit position as root. The traversal for the tree at this stage may be: the first selected bit position, key A, the second selected bit position, key B, and key C.
818′ shows the tree and control word when C has a “00” in the first and second selected bit positions. Keys A and C differ in the second selected bit position. Thus, a subtree with the second selected bit position as root and keys A and C as child nodes is added to the root node (the first selected bit position) of the current tree. Accordingly, in the control word, the entries are overwritten for the subtree with the second selected bit position as root. The traversal for the tree at this stage may be: the first selected bit position, the second selected bit position, keys C, A, and B. It should be noted that the first selected bit position in 818 has moved to become the second selected bit position in 818′, and the newly determined TBP is represented as the first selected bit position.
Assuming the current configuration is as shown in 818, when key D is added to the bucket, the control word and tree may be as shown in 820 or, alternatively, in 820′. 820 illustrates the case in which key D differs from key A at a third selected bit position. The control word reflects the traversal of the tree: the first selected bit position, the second selected bit position, key D, key A, the third selected bit position, key B, and key C.
820′ illustrates the case in which key D differs from key C at the third selected bit position. The control word reflects the traversal of the tree: the first selected bit position, key A, the second selected bit position, key B, the third selected bit position, keys D and C. Note that each of the first, second, and third selected bit positions may represent, for example, any one of bits 0-127 in a 128-bit key.
At operation 902, a key (e.g. insert key) is received. For example, a key is received at the hash table controller 102 from host processor 108.
At operation 904, a bucket identifier is determined. The bucket identifier may be determined by a hash function, such as one of hash function 132 or 134 shown in
At operation 906, a control table is accessed using the determined bucket identifier. The control table may be a control table such as control table 106 shown in
At operation 908, it is determined whether the target bucket in the hash table is full. The “target bucket” is the bucket in the hash table that maps to the determined bucket identifier. According to an embodiment, the determination of whether the target bucket is full may be made based upon the control table. In some embodiments, the presence or absence of chain information (e.g. whether or not the bucket is chained to a spare bucket) can be used for the bucket full/not full determination. In other embodiments, factors such as the number of TBPs representing the keys that are stored in the control table or a flag indicating whether or not the corresponding bucket is full may be used in the determination.
If, at operation 908, it is determined that the target bucket is not full, then at operation 910, a new TBP is determined for the key that is to be inserted in the hash table. The determination of a TBP for a newly added key is described above in relation to
At operation 912, the control table is updated with the TBP for the new entry. As described above, a corresponding control word for each bucket of the hash table is maintained in the control table (or in the hash table). The formation of the control word is described above in relation to
At operation 914, the new entry is added to the bucket as determined by the bucket identifier.
If, at operation 908, it was determined that the target bucket was full, then method 900 proceeds to operation 916. At operation 916, the new key is stored in a spare bucket. Spare buckets were described in relation to
The spare bucket may have already been selected (e.g. chained to the target bucket) in a previous operation. If the spare bucket has not yet been identified (e.g. the new entry is the first entry for the spare bucket), then a spare bucket is selected based upon configured criteria. For example, the spare bucket with the lowest bucket index may be selected.
At step 918, it is determined whether the resolution of the bucket is to be made based only upon TBPs or whether it is to be made based upon TBPs and a pivot. In some embodiments, this may be a configuration option and a given hash system would operate only in one of the modes of resolution. In another embodiment, based upon the presence or absence of the pivot, applications can choose either mode.
If, at operation 918, it is determined that the hash table is to be resolved using only the TBPs, method 900 proceeds to operation 920. At operation 920 TBPs are determined for the target bucket and the spare bucket together, and stored together.
If, at operation 918, it is determined that the resolution is to be based upon the TBPs and the pivot, then method 900 proceeds to operation 922. At operation 922, a pivot is determined. As described above, a pivot may be selected so that all key values that map to the target bucket but are less than the pivot are stored in the target bucket and the other key values that map to the target bucket are stored in the spare bucket. The determination of a pivot is described above with respect to
In some embodiments, if no pivot can be determined based on the current distribution of keys in the target bucket and the spare bucket, a reordering of at least some of the entries in the target bucket and the spare bucket may be performed. The pivot can then be determined based upon the reordered distribution of keys. The reordering may include software-based reordering of the keys.
At operation 924, separate TBP sets are determined and stored for the target bucket and spare bucket. The determination of TBPs is described above in relation to
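The bucket-selection portion of the insert flow (operations 908-916) can be sketched as follows. The bucket geometry, the spare-bucket pool, and the use of Python's built-in `hash` are illustrative assumptions, and maintenance of TBPs, pivots, and control words (operations 910-912 and 918-924) is omitted:

```python
# Sketch of insert: place a key in its target bucket, or in a chained
# spare bucket when the target is full (operations 908-916).
BUCKET_CAPACITY = 4   # assumed entries per bucket
NUM_BUCKETS = 8       # toy primary range; ids 8-11 act as spare buckets

def insert_key(key, buckets, chain, free_spares):
    """buckets: bucket id -> list of keys; chain: target id -> spare id."""
    target = hash(key) % NUM_BUCKETS
    if len(buckets[target]) < BUCKET_CAPACITY:
        buckets[target].append(key)          # target not full: normal insert
        return target
    if target not in chain:                  # first overflow: chain a spare,
        chain[target] = free_spares.pop(0)   # here the lowest-index free spare
    spare = chain[target]
    buckets[spare].append(key)               # operation 916: store in spare
    return spare

buckets = {b: [] for b in range(NUM_BUCKETS + 4)}
chain, free_spares = {}, [NUM_BUCKETS + i for i in range(4)]
for k in range(36):                          # 36 keys over 32 primary slots
    insert_key(k, buckets, chain, free_spares)
```

All 36 keys are stored even though the primary buckets hold only 32 entries, which is the utilization benefit of chaining described above.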
At operation 1002, a key (e.g. search key or insert key) is received. For example, a key is received at the hash table controller 102 from host processor 108.
At operation 1004, a bucket identifier is determined. The bucket identifier may be determined by a hash function, such as one of hash function 132 or 134 shown in
At operation 1006, a control table is accessed using the determined bucket identifier. The control table may be a control table such as control table 106 shown in
At operation 1008, the TBPs stored in the control table are looked up. The lookup includes comparing a corresponding bit pattern derived from the search key to the TBPs stored in the corresponding entry in the control table.
At operation 1010, it is determined whether or not the compare operation resulted in a hit. It should be noted that the bit pattern of the search key would match at most one TBP in the set of TBPs stored for the bucket. In some embodiments, the operation 1010 always returns a hit.
If at operation 1010, it is determined that the compare is a hit, then method 1000 proceeds to operation 1012. At operation 1012, the target bucket is identified. In this embodiment, the identification of the target bucket is based upon the TBPs. The target bucket is identified based upon which TBP (e.g. TBP for target bucket or TBP for spare bucket) is hit.
At operation 1014, the location of the matching entry within the bucket is identified. The location of the matching entry may be represented as a bucket entry identifier. This identification may be based upon the control word stored in the corresponding bucket of the control table. Control words are described above in relation to
Having determined the target bucket and the location of the entry within the target bucket, at operation 1016, the entry is accessed in the hash table.
Following operation 1016, at operation 1018, the accessed key and the received key (e.g. search key) are compared to confirm the hit/match.
If, at operation 1010, it is determined that there was no hit for the search key in the TBPs, then at operation 1020, it is determined that the search key is not present in the hash table.
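The lookup flow of method 1000 can be sketched as follows. Here a flat per-entry bit-pattern table stands in for the control-word tree, an assumption made for brevity; the key point is that only the single matching entry is read from the hash table (operations 1008-1020):

```python
# Sketch of lookup: derive the search key's bits at the test bit positions,
# match against per-entry patterns, and read only the one matching entry.

def lookup(search_key, tbps, patterns, entries):
    """tbps: bit positions; patterns: expected bit tuple per entry slot."""
    derived = tuple((search_key >> p) & 1 for p in tbps)   # operation 1008
    for slot, pattern in enumerate(patterns):              # at most one hit
        if pattern == derived:
            candidate = entries[slot]                      # single-entry read
            # operation 1018: confirm the hit against the full key
            return candidate if candidate == search_key else None
    return None                                            # operation 1020: miss
```

The final full-key comparison is needed because TBP matching alone can produce a false hit for a key not stored in the table, while it can never produce a false negative.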
At operation 1102, a key (e.g. search key or insert key) is received. For example, a key is received at the hash table controller 102 from host processor 108.
At operation 1104, a bucket identifier is determined. The bucket identifier may be determined by a hash function, such as one of hash function 132 or 134 shown in
At operation 1106, a control table is accessed using the determined bucket identifier. The control table may be a control table such as control table 106 shown in
At operation 1108, the pivot is looked up in order to determine the bucket.
At operation 1110, the TBPs stored in the control table are looked up based upon the pivot. The pivot is used to identify which set of TBPs are to be compared to the search key. A separate set of TBPs is stored for the target bucket and the spare bucket. The lookup includes comparing a corresponding bit pattern derived from the search key to the TBPs stored in the corresponding entry in the control table. The forming of TBPs was described above in relation to
At operation 1112, it is determined whether or not the compare operation resulted in a hit. It should be noted that the bit pattern of the search key would match at most one TBP in the set of TBPs stored for the bucket. In some embodiments, the operation 1112 always returns a hit.
If, at operation 1112, a hit is detected, then at operation 1114, the matching entry within the bucket is identified. This identification may be based upon the control word stored in the corresponding bucket of the control table. Control words are described above in relation to
Having determined the target bucket and the location of the entry within the target bucket, at operation 1116, the entry is accessed in the hash table and compared to the search key, for example, to confirm the hit.
If, at operation 1112, a hit is not detected (e.g. a miss occurs), then at operation 1118, it is determined that the search key is not present in the hash table.
As described above, the pivot enables a lookup operation to determine which bucket index (e.g. target bucket or chained spare bucket) is to be selected. The pivot can be based upon a small portion of the keys, which can be used to uniquely distinguish among the keys in the target bucket and the corresponding chained bucket. According to an embodiment, a pivot may be selected based upon a technique such as Group Vector Correlation. Method 1200 illustrates a method of determining a pivot for a chained bucket.
The keys stored in both buckets, the target bucket and the chained bucket, are considered. At operation 1202, each key is divided into groups of k bits each. The value k can be preconfigured.
At operation 1204, corresponding ones of the groups are clustered into group vectors. For example, the first k bits of each key belong to a first cluster, the second k bits of each key belong to a second cluster, and so on. Thus, each group vector includes groups of k bits where the k bits are from the same bit positions. For example, group vector 0 may include the groups of k bits drawn from bit positions 0 . . . 15 of each key.
At operation 1206, a correlation measure is determined for each of the group vectors. The correlation can be based upon the number of unique values in the group vector.
At operation 1208, it is determined whether any of the group vectors have a number of unique values that is greater than or equal to half the number of keys to be resolved. For example, if a pivot is being sought for a target bucket and a spare bucket, each having 4 entries, then a group vector with four or more unique values is selected.
At operation 1212, one of the group vectors satisfying the test condition of operation 1208 is selected for deriving the pivot. The group vector selected may be any of the group vectors that satisfied the test condition. According to an embodiment, the selected group vector has the highest number of unique entries.
At operation 1214, the pivot is determined based upon the selected group vector. According to an embodiment, one of the values in the selected group vector can be chosen such that approximately half of the values in the group are less than the chosen value and the other half are equal to or greater than the chosen value.
At operation 1216, the chosen pivot value and the chosen group vector identifier are stored in the corresponding entry of the control table. The chosen group vector identifier is stored so that, at lookup time, the corresponding bits can be considered when determining the value of the search key to be compared against the pivot.
After operation 1216, method 1200 terminates.
If, at operation 1208, it is determined that no group vectors have the required number of unique values, then at operation 1210, it is determined that a pivot cannot be determined. Upon the determination that no pivot is available, chaining may not be performed for that pair of buckets. After operation 1210, method 1200 terminates.
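Method 1200 can be sketched as follows. The group width, key width, and median-based pivot choice are illustrative assumptions consistent with operations 1202-1214:

```python
# Sketch of Group Vector Correlation: split keys into K-bit groups, cluster
# by position into group vectors, and derive a pivot from the vector with
# the most unique values.
K = 4          # assumed group width in bits
KEY_BITS = 16  # assumed key width

def k_bit_groups(key: int):
    """Operation 1202: divide a key into K-bit groups, most significant first."""
    return [(key >> shift) & ((1 << K) - 1)
            for shift in range(KEY_BITS - K, -1, -K)]

def choose_pivot(keys):
    """Operations 1204-1214: cluster groups into group vectors, select a
    sufficiently discriminating vector, and return (vector id, pivot)."""
    vectors = list(zip(*(k_bit_groups(k) for k in keys)))   # operation 1204
    best = max(range(len(vectors)), key=lambda i: len(set(vectors[i])))
    if len(set(vectors[best])) * 2 < len(keys):
        return None, None          # operation 1210: no usable pivot exists
    values = sorted(vectors[best])
    return best, values[len(values) // 2]   # median value as the pivot
```

When no group vector has enough unique values, `choose_pivot` returns no pivot, corresponding to the case where chaining is not performed for that pair of buckets.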
The method works by finding bits that differ in the group of entries and storing the positions of such bits. These stored bit positions are referred to as Test Bit Positions or TBPs. The essential characteristics of the algorithm are: no false negatives, freedom from aliasing issues, and scalability for multi-way hash tables.
Method 1300 may be performed upon the initiation of an insert operation. Method 1300 starts at operation 1302. At operation 1302, the existing TBPs for the corresponding bucket(s) are read and applied to the new key.
At operation 1304, a matching entry is determined. There can be only one existing entry that matches the new entry to be added at all of the existing TBPs. The new TBP to be determined is the one that differentiates these two entries. This is the basic principle of operation of the incremental TBP update method.
At operation 1306, a new TBP for the new key is determined in order to differentiate the new entry from the matched entry. In embodiments, only a single one of the existing entries is accessed in order to update the control word. For example, the sole matching entry is read out and a TBP differentiating it from the incoming entry is stored.
At operation 1308, the control word is updated for the corresponding bucket in accordance with the revised set of TBPs. A control word is maintained for each hash bucket in the hash table. This control word consists of the TBPs and identifiers (e.g. pointers) to the individual entries in the hash bucket as resolved by the TBPs. The order in which the TBPs and the entry identifiers are specified in the control word is a function of a resolution tree encountered for the particular hash bucket. Updating the control word may include forming a resolution tree based upon the revised set of TBPs and traversing that resolution tree in order to determine how the TBP and identifiers to corresponding entries are to be stored in the control word. The forming of the resolution tree and the traversal of it to determine the control word is described above in relation to
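The incremental TBP update of operations 1302-1306 can be sketched as follows. The choice of the lowest differing bit as the new TBP is one possible selection policy, assumed here for illustration:

```python
# Sketch of incremental TBP update: apply existing TBPs to the new key to
# find its single colliding entry, then pick one differentiating bit.

def find_match(new_key, entries, tbps):
    """Operation 1304: the single stored entry that agrees with the new
    key at every existing test bit position."""
    sig = lambda k: tuple((k >> p) & 1 for p in tbps)
    return next(e for e in entries if sig(e) == sig(new_key))

def new_tbp(new_key: int, matched_key: int) -> int:
    """Operation 1306: a bit position differentiating the two keys; here
    the lowest differing bit (one possible policy)."""
    diff = new_key ^ matched_key
    return (diff & -diff).bit_length() - 1   # index of lowest set bit
```

Only the one matched entry is read out, which is what keeps the update inexpensive relative to re-deriving TBPs over the whole bucket.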
As would be appreciated by one of skill in the art, implementations of the various techniques described herein may be implemented in digital electronic circuitry, or in computer hardware, firmware, software, or in combinations of them. Implementations may be implemented as a computer program product, i.e., a computer program tangibly embodied in an information carrier, e.g., in a machine-readable storage device, for execution by, or to control the operation of, data processing apparatus, e.g., a programmable processor, a computer, or multiple computers. A computer program, such as the computer program(s) described above, can be written in any form of programming language, including compiled or interpreted languages, and can be deployed in any form, including as a stand-alone program or as a module, component, subroutine, or other unit suitable for use in a computing environment. A computer program can be deployed to be executed on one computer or on multiple computers at one site or distributed across multiple sites and interconnected by a communication network.
Method steps may be performed by one or more programmable processors executing a computer program to perform functions by operating on input data and generating output. Method steps also may be performed by, and an apparatus may be implemented as, special purpose logic circuitry, e.g., an FPGA (field programmable gate array) or an ASIC (application-specific integrated circuit).
The representative functions of the hash system described herein may be implemented in hardware, software, or some combination thereof. For instance, methods 900, 1000, 1100, 1200 and 1300 can be implemented using computer processors, computer logic, ASIC, FPGA, DSP, etc., as will be understood by those skilled in the arts based on the discussion given herein. Accordingly, any processor that performs the processing functions described herein is within the scope and spirit of the present invention.
The present invention has been described above with the aid of functional building blocks illustrating the implementation of specified functions and relationships thereof. The boundaries of these functional building blocks have been arbitrarily defined herein for the convenience of the description. Alternate boundaries can be defined so long as the specified functions and relationships thereof are appropriately performed.
The foregoing description of the specific embodiments will so fully reveal the general nature of the invention that others can, by applying knowledge within the skill of the art, readily modify and/or adapt for various applications such specific embodiments, without undue experimentation, without departing from the general concept of the present invention. Therefore, such adaptations and modifications are intended to be within the meaning and range of equivalents of the disclosed embodiments, based on the teaching and guidance presented herein. It is to be understood that the phraseology or terminology herein is for the purpose of description and not of limitation, such that the terminology or phraseology of the present specification is to be interpreted by the skilled artisan in light of the teachings and guidance.
The breadth and scope of the present invention should not be limited by any of the above-described exemplary embodiments, but should be defined only in accordance with the following claims and their equivalents.