The present disclosure is related to systems and methods that facilitate determining criteria for selection of data for a non-volatile cache. In one embodiment, host read operations affecting a first logical block address of a data storage device are tracked. The data storage device includes a main storage and a non-volatile cache that mirrors a portion of the data of the main storage. One or more criteria associated with the host read operations are determined. The criteria are indicative of future read requests of a second logical block address associated with the first logical block address. Data of at least the second logical block address is copied from the main storage to the non-volatile cache if the criteria meet a threshold.
These and other features and aspects of various embodiments may be understood in view of the following detailed discussion and accompanying drawings.
In the following diagrams, the same reference numbers may be used to identify similar/same components in multiple figures.
In the following description of various example embodiments, reference is made to the accompanying drawings that form a part hereof, and in which is shown by way of illustration various example embodiments. It is to be understood that other embodiments may be utilized, as structural and operational changes may be made without departing from the scope of the claims appended hereto.
The present disclosure is generally related to hybrid data storage devices, such as solid-state hybrid hard disk drives (HDDs). Generally, a hybrid HDD utilizes a combination of non-volatile, solid-state memory (e.g., flash memory) and conventional HDD media (e.g., magnetic disks) to provide performance approaching that of a solid-state drive (SSD), yet with the costs commonly associated with HDDs. In the present disclosure, a hybrid drive architecture is described that uses an SSD as a non-volatile cache for HDD media, and is sometimes referred to as an HDD/SSD hybrid storage device. However, it will be appreciated that the concepts described herein may be applicable to any mixed-storage-media hybrid device that utilizes similar caching mechanisms.
An HDD/SSD hybrid drive combines features and technologies of conventional HDDs and SSDs. A hybrid drive may include a main store (e.g., one or more rotating magnetic disks), a volatile primary cache, and a non-volatile secondary cache. The primary cache may use a relatively small amount of volatile random access memory (RAM), while the secondary cache may use a relatively large amount of non-volatile solid-state memory that is kept coherent with the main store. The description below is directed to, among other things, a scheme for selecting data to be moved into the secondary cache. The scheme is intended to optimize performance under certain environments in which the drive may be highly/fully utilized, such as enterprise storage.
In reference now to
The apparatus 102 includes a host interface 112 that communicatively couples the apparatus 102 to a host 114. The host interface 112 at least provides a mechanism that allows the host 114 to store and retrieve information to/from the main storage media 104. The host interface 112 may utilize standard communication interfaces and protocols, such as SATA, SCSI, eSATA, SAS, USB, etc. The host interface 112 both provides a standard means of communication between the apparatus 102 and host 114 and abstracts operations of the controller 110 and media 106. For example, the host 114 may access data by way of logical block addresses (LBAs) that are mapped internally to a different physical addressing scheme, e.g., based on cylinders, heads, and sectors.
The controller 110 may utilize various internal adaptations of the apparatus 102 to improve performance or otherwise provide efficient operation with the host 114. For example, the apparatus 102 may include a volatile random-access memory (RAM) 116, such as Dynamic-RAM (DRAM), and non-volatile RAM (NVRAM) 118, such as NAND flash memory. These memory devices 116, 118 may have a number of different uses, such as acting as temporary and permanent stores for data needed by the controller 110 during operation. The memory devices 116, 118 may also be used for caching host data, as represented by respective caches 120, 122.
Data retrieved from media 104 or stored to media 104 can be held in one or more caches 120, 122 to improve throughput. The caches 120, 122 have faster access and retrieval times than the media 104, although generally with less storage capacity. While there is some processing and data transfer overhead in using the one or more caches 120, 122, the faster media used by the cache can significantly improve overall performance of the apparatus 102 under many conditions.
In this configuration, the non-volatile cache 122 acts as a secondary cache, being faster but smaller than the main storage media 104. The volatile cache 120 is a primary cache, being faster but smaller than the non-volatile cache 122. Generally, the terms “primary” and “secondary” refer to an immediacy in time and priority to the host interface 112. For example, current read/write requests from the host 114 may be processed via the primary cache 120 to enable host commands to complete quickly. Some of the data stored in the primary cache 120 may either be moved to the secondary cache 122 or synched up with the main store 104 as new requests come in.
The features discussed below for secondary caches may also be applicable in configurations where a non-volatile cache is not “secondary” in relation to other caches, such as where a non-volatile cache is maintained in a parallel, non-hierarchical relation to a volatile cache, or is used without a volatile cache at all. Similarly, the secondary cache may use volatile memory instead of non-volatile memory. In one configuration, non-volatile memory may be chosen as the secondary cache media for cost considerations, and not for data retention upon power loss. As a result, the system may be configured not to store state information that allows recovery of the secondary cache should power be lost. In such a configuration, it may be possible to substitute volatile memory in place of the non-volatile memory without loss of functionality, because the data retention capability of the non-volatile memory is not being used.
The secondary cache 122 in this example may optionally be read-only, in that only data marked for read operations by the host 114 are placed in the secondary, non-volatile cache 122. In such a configuration, data marked for writing are sent directly to the main storage 104, either directly from the host interface 112 or via the primary, volatile cache 120. In some applications and configurations, it has been found that improvements in write speed using the NVRAM 118 instead of the main store 104 may not be sufficient to justify the overhead needed to track and sync write operations between the secondary cache 122 and main store 104.
The apparatus 102 includes functional modules 124 that perform various functions related to moving data in and out of the caches 120, 122. These modules 124 may include any combination of custom logic circuitry, general-purpose processors/controllers, firmware, and software. The modules 124 include a history tracking module 126 that tracks host operations affecting the data storage device, such as host read requests that are received over some period of time. The history tracking module 126 may be coupled to one or more of the host interface 112 and controller 110 in order to gather this data.
An analysis module 128 is configured to determine one or more criteria associated with the host operations. The criteria may be at least indicative of future read requests of certain logical block addresses, such as data that is not yet requested but has a likelihood of being requested in the future. The analysis module 128 may be coupled to the history tracking module to obtain this information. A caching module 130 is configured to cause data from the main storage 106 to be copied to the non-volatile cache 122 if a particular criterion meets a threshold. The caching module 130 may be coupled to at least the analysis module 128 to obtain the criterion, as well as being coupled to the main storage 104, controller 110, host interface 112, and caches 120, 122 to cause these transfers of data to the cache 122. Any combination of the history data, criteria, and current thresholds can be stored in a database 132 that is coupled to any of the modules 124, as well as being coupled to other components of the device 102.
One goal of the secondary cache design is to minimize access to the main storage for read operations that fall within particular access patterns. These patterns may identify some amount of data that is likely to be requested in the future, and which can be moved to the secondary cache before it is requested. For example, some host data operations, such as the reading of a commonly used file, may involve predictable data access patterns (e.g., sequential data reads over contiguous address ranges, repeated access to a particular range of addresses) and so may benefit from secondary caching. Other operations, such as benchmarking tests or updating random pages of virtual memory, may not benefit from secondary caching. For example, the overhead incurred in moving small blocks of data in and out of the secondary cache may outweigh any improvement in data transfer speed provided by the cache.
The embodiments described below may retrieve host-requested data from a secondary, non-volatile cache to avoid overhead in retrieving data from the main storage media. The data may be selected for storage into the secondary cache based on, among other things, a likelihood of being hit on a read request after a miss in the primary cache. As such, data in the secondary cache may be selected to avoid overlap with valid data on the primary cache. The data may also be selected so as to avoid data that can be predicted as conforming to a “raw” access pattern from the main store, e.g., 100% sequential or random reads. A 100% sequential read may be serviced nearly as efficiently from the main store, and/or may be the type of data that has low likelihood of being re-requested (e.g., streaming media). A 100% random read may also have low likelihood of LBAs being requested again, and so the overhead of caching large amounts of random LBAs may offset any benefit in the off-chance that a previously requested LBA is requested again.
The embodiments described herein have features that may be configured for use under enterprise workloads. Computing resources of enterprise servers may be heavily utilized. For example, processors, disks, network interfaces, etc., of an enterprise server may be used at steady, relatively high activity levels for long periods of time. As this pertains to persistent data storage access, it has been found that for enterprise workloads, it may be better to cache read data than write data, and may also be better to cache speculative read data rather than requested read data.
One technique that may be employed to efficiently utilize the non-volatile, secondary cache is to determine a “cache importance metric” to be used as a criterion for whether data should be placed in the secondary cache. This may include data moved from the primary cache to the secondary cache and data moved directly from main storage to the secondary cache. The cache importance metric may use any combination of the following considerations: 1) frequency of recent host read requests for an LBA; 2) spatial distribution of recent host read LBA counts; 3) predicted disk access penalty for LBA cache miss; and 4) write frequency.
The first consideration noted above, frequency of requests, indicates regions in memory experiencing “hot” activity (e.g., recent, repeated accesses). Hot read activity can, by itself or in combination with other factors, indicate LBAs that may be more likely to be read again. Similarly, the second consideration, spatial distribution, relates to how close in the LBA address space (e.g., “spatial locality”) recent requests are grouped. Under some conditions, LBAs in or near recently active LBA ranges may themselves be read in the future. The third consideration, disk access penalty, relates to the characteristics of the architecture, such as particulars of the main data store. For example, if a frequently accessed range of LBAs is stored in physically diverse sectors on the main data store (e.g., belonging to a highly fragmented file), it may take longer to retrieve than similarly accessed data that is stored in a contiguous range of physical sectors. Finally, the fourth consideration, write frequency, can identify data that is less likely to benefit from being in the secondary cache. For example, if the secondary cache only stores read data, the importance metric of an LBA may be reduced based on write activity targeted to the LBA.
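The following is a minimal sketch (in Python) of one possible way to combine the four considerations into a single cache importance metric. The weighted-sum form, the weights, the helper name cache_importance, and the example threshold are assumptions for illustration; the disclosure does not prescribe a specific formula.

```python
# Hypothetical combination of the four considerations into one score.
# Weights and the linear form are assumptions, not taken from the disclosure.
def cache_importance(read_count, spatial_locality, access_penalty, write_count,
                     w_freq=1.0, w_local=1.0, w_penalty=1.0, w_write=1.0):
    """Return a score used to decide whether an LBA range should be cached.

    read_count       -- frequency of recent host reads for the range
    spatial_locality -- 0..1 estimate of how clustered recent reads are
    access_penalty   -- estimated disk time (e.g., seek plus latency) on a miss
    write_count      -- recent writes to the range (reduces importance when
                        the secondary cache holds read-only data)
    """
    score = (w_freq * read_count
             + w_local * spatial_locality * read_count
             + w_penalty * access_penalty)
    score -= w_write * write_count
    return max(score, 0.0)

# A range is promoted to the secondary cache when the score meets a threshold.
PROMOTE_THRESHOLD = 100.0
should_cache = cache_importance(64, 0.8, 12.5, 3) >= PROMOTE_THRESHOLD
```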
An example of determining at least the first and second considerations for particular LBA ranges is shown in the block diagram of
In this example, the entries 202-206 accumulate counts of host read requests targeted to the associated LBA ranges 208-212. For example, read request 214 affects a range of nine addresses within range 208, and so the counter of entry 202 is incremented 216 by nine in response to the request 214 being fulfilled. The read request 214 may trigger incrementing 216 the count even if the requested LBAs are already stored in the secondary cache. For example, recent levels of activity on secondary cache data may indicate that uncached neighboring LBAs (e.g., in an adjacent address range) might benefit from caching. Tracking continued activity for secondary cached ranges may also help determine whether those regions should remain in the cache. In the event some data needs to be ejected from the secondary cache, it may be preferable to eject cached data that has exhibited relatively lower levels of recent activity.
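The following sketch illustrates a zone table of this kind, assuming fixed-size LBA zones; the zone size, the defaultdict representation, and the function name record_read are illustrative assumptions only.

```python
# Minimal zone-table sketch: accumulate per-range read counts for host reads.
from collections import defaultdict

ZONE_SIZE = 1 << 21   # LBAs per zone (an assumption; sizing is discussed below)

zone_counts = defaultdict(int)

def record_read(start_lba, length):
    """Increment the counter of every zone touched by a host read."""
    first_zone = start_lba // ZONE_SIZE
    last_zone = (start_lba + length - 1) // ZONE_SIZE
    for zone in range(first_zone, last_zone + 1):
        # Count only the LBAs of the request that fall inside this zone.
        zone_start = zone * ZONE_SIZE
        zone_end = zone_start + ZONE_SIZE
        overlap = min(start_lba + length, zone_end) - max(start_lba, zone_start)
        zone_counts[zone] += overlap

# A nine-LBA read, as in the example above, adds nine to the zone it falls in.
record_read(start_lba=4096, length=9)
```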
The size of the ranges 208-212 may be preconfigured before runtime or determined at runtime. The size of the ranges 208-212 may be set based on characteristics of the primary and/or secondary caches. For example, if the secondary cache organizes cached content into predetermined sized lines, it may be useful to match the size of the ranges 208-212 to the cache line size(s). In this way, cached lines may be treated as a unit in the zone table 200, being moved in and out of the secondary cache based on values of the counter entries 202-206.
Practical considerations may limit the choice of range sizes. For example, if too fine a granularity is chosen, the zone table 200 might grow too large for the available memory. This is a trade-off that may be considered when deciding whether to use a zone table 200 or a counting Bloom filter implementation as discussed below. When using a zone table 200, the zones may have relatively large granularity (e.g., ~1 GB). Counting Bloom filters tend to provide high-granularity tracking using a relatively small amount of memory, while the zone table may perform better with medium-granularity tracking (e.g., on the order of the capacity of the secondary cache).
The caching importance metrics may be based on individual counter entries 202-206, and/or combinations of the counter entries 202-206. For example, activity that occurs on the boundary between two adjacent address ranges 208-212 may not indicate significant activity in either of the ranges compared to activity as a whole. However, if two adjacent ranges 208-212 are considered together in view of other adjacent ranges (e.g., sum of counters 202 and 203 compared to sum of counters 205-206), this may be enough to raise the cache importance metric of the adjacent ranges together as a unit.
A number of the zone table entries 202-206 with the highest counts (e.g., “hot” range of LBAs) may also be linked to a sorted list 218. The list 218 is sorted by access counts, so that “hotness” attributes can be determined quickly. A cache analysis module (e.g., module 128 shown in
In response to some event, the entire zone table 200 may be decayed, e.g., by dividing all the counts in half. This avoids having the counters saturate at their maximum values over time. This may also cause more recent activity to have a greater influence in the sorted list 218. The event that triggers the decay may include any combination of the passage of time, the value of one or more counter entries 202-206, the sum of the counter entries, etc. Assuming this decay is applied evenly, it should not affect the sorted list 218, because the relative magnitude of the entries 202-206 should remain the same right after the entries are decayed. Some aspects of the entries 202-206 or list 218 may be stored elsewhere (e.g., database 132 in
The zone table 200 enables determining “hot” zones, e.g., those that are accessed more than most other zones. High spatial locality is detected by the presence of hot zones within particular ranges 208-212 that are not cooling significantly due to recent activity. For example, any of the ranges 208-212 may exhibit brief bursts of high activity over time, but those that sustain high activity over larger time spans (or at least more recently) may be more indicative of spatial locality of the recent activity. The restriction of the activity to a particular range 208-212 (or combination of ranges) provides clues about spatially concentrated activity, which may increase cache importance for those ranges.
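A short sketch of the decay step described above and of hot-zone selection follows. It continues the zone-table sketch given earlier; the total-count decay trigger and the list length are assumptions, and the hottest_zones helper is only an approximation of the sorted list 218.

```python
# Halving every counter keeps them from saturating while preserving relative
# order, so a sorted "hottest zones" view does not change at the decay point.
DECAY_TRIGGER_TOTAL = 1_000_000   # assumed trigger; time-based triggers also work

def maybe_decay(zone_counts):
    if sum(zone_counts.values()) >= DECAY_TRIGGER_TOTAL:
        for zone in zone_counts:
            zone_counts[zone] //= 2

def hottest_zones(zone_counts, n=8):
    # Rough equivalent of the sorted list 218: the n zones with the highest counts.
    return sorted(zone_counts, key=zone_counts.get, reverse=True)[:n]
```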
The use of a table 200 is only one example of how caching priority metrics may be determined. An alternate embodiment uses counting Bloom filters to track usage, and is shown in the block diagram of
In a conventional Bloom filter, an element cannot be removed from the filter. This is because two or more elements may set the same bit to one, and so changing a bit back to zero risks unintentionally removing other elements from the filter. To resolve this, a counting Bloom filter (CBF) uses an array of counters instead of a bit array to store the hash results. Adding an element to a CBF involves incrementing the counters for the k hash results instead of changing a bit from zero to one. Checking whether an element is in the CBF set is similar to a conventional Bloom filter, in that the counters for the element's k hash results are checked for non-zero values. Unlike a conventional Bloom filter, an element can be removed from a CBF by decrementing the appropriate counters.
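The following is a compact counting Bloom filter sketch matching the description above: k hash functions index an array of counters, adding increments, removing decrements, and membership requires all k counters to be non-zero. The counter array size, number of hashes, and the SHA-256-based hashing are assumptions made only to keep the sketch self-contained.

```python
import hashlib

class CountingBloomFilter:
    def __init__(self, num_counters=4096, num_hashes=3):
        self.counters = [0] * num_counters
        self.k = num_hashes

    def _indexes(self, key):
        # Derive k independent indexes by salting the key with the hash number.
        for i in range(self.k):
            digest = hashlib.sha256(f"{i}:{key}".encode()).digest()
            yield int.from_bytes(digest[:8], "little") % len(self.counters)

    def add(self, key):
        for idx in self._indexes(key):
            self.counters[idx] += 1

    def remove(self, key):
        for idx in self._indexes(key):
            if self.counters[idx] > 0:
                self.counters[idx] -= 1

    def contains(self, key):
        # Membership: every one of the k counters must be non-zero.
        return all(self.counters[idx] > 0 for idx in self._indexes(key))
```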
In
When first initialized, all CBFs 302-305 start off empty, and one is designated as the “current” CBF (e.g., CBF 305 in
When the current CBF 305 reaches a predefined fullness threshold (which may include a counter being at maximum, a total of the counters reaching a threshold, etc.), an empty CBF then becomes current. The fullness threshold may be defined as a maximum number of non-zero counts (e.g. 50% of CBF entries). Whenever the last remaining empty CBF is assigned as current, the oldest non-empty CBF is designated “emptying” (e.g., CBF 302 in
A CBF is considered to be “active” if it is not empty and not emptying, e.g., CBFs 303-305 are active in this example. A key 310 is considered to be a member of the host read history if at least one active CBF 303-305 has a non-zero value for all of its hashed counters. A key 310 may also be used to query 315 the active CBFs 303-305, e.g., to see if an LBA or LBA range has exhibited recent activity.
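A simplified sketch of this rotation scheme follows, reusing the CountingBloomFilter class from the earlier sketch. The bookkeeping is condensed: the gradual draining of the “emptying” CBF is left out, and a production implementation might maintain a running count of non-zero entries (as with the hash entry distribution discussed below) instead of rescanning counters.

```python
FULLNESS_FRACTION = 0.5   # e.g., 50% of counters non-zero

class CbfHistory:
    def __init__(self, num_filters=4, **cbf_args):
        self.filters = [CountingBloomFilter(**cbf_args) for _ in range(num_filters)]
        self.order = list(range(num_filters))   # oldest ... newest (current)
        self.emptying = None                    # index of CBF being drained

    def _is_full(self, cbf):
        nonzero = sum(1 for c in cbf.counters if c > 0)
        return nonzero / len(cbf.counters) >= FULLNESS_FRACTION

    def _empties(self):
        return [i for i, f in enumerate(self.filters)
                if i != self.emptying and not any(f.counters)]

    def add(self, key):
        current = self.order[-1]
        self.filters[current].add(key)
        if self._is_full(self.filters[current]):
            empties = self._empties()
            if empties:
                nxt = empties[0]
                self.order.remove(nxt)
                self.order.append(nxt)               # an empty CBF becomes current
                if len(empties) == 1:                # that was the last empty one
                    self.emptying = self.order[0]    # drain the oldest non-empty CBF

    def contains(self, key):
        # "Active" CBFs are those that are not empty and not emptying.
        return any(f.contains(key) for i, f in enumerate(self.filters)
                   if i != self.emptying and any(f.counters))
```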
As keys are added to the current CBF 305, a distribution 316 of hashed entry counts for that CBF may be updated. The hash entry distribution 316 tracks the number of hashed entry counters (vertical axis) in the current CBF with a specific count value (horizontal axis). The maximum value m on the horizontal axis may represent the maximum counter value, or some other threshold beyond which the CBF may be retired (e.g., 50% of maximum). The hash entry distribution 316 may be used to determine when the percentage of non-zero counts in the current CBF has exceeded the fullness threshold. Because the hash entry distribution 316 tracks the number of current CBF counts with a value of zero (value 316A), the number of non-zero counts is the total number of counts minus the number of zero counts 316A in the current CBF.
The hash entry distribution 316 may also be used to estimate spatial locality of host reads. When spatial locality is low, then the keys 310 added to the host read history are not repeated very often, and the corresponding hashed entries are evenly distributed. This may be expressed as a percentage (TP %) of non-zero count values that are less than a threshold (LCV). Exact definitions of TP and LCV can be tuned to the specific application. For example, low spatial locality may be defined if 95% of all non-zero hashed counts in the current CBF are less than three (TP %=95% and LCV=3).
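The following sketch expresses the low-spatial-locality test just described: locality is considered low when at least TP % of the non-zero hashed counters are below LCV. The example values (95% and 3) come from the text; treating an all-zero counter array as low locality is an assumption.

```python
TP_PERCENT = 95
LCV = 3

def has_low_spatial_locality(counters, tp_percent=TP_PERCENT, lcv=LCV):
    nonzero = [c for c in counters if c > 0]
    if not nonzero:
        return True   # assumption: no repeats observed at all implies low locality
    below = sum(1 for c in nonzero if c < lcv)
    return 100.0 * below / len(nonzero) >= tp_percent
```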
The hash entry distribution 316 can give a first-order approximation of whether or not the most recent host reads have low spatial locality. An additional enhancement may be provided by looking not only at the current distribution, but at a distribution history that allows spatial locality trends to be spotted. The distribution history may be updated as each new key is added to the current CBF 305, and captures a partial snapshot of the hash entry distribution 316 at those times. The distribution history need not track a complete distribution of all count values over time, only those that are needed for the spatial locality computation. For example, only the total non-zero counts and the total non-zero counts <LCV may be tracked. A similar tracking of history may be performed with the sorted list 218 shown in
In
The test 402 involves determining 404 whether the current hash entry distribution indicates that TP % of non-zero count values are < LCV. If this determination 404 is true, then the output 406 of the procedure is low spatial locality. Otherwise, the oldest distribution history entry (or any combination of older histories) is compared 408 to the current hash entry distribution. Using this comparison, three determinations 410-412 are made.
The first determination 410 is whether there have been additions causing a net increase in the number of non-zero entries. The second determination 411 is whether those net increases have been to count values < LCV. The third determination 412 is whether TP % of the net increases to non-zero counts have been to count values < LCV. If the result of all three determinations 410-412 is “yes,” then the output 418 of the procedure is a trend towards low spatial locality. If the result of any of the three determinations 410-412 is “no,” then the output 416 of the procedure is that there is not low spatial locality, nor a trend toward it.
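One way this procedure might be coded is sketched below. Each snapshot holds only the two totals the text says are needed; the dictionary keys and the three return labels are assumptions used for illustration.

```python
def spatial_locality_trend(current, oldest, tp_percent=95):
    """current/oldest are dicts with 'nonzero' and 'nonzero_below_lcv' totals."""
    # Direct test 404 on the current distribution.
    if current["nonzero"] and \
       100.0 * current["nonzero_below_lcv"] / current["nonzero"] >= tp_percent:
        return "low"
    net_nonzero = current["nonzero"] - oldest["nonzero"]
    net_below = current["nonzero_below_lcv"] - oldest["nonzero_below_lcv"]
    if (net_nonzero > 0                                          # determination 410
            and net_below > 0                                    # determination 411
            and 100.0 * net_below / net_nonzero >= tp_percent):  # determination 412
        return "trending-low"
    return "not-low"
```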
The CBF arrangement shown in
The bottom row in table 502 represents the sum of counters across each CBF for this key. In this example, all CBFs 0-2 are given the same weight; however, in some implementations the most current active CBFs may be given a weight value that is higher than that of less recent active CBFs. In one variation, if any hash counter value for a given key is zero, then all counts for that CBF are set to zero, because a zero in any of the counters in rows 0-2 indicates the key has not been inserted into that CBF. For each key added, the minimum of the summed count values from all active CBFs is defined as the estimated repeat count (ERC) for that key. As keys are added, the ERC over all keys is tracked. In this example, ERC=4 for key 504, because that is the minimum count sum across all CBFs in the table 502 (lowest value of the SUM row in table 502). An alternate procedure for obtaining ERC may involve summing just the minimum value in each row of table 502. This alternate approach would yield ERC=3 for key 504, because the minimum value for each row is one.
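A sketch of the primary ERC computation follows. The input lays out, for one key, the counter value at each of the key's hash positions in each active CBF (rows are CBFs, columns are hash positions); the specific count values are illustrative and are not the ones from table 502.

```python
def estimated_repeat_count(counts):
    # Any zero counter means the key was never inserted into that CBF,
    # so that CBF's row contributes nothing.
    rows = [row if all(c > 0 for c in row) else [0] * len(row) for row in counts]
    # Sum each hash position across CBFs, then take the minimum of those sums.
    column_sums = [sum(col) for col in zip(*rows)]
    return min(column_sums)

counts = [
    [2, 1, 3],   # CBF 0
    [1, 2, 1],   # CBF 1
    [1, 1, 2],   # CBF 2
]
erc = estimated_repeat_count(counts)   # column sums are (4, 4, 6), so ERC = 4
```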
The ERC derived as above may be used to determine read hotness. For example, three levels of hotness may be defined: 1. Cold (not present in CBFs); 2. Tepid (present in CBFs, but infrequently requested); and 3. Hot (present in CBFs, and frequently requested). Any combination of absolute and relative metrics may be used to determine this metric. For example, a key may be “cold” when ERC=0, and “hot” when ERC>ERCh, ERCh being a predefined threshold that is tuned for the implementation. The key is “tepid” if 0<ERC≦ERCh. In another example, the key may be considered “hot” when it falls within the top Nth-percentile of non-zero hash entry values up to the maximum ERC. Again, N is tuned for a particular implementation, and the key is “tepid” if above zero but below this percentile.
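The threshold-based variant of this classification can be expressed as a short sketch; the value of ERCh used here is an arbitrary assumption, since the text leaves it to be tuned per implementation.

```python
ERC_HOT_THRESHOLD = 8   # ERCh, tuned per implementation (assumed value)

def read_hotness(erc, erc_hot=ERC_HOT_THRESHOLD):
    if erc == 0:
        return "cold"      # not present in the CBFs
    if erc > erc_hot:
        return "hot"       # present and frequently requested
    return "tepid"         # present but infrequently requested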
In reference now to
The criteria determined at 512 are indicative of future read requests of a second logical block address associated with the first logical block address. These criteria may be used to cache the second address in a non-volatile (e.g., secondary) cache if the criteria meet a threshold. The criteria may be indicative of a recent proximity in time of the host read operations and/or a closeness of address values of the host read operations. In one arrangement, the non-volatile cache stores read-only data. In such a case, an importance of the criteria is reduced based on write activity associated with the first logical block address.
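An end-to-end sketch of this procedure is given below, tying together the earlier zone-table and importance-metric sketches: a host read of a first LBA range is tracked, criteria are evaluated, and an associated second range is copied to the non-volatile cache when the criteria meet a threshold. The objects main_store and nv_cache, the read_range method, the choice of the next zone as the associated range, and the placeholder locality and penalty estimates are all hypothetical.

```python
def service_host_read(start_lba, length, main_store, nv_cache):
    record_read(start_lba, length)                   # track the host read operation
    zone = start_lba // ZONE_SIZE
    candidate = zone + 1                             # an associated "second" range
    score = cache_importance(read_count=zone_counts[zone],
                             spatial_locality=0.5,   # placeholder estimate
                             access_penalty=10.0,    # placeholder estimate
                             write_count=0)
    if score >= PROMOTE_THRESHOLD and candidate not in nv_cache:
        # Copy the associated range from main storage into the non-volatile cache
        # (read_range is a hypothetical accessor for the main store).
        nv_cache[candidate] = main_store.read_range(candidate * ZONE_SIZE, ZONE_SIZE)
    return main_store.read_range(start_lba, length)  # serve the host read
```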
In another example, the procedure may optionally involve maintaining 514 a plurality of data structures storing indicators. Each of the data structures may store indicators of one or more of A) recent operations affecting LBAs associated with the structures (e.g., zone table as shown in
The various embodiments described above may be implemented using circuitry and/or software modules that interact to provide particular results. One of skill in the computing arts can readily implement such described functionality, either at a modular level or as a whole, using knowledge generally known in the art. For example, the flowcharts illustrated herein may be used to create computer-readable instructions/code for execution by a processor. Such instructions may be stored on a computer-readable medium and transferred to the processor for execution as is known in the art. The structures and procedures shown above are only a representative example of embodiments that can be used to facilitate selection of data for a non-volatile cache in data storage devices as described above.
The foregoing description of the example embodiments has been presented for the purposes of illustration and description. It is not intended to be exhaustive or to limit the inventive concepts to the precise form disclosed. Many modifications and variations are possible in light of the above teaching. Any or all features of the disclosed embodiments can be applied individually or in any combination; they are not meant to be limiting, but purely illustrative. It is intended that the scope be limited not with this detailed description, but rather determined by the claims appended hereto.