Claims
- 1. A method of managing memory units using an integrated memory management structure, comprising:
assigning memory units to one or more positions within a buffer memory defined by said integrated structure; subsequently reassigning said memory units from said buffer memory to one or more positions within a cache memory defined by said integrated structure; and subsequently removing said memory units from assignment to a position within said cache memory; wherein said assignment, reassignment and removal of said memory units are based on one or more memory state parameters associated with said memory units.
- 2. The method of claim 1, wherein said cache memory comprises a free pool memory, and wherein said subsequently removing comprises subsequently removing said memory units from assignment to a position within said free pool memory.
- 3. The method of claim 2, wherein said assignment and reassignment of said memory units is managed and tracked by a processor or group of processors in an integrated manner.
- 4. The method of claim 2, wherein said assignment and reassignment of said memory units is managed using identifier manipulation.
- 5. The method of claim 2, further comprising making one or more of the following reassignments of said memory units within said structure prior to removal of said memory units from said free pool:
reassigning said memory units between multiple positions within said buffer memory; or reassigning said memory units from said cache memory or from said free pool memory to one or more positions within said buffer memory; or reassigning said memory units between multiple positions within said cache memory; or reassigning said memory units between said cache memory and said free pool memory; and wherein said reassignments of said memory units are based on said one or more memory state parameters.
- 6. The method of claim 5, wherein said one or more memory state parameters comprise at least one of recency, frequency, popularity, aging time, sitting time (ST), memory unit size, operator assigned keys, or a combination thereof.
- 7. The method of claim 5, wherein assignment to said buffer memory and reassignment to positions within said buffer memory is made based on changes in an active connection count (ACC) that is greater than zero; and wherein said reassignment to positions within said cache memory or said free pool memory is made based on decrement of an active connection count (ACC) to zero.
- 8. The method of claim 5, wherein memory units having an active connection count (ACC) greater than zero are maintained within said buffer memory; and wherein memory units having an active connection count (ACC) equal to zero are maintained within said cache memory or free pool memory, or are removed from said free pool memory.
- 9. The method of claim 8, wherein said active connection count (ACC) associated with each memory unit is tracked by said processor or group of processors; and wherein said processor or group of processors manages said assignment and reassignment of said memory units in an integrated manner based at least partially thereon.
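Purely as an illustrative sketch of the placement rule recited in claims 7-9 (the names `MemoryUnit` and `place_unit` are hypothetical, not part of the claims), a unit with a positive active connection count (ACC) is maintained in buffer memory, while a unit whose ACC has decremented to zero is held in cache or free pool memory:

```python
# Illustrative sketch only; all names here are hypothetical.
from dataclasses import dataclass

@dataclass
class MemoryUnit:
    name: str
    acc: int  # active connection count (ACC), tracked per unit (claim 9)

def place_unit(unit: MemoryUnit) -> str:
    # Claim 8: ACC > 0 -> buffer memory; ACC == 0 -> cache or free pool.
    return "buffer" if unit.acc > 0 else "cache_or_free_pool"
```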
- 10. The method of claim 5, wherein said buffer memory comprises two or more sequentially ascending buffer memory queues, wherein said free pool memory comprises at least one free pool memory queue corresponding to the lowermost of said sequentially ascending buffer queues, and wherein said cache memory comprises at least one cache memory queue corresponding to another of said buffer memory queues; and wherein said method further comprises:
assigning and reassigning memory units between the queues of said buffer memory based on the relative frequency of requests for access to a given memory unit; reassigning memory units between said buffer memory and said cache or free pool memories based on relative recency of requests for access to a given memory unit; assigning and reassigning memory units between the queues of said cache memory and said free pool memory based on the relative frequency of requests for access to a given memory unit; and removing assignment of said memory units from said free pool memory based on relative recency of requests for access to a given memory unit and need for additional memory for use by said buffer memory.
- 11. The method of claim 10, wherein said reassignment of said memory units from said buffer memory to said cache memory or free pool memory occurs from a buffer memory queue to a corresponding cache memory queue or free pool memory queue; wherein said reassignment of said memory units from said cache memory or said free pool memory to said buffer memory occurs from a cache memory queue or free pool memory queue to a corresponding or higher sequentially ascending buffer memory queue.
- 12. The method of claim 11, wherein said reassignment of said memory units between said buffer memory queues occurs from a lower buffer memory queue to a higher sequentially ascending buffer memory queue; wherein reassignment of said memory units between said cache memory queues occurs from a higher sequentially ascending cache memory queue to a lower cache memory queue or free pool memory queue.
- 13. The method of claim 12, wherein each said buffer memory queue, cache memory queue and free pool memory queue comprises an LRU queue; wherein each said cache memory queue has a fixed size; and wherein a reassignment of said memory units from the bottom of a higher sequentially ascending cache LRU memory queue to a lower cache LRU memory queue or free pool LRU memory queue occurs due to assignment of other memory units to the top of said higher sequentially ascending cache LRU memory queue.
- 14. The method of claim 13, wherein each said buffer memory queue and said free pool memory queue are flexible in size; wherein said buffer memory queues and said free pool memory queue share the balance of the memory not used by said fixed size cache memory queues; and wherein a removal of said memory units occurs from the bottom of said free pool LRU memory queue to transfer free memory space to one or more of said buffer memory queues to provide sufficient space for assignment of new memory units to one or more of said buffer memory queues.
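The queue arrangement of claims 10-14 can be sketched in code. This is an assumed illustration only (the class and method names are hypothetical): flexible-size buffer LRU queues, fixed-size cache LRU queues, and one flexible free-pool LRU queue corresponding to the lowermost buffer queue, with displacement from the bottom of a full cache queue cascading downward:

```python
# Illustrative sketch only: all names here are hypothetical, not part of
# the claims. Models claims 10-14: sequentially ascending buffer LRU
# queues (flexible size), fixed-size cache LRU queues, and one flexible
# free-pool LRU queue corresponding to the lowermost buffer queue.
from collections import OrderedDict

class TwoLayerStructure:
    def __init__(self, levels: int, cache_capacity: int):
        self.buffer = [OrderedDict() for _ in range(levels)]     # flexible (claim 14)
        self.free_pool = OrderedDict()                           # flexible (claim 14)
        self.cache = [OrderedDict() for _ in range(levels - 1)]  # fixed size (claim 13)
        self.cache_capacity = cache_capacity

    def insert_cache(self, level: int, key: str) -> None:
        """Insert at the top (MRU end) of cache queue `level`. If the fixed
        size is exceeded, the bottom (LRU) unit is displaced to the next
        lower cache queue, or to the free pool from the lowest cache queue
        (claims 13 and 16)."""
        q = self.cache[level]
        q[key] = True
        q.move_to_end(key)                     # top of the LRU queue
        if len(q) > self.cache_capacity:
            victim, _ = q.popitem(last=False)  # displaced from the bottom
            if level > 0:
                self.insert_cache(level - 1, victim)
            else:
                self.free_pool[victim] = True  # enters the free pool queue
```

Repeated insertions into an upper cache queue push its least-recently-used units down one queue at a time until they reach the free pool, mirroring the displacement chain of claims 13 and 16.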
- 15. A method of managing memory units using a multi-dimensional logical memory management structure, comprising:
providing two or more spatially-offset organizational sub-structures, said substructures being spatially offset in symmetric or asymmetric spatial relationship to form said multi-dimensional management structure, each of said sub-structures having one or more memory unit positions defined therein; and assigning and reassigning memory units between memory unit positions located in different organizational sub-structures, between positions located within the same organizational sub-structure, or a combination thereof; wherein said assigning and reassigning of memory units within said structure is based on multiple memory state parameters.
- 16. The method of claim 15, wherein said spatially-offset organizational sub-structures comprise two or more spatially-offset rows, columns, layers, queues, or any combination thereof.
- 17. The method of claim 15, wherein one or more of said spatially-offset organizational substructures are subdivided into two or more positions within the substructure, said positions being organized within the substructure in a sequentially ascending or descending manner.
- 18. The method of claim 15, wherein said assignments and reassignments of a memory unit within said multi-dimensional structure result in mapping a relative positioning of said memory unit that reflects an updated cache value of said memory unit relative to other memory units in said structure in terms of said multiple memory state parameters.
- 19. A method of managing memory units using an integrated two-dimensional logical memory management structure, comprising:
providing a first horizontal buffer memory layer comprising two or more sequentially ascending buffer memory positions; providing a first horizontal cache memory layer comprising two or more sequentially ascending cache memory positions, said first horizontal cache memory layer being vertically offset from said first horizontal buffer memory layer; horizontally assigning and reassigning memory units between said buffer memory positions within said first horizontal buffer memory layer based on at least one first memory state parameter; horizontally assigning and reassigning memory units between said cache memory positions within said first horizontal cache memory layer based on at least one second memory state parameter; and vertically assigning and reassigning memory units between said first horizontal buffer memory layer and said first horizontal cache memory layer based on at least one third memory state parameter.
- 20. The method of claim 19, wherein a lowermost memory position of the sequentially ascending cache memory positions of said horizontal cache memory layer comprises a free pool memory position; and further comprising removing said memory units from said free pool memory based on at least said second parameter and a need for additional memory for use by said buffer memory.
- 21. The method of claim 19, wherein reassignment of a memory unit from a first position to a second position within said structure is based on relative positioning of said first position within said structure and on said first and second parameters; and wherein said relative positioning of said second position within said structure reflects a renewed cache value of said memory units relative to other memory units in the structure in terms of at least two of said first, second and third parameters.
- 22. The method of claim 19, wherein each of said vertical and horizontal assignments and reassignments of a memory unit within said two-dimensional structure results in mapping a relative positioning of said memory unit that reflects an updated cache value of said memory unit relative to other memory units in said structure in terms of at least two of said first, second and third parameters without requiring individual values of said parameters to be explicitly recorded and recalculated.
- 23. The method of claim 20, wherein each of said vertical and horizontal assignments and reassignments of a memory unit within said two-dimensional structure results in mapping a relative positioning of said memory unit that reflects an updated relative cache value of said memory unit relative to other memory units in said structure in terms of at least two of said first, second and third parameters, and that allows removal of memory units having the least relative cache value in terms of at least two of said first, second and third parameters, without requiring individual values of said parameters to be explicitly recalculated and resorted.
- 24. The method of claim 20, wherein said first memory state parameter comprises a frequency parameter, wherein said second memory state parameter comprises a recency parameter, and wherein said third parameter comprises a connection status parameter.
- 25. The method of claim 24, wherein each said buffer memory position comprises a buffer memory queue; wherein each said cache memory position comprises a cache memory queue; and wherein intra-queue positioning occurs within each buffer memory queue based on a fourth memory state parameter; and wherein intra-queue positioning within each cache memory queue and free pool memory queue occurs based on a fifth memory state parameter.
- 26. The method of claim 25, wherein said fourth and fifth memory state parameters comprise recency parameters.
- 27. The method of claim 26, wherein said each buffer memory queue, cache memory queue and free pool memory queue comprise LRU memory queues.
- 28. The method of claim 26, further comprising:
horizontally assigning and reassigning memory units between said buffer memory queues within said first horizontal buffer memory layer based on the relative frequency of requests for access to a given memory unit; vertically reassigning memory units between said buffer memory queues and said cache or free pool memory queues based on status of active requests for access to a given memory unit; horizontally assigning and reassigning memory units between said cache memory queues and said free pool memory queues based on the relative recency of requests for access to a given memory unit; and removing said memory units from said free pool memory queue based on relative recency of requests for access to a given memory unit and need for additional memory for use by said buffer memory.
- 29. The method of claim 28, wherein said first parameter comprises a relative value of an active connection count (ACC) greater than zero that is associated with said memory units; and wherein said third memory state parameter comprises absence or presence of an active connection associated with said memory units.
- 30. The method of claim 20, wherein said assignments and reassignments are managed and tracked by a processor or group of processors in an integrated manner.
- 31. The method of claim 20, wherein said assignment and reassignment of said memory units is managed using identifier manipulation.
- 32. The method of claim 20, further comprising:
providing a second horizontal buffer memory layer comprising two or more sequentially ascending buffer memory positions, said second horizontal buffer memory layer being vertically offset from said first horizontal buffer memory layer; or providing a second horizontal cache memory layer comprising two or more sequentially ascending cache memory positions, said second horizontal cache memory layer being vertically offset from said first horizontal cache memory layer; horizontally assigning and reassigning memory units between said memory positions within said second horizontal buffer memory layer or said second horizontal cache memory layer based on at least one sixth memory state parameter; and vertically assigning and reassigning memory units between said second horizontal buffer memory layer or said second horizontal cache memory layer and said first horizontal buffer memory layer or said first horizontal cache memory layer based on at least one seventh memory state parameter.
- 33. An integrated two-dimensional logical memory management structure, comprising:
at least one horizontal buffer memory layer comprising two or more sequentially ascending buffer memory positions; and at least one horizontal cache memory layer comprising one or more sequentially ascending cache memory positions and a lowermost memory position that comprises a free pool memory position, said horizontal cache memory layer being vertically offset from said horizontal buffer memory layer.
- 34. The memory management structure of claim 33, wherein said each of said sequentially ascending cache memory positions and said free pool memory position uniquely correlates to one of said sequentially ascending buffer memory positions.
- 35. The memory management structure of claim 33, wherein memory units are operably assignable, reassignable and trackable between each of said buffer memory positions, cache memory positions and said free pool memory position by a processor or group of processors in an integrated manner.
- 36. The memory management structure of claim 35, wherein memory units are operably placeable within each of said buffer memory positions, cache memory positions or said free pool memory position using identifier manipulation.
- 37. A method of managing memory units, comprising:
assigning a memory unit to one of two or more memory positions based on a status of at least one memory state parameter; wherein said two or more memory positions comprise at least two positions within a buffer memory; and wherein said at least one memory state parameter comprises an active connection count (ACC).
- 38. The method of claim 37, wherein said two or more memory positions further comprise at least two positions within a cache memory, each of said two positions in said cache memory corresponding to a respective one of said two positions within said buffer memory.
- 39. The method of claim 37, wherein said assigning comprises assigning said memory unit to a first memory position based on a first status of said at least one memory state parameter; and reassigning said memory unit to a second memory position based on a second status of said at least one memory state parameter, said first status of said memory state parameter being different than said second status of said memory state parameter.
- 40. The method of claim 39, wherein said first memory position comprises a position within a first memory queue, and wherein said second memory position comprises a position within a second memory queue.
- 41. The method of claim 39, wherein said first memory position comprises a first position within said buffer memory, and wherein said second memory position comprises a second position within said buffer memory.
- 42. The method of claim 39, wherein said first memory position comprises a position within a first buffer memory queue, and wherein said second memory position comprises a position within a second buffer memory queue.
- 43. The method of claim 39, wherein said first memory position comprises a position within a buffer memory, and wherein said second memory position comprises a position within a cache memory or a free pool memory.
- 44. The method of claim 39, wherein said first memory position comprises a position within a buffer memory queue, and wherein said second memory position comprises a position within a cache memory queue or a free pool memory queue.
- 45. The method of claim 38, wherein said status of said memory state parameter comprises an active connection count (ACC) number associated with said memory unit; and wherein said buffer memory comprises a plurality of positions, each buffer memory position having a sequential identification value associated with said buffer memory position, and wherein said cache memory comprises a plurality of positions, each cache memory position having a sequential identification value associated with said cache memory position that correlates to a sequential identification value of a corresponding buffer memory position, each of said sequential identification values corresponding to a possible active connection count (ACC) number or range of possible active connection count (ACC) numbers that may be associated with a memory unit at a given time; and
wherein if said active connection count (ACC) number is greater than zero, said assigning comprises assigning said memory unit to a first buffer memory position that has a sequential identification value corresponding to the active connection count (ACC) number associated with said memory unit; and wherein said method further comprises leaving said memory unit in said first buffer memory position until a subsequent change in the active connection count (ACC) number associated with said memory unit, and reassigning said memory unit as follows upon a subsequent change in the active connection count (ACC) number associated with said memory unit: if said active connection count (ACC) number increases to a number corresponding to a sequential identification value of a second buffer memory position, then reassigning said memory unit from said first buffer memory position to said second buffer memory position; if said active connection count (ACC) number increases to a number corresponding to the same sequential identification value of said first buffer memory position, or decreases to a number that is greater than or equal to one, then leaving said memory unit in said first buffer memory position; or if said active connection count (ACC) number decreases to zero, then reassigning said memory unit from said first buffer memory position to a first cache memory position that has a sequential identification number that correlates to the sequential identification number of said first buffer memory position.
- 46. The method of claim 45, further comprising reassigning said memory unit from said first cache memory position in a manner as follows:
if said active connection count (ACC) number increases from zero to a number greater than zero, then reassigning said memory unit from said first cache memory position to a buffer memory position that has one higher sequential identification value than the sequential identification value associated with said first cache memory position, or to a buffer memory position that has the highest sequential identification number if said first cache memory position is associated with the highest sequential identification number; or if said active connection count (ACC) number remains equal to zero, then subsequently reassigning said memory unit to a cache memory position having one lower sequential identification value than the sequential identification value associated with said first cache memory position, or removing said memory unit from said cache memory if said first cache memory position is associated with the lowermost sequential identification number.
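The transitions of claims 45-46 amount to a small state machine, sketched below. Everything here (`MAX_ID`, `position_for_acc`, `next_location`) is an assumed illustration; the claims allow each sequential identification value to cover an ACC number or a range of numbers, which this sketch simplifies to a capped one-to-one mapping:

```python
# Hypothetical sketch of the reassignment rules in claims 45-46; MAX_ID,
# position_for_acc and next_location are assumed names, not claim terms.

MAX_ID = 4  # assumed highest sequential identification value

def position_for_acc(acc: int) -> int:
    """Map an ACC number to a buffer position ID (capped 1:1 mapping)."""
    return min(acc, MAX_ID)

def next_location(region, pos, acc):
    """Return (region, position) after an ACC change.

    Buffer (claim 45): stay in place on a decrease that leaves ACC >= 1,
    move up on an increase to a higher ID, and move to the correlated
    cache position when ACC reaches zero.
    Cache (claim 46): on reactivation, go one buffer ID higher (capped at
    the highest ID); while ACC stays zero, demote by one cache ID,
    removing the unit at the lowermost ID.
    """
    if region == "buffer":
        if acc == 0:
            return ("cache", pos)
        return ("buffer", max(pos, position_for_acc(acc)))
    if acc > 0:
        return ("buffer", min(pos + 1, MAX_ID))
    if pos == 1:
        return (None, None)  # removed from the lowermost cache position
    return ("cache", pos - 1)
```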
- 47. The method of claim 41, further comprising determining a sitting time (ST) value associated with the time that said memory unit has resided within said first buffer memory position and comparing said sitting time (ST) value with a resistance barrier time (RBT) value prior to reassigning said memory unit from said first buffer memory position to said second buffer memory position; and leaving said memory unit within said first buffer memory position based on said comparison of said sitting time (ST) value with said resistance barrier time (RBT) value.
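The sitting-time guard of claim 47 can be sketched as a single comparison. This is an illustrative function under assumed names (`should_promote`, `entered_at`), not the claimed method itself:

```python
# Hypothetical sketch of claim 47's guard: a unit is only moved out of
# its current buffer position once its sitting time (ST) reaches the
# resistance barrier time (RBT); otherwise it is left in place.

def should_promote(entered_at: float, now: float, rbt: float) -> bool:
    sitting_time = now - entered_at  # ST: time resident in the position
    return sitting_time >= rbt       # ST below the RBT: leave in place
```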
- 48. The method of claim 43, further comprising determining a file size value associated with said memory unit and comparing said file size value with a file size threshold value prior to reassigning said memory unit from said buffer memory position to a cache memory position; and assigning said memory unit to said free pool memory position rather than a cache memory position based on said comparison of said file size value and said file size threshold value.
- 49. The method of claim 45, wherein each buffer memory position and each cache memory position comprises an LRU queue.
- 50. The method of claim 46, wherein each buffer memory position comprises an LRU buffer queue having a flexible size; and wherein the cache memory position having the lowermost sequential identification value comprises an LRU free pool queue having a flexible size; wherein each cache memory position having a sequential identification value greater than the lowermost sequential identification number comprises an LRU cache queue having a fixed size, with the total memory size represented by said LRU buffer queues, said LRU cache queues and said LRU free pool queue being equal to a total memory size of a buffer/cache memory; and
wherein said reassignment of said memory unit from said first cache memory position to a cache memory position having one lower sequential identification value occurs due to LRU queue displacement to the bottom and out of said respective fixed size LRU cache queue; and wherein said removal of said memory unit from said cache memory position having the lowermost sequential identification number occurs due to LRU queue displacement of said memory unit to the bottom of said LRU free pool queue and subsequent reuse of buffer/cache memory associated with said memory unit at the bottom of said flexible LRU free pool queue for a new memory unit assigned from external storage to a buffer memory position.
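The reclamation path at the end of claim 50 (and claim 14) can be sketched as follows; the names here are illustrative only. A new unit arriving from external storage reuses the memory of the unit at the bottom (LRU end) of the flexible free-pool queue:

```python
# Hypothetical sketch of the reclamation path in claims 14 and 50: when a
# new unit is assigned from external storage, space is reclaimed from the
# bottom (LRU end) of the flexible free-pool queue and the new unit is
# assigned to a buffer queue. Names are illustrative, not claim terms.
from collections import OrderedDict

def admit_new_unit(free_pool: OrderedDict, buffer_q: OrderedDict, key: str) -> str:
    """Admit `key` to the buffer queue, evicting the LRU free-pool unit if
    one exists; returns the evicted key, or '' if the free pool was empty."""
    evicted = ""
    if free_pool:
        evicted, _ = free_pool.popitem(last=False)  # bottom of free pool LRU
    buffer_q[key] = True  # new unit assigned to the buffer layer
    return evicted
```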
- 51. The method of claim 37, wherein said assignment and reassignment of said memory units is managed and tracked by a processor or group of processors in an integrated manner.
- 52. The method of claim 37, wherein said assignment and reassignment of said memory units is managed using identifier manipulation.
- 53. The method of claim 37, wherein said method further comprises assigning said memory unit to said one of two or more memory positions based at least partially on the status of a flag associated with said memory unit.
- 54. The method of claim 53, wherein said flag represents a priority class associated with said memory unit.
- 55. The method of claim 37, wherein said memory units comprise memory blocks.
- 56. A method for managing content in a network environment comprising:
determining the number of active connections associated with content used within the network environment; and
referencing the content location based on the determined connections.
- 57. The method of claim 56, further comprising:
obtaining the content from an external storage device operably coupled to the network environment; referencing the content into an available used memory reference; incrementing a count parameter associated with the content upon determining an active connection status; and updating a time parameter associated with the content upon referencing the content.
- 58. The method of claim 56, further comprising:
locating the content in a free memory reference; referencing the content using an available used memory reference in response to determining the active connection status; and incrementing a count parameter associated with the content upon determining the active connection status.
- 59. The method of claim 58 further comprising updating a time parameter associated with the content upon referencing the content.
- 60. The method of claim 56, further comprising:
receiving a request for the content; and updating a count parameter in response to the request.
- 61. The method of claim 56, further comprising updating a time parameter associated with the content upon referencing the content.
- 62. The method of claim 61, further comprising determining a resistance barrier timer parameter value operable to reduce re-referencing of the content.
- 63. The method of claim 62, further comprising:
comparing the resistance barrier timer parameter to the time parameter; determining a second reference; and performing an action in response to comparing the resistance barrier timer parameter value to the time parameter value.
- 64. The method of claim 63, further comprising maintaining the reference to the content upon determining a timer parameter value that is less than the resistance barrier timer value.
- 65. The method of claim 56, further comprising:
detecting an active connection for referencing the content; determining the reference of the content; comparing a timer value to a resistance timer barrier value; and processing the content in response to the comparison.
- 66. The method of claim 65, further comprising:
maintaining the reference if the timer value is less than the resistance timer barrier value; and incrementing a counter in response to detecting an active connection.
- 67. The method of claim 65, further comprising:
re-referencing the content to a second reference; incrementing a counter associated with the content; and updating the time parameter associated with the content.
- 68. The method of claim 65, further comprising:
maintaining the content using the reference upon determining the reference is associated with a used cache memory; and incrementing a counter associated with the content.
- 69. The method of claim 65, further comprising:
re-referencing the content to a used memory upon detecting a time parameter value less than a resistance timer barrier value; setting a counter to a value of one; and updating the time parameter upon re-referencing the content.
- 70. The method of claim 65, further comprising:
re-referencing the content to a used memory or a second used memory upon determining a time parameter value greater than or equal to the resistance timer barrier value; setting a counter to a value of one; and updating the time parameter upon re-referencing the content.
- 71. The method of claim 56, further comprising:
detecting a closed connection associated with accessing the content; determining the reference associated with the content; and decrementing a count value associated with the content in response to the closed connection.
- 72. The method of claim 71, further comprising:
determining the count value associated with the content; re-referencing the content in response to determining count value equal to zero; and updating a time parameter upon re-referencing the content.
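Connection-event handling as recited in claims 57-58 and 71-72 can be sketched as below; `ContentEntry` and its fields are illustrative names, not claim terms. Opening a connection increments the count parameter and updates the time parameter; closing decrements the count, and the content is re-referenced toward free memory once the count reaches zero:

```python
# Hypothetical sketch of the connection-event handling in claims 57-58
# and 71-72; all names here are illustrative.

class ContentEntry:
    def __init__(self):
        self.count = 0        # active connection count parameter
        self.last_ref = 0.0   # time parameter
        self.region = "free"  # 'used' or 'free' memory reference

    def on_open(self, now: float) -> None:
        self.count += 1       # claims 57/58: increment on active connection
        self.last_ref = now   # time parameter updated upon referencing
        self.region = "used"  # available used memory reference

    def on_close(self, now: float) -> None:
        self.count = max(0, self.count - 1)  # claim 71: decrement on close
        if self.count == 0:
            self.region = "free"  # claim 72: re-reference at count zero
            self.last_ref = now   # time parameter updated on re-reference
```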
- 73. A network processing system operable to process information communicated via a network environment comprising:
a network processor operable to process network communicated information; and a memory management system operable to reference the information based upon a connection status associated with the information.
- 74. The system of claim 73, wherein the memory management system comprises:
a first used memory reference operable to reference the information in response to determining an active connection status; and
a first free memory reference operably associated with the first used memory reference and operable to provide a reference to the content in response to determining the active connection status.
- 75. The system of claim 74, further comprising:
a second used memory reference coupled to the first used memory reference and the first free memory reference; and a second free memory reference coupled to the second used memory reference and the first free memory reference.
- 76. The system of claim 75, further comprising the second used memory reference operable to reference content referenced by the first used memory reference and the first free memory reference based upon a parameter associated with the content.
- 77. The system of claim 75, further comprising the second free memory reference operable to reference content referenced by the second used memory reference based on a connection status associated with the content.
- 78. The system of claim 75, further comprising the second free memory reference operable to provide a reference to the content to the first free memory reference based upon a parameter associated with the content.
- 79. The system of claim 73, further comprising the memory operable to reference content based on a time parameter associated with the information.
- 80. The system of claim 73, further comprising the memory operable to reference content based on a resistance time barrier value associated with one or more memory references.
- 81. A method for managing content within a network environment comprising:
determining the number of active connections associated with content used within the network environment; referencing the content based on the determined connections; locating the content in a memory; referencing the content using an available free memory reference; and incrementing an active count parameter associated with the content upon detecting a new connection.
Parent Case Info
[0001] This application claims priority from Provisional Application Serial No. 60/246,445, filed on Nov. 7, 2000, entitled “SYSTEMS AND METHODS FOR PROVIDING EFFICIENT USE OF MEMORY FOR NETWORK SYSTEMS,” and from Provisional Application Serial No. 60/246,359, filed on Nov. 7, 2000, entitled “CACHING ALGORITHM FOR MULTIMEDIA SERVERS,” the disclosures of each being incorporated herein by reference.
Provisional Applications (2)
| Number | Date | Country |
| --- | --- | --- |
| 60/246,445 | Nov 2000 | US |
| 60/246,359 | Nov 2000 | US |