Claims
- 1. A method of managing memory units, comprising assigning a memory unit of an over-size data object to one of two or more memory positions based on a status of at least one first memory parameter that reflects the number of anticipated future requests for access to said memory unit, the elapsed time until receipt of a future request for access to said memory unit, or a combination thereof.
- 2. The method of claim 1, wherein said assigning comprises assigning said memory unit to a first memory position based on a first status of said at least one first memory parameter; and reassigning said memory unit to a second memory position based on a second status of said at least one first memory parameter, said first status of said first memory parameter being different than said second status of said first memory parameter.
- 3. The method of claim 2, wherein said first memory position comprises a position within a first memory queue, and wherein said second memory position comprises a position within a second memory queue.
- 4. The method of claim 2, wherein said first memory position comprises a first position within a buffer memory, and wherein said second memory position comprises a second position within said buffer memory.
- 5. The method of claim 2, wherein said first memory position comprises a position within a first buffer memory queue, and wherein said second memory position comprises a position within a second buffer memory queue.
- 6. The method of claim 1, wherein said assigning is also based on a status of at least one second memory parameter that reflects the number of memory units existing in the data interval between an existing viewer of said memory unit and a succeeding viewer of said memory unit, the difference in data consumption rate between said existing viewer and said succeeding viewer of said memory unit, or a combination thereof.
- 7. The method of claim 2, wherein said reassigning is also based on a status of at least one second memory parameter that reflects the number of memory units existing in the data interval between an existing viewer of said memory unit and a succeeding viewer of said memory unit, the difference in data consumption rate between said existing viewer and said succeeding viewer of said memory unit, or a combination thereof.
- 8. The method of claim 7, wherein said first memory position comprises a position within a buffer memory, and wherein said second memory position comprises a position within a cache memory or a free pool memory.
- 9. The method of claim 8, wherein initiation of said assignment between said first memory position and said second memory position is based on said status of said first memory parameter; and wherein the relative location of said second memory position within cache memory or free pool memory is based on said status of said second memory parameter.
- 10. The method of claim 9, wherein said first memory position comprises a position within a buffer memory queue, and wherein said second memory position comprises a position within a cache memory queue or a free pool memory queue.
- 11. The method of claim 1, wherein said two or more memory positions comprise at least two positions within a buffer memory; and wherein said at least one first memory parameter comprises a succeeding viewer count (FVC).
- 12. The method of claim 7, wherein said two or more memory positions comprise at least one position within a buffer memory and at least one position in a cache memory, and wherein said at least one first memory parameter comprises an active connection count (ACC), a succeeding viewer count (FVC) or a combination thereof; and wherein said at least one second memory parameter comprises an interval cost (IC).
- 13. The method of claim 9, wherein said two or more memory positions comprise at least one position within a buffer memory, at least one position in a cache memory, and at least one position in a free pool memory; and wherein said at least one first memory parameter comprises an active connection count (ACC), a succeeding viewer count (FVC) or a combination thereof; and wherein said at least one second memory parameter comprises an interval cost (IC).
- 14. The method of claim 1, wherein said method further comprises assigning said memory unit to said one of two or more memory positions based at least partially on the status of a flag associated with said memory unit.
- 15. The method of claim 14, wherein said flag represents a priority class associated with said memory unit.
- 16. The method of claim 1, wherein said memory units comprise memory blocks.
- 17. The method of claim 1, wherein said over-size data object comprises continuous media data.
- 18. The method of claim 1, wherein said over-size data object comprises non-continuous data.
- 19. A method of managing memory units within an information delivery environment, comprising assigning a memory unit of an over-size data object to one of a plurality of memory positions based on a status of at least one first memory parameter and a status of at least one second memory parameter; said first memory parameter reflecting the number of anticipated future requests for access to said memory unit, the elapsed time until receipt of a future request for access to said memory unit, or a combination thereof; and said second memory parameter reflecting the number of memory units existing in the data interval between an existing viewer of said memory unit and a succeeding viewer of said memory unit, the difference in data consumption rate between said existing viewer and said succeeding viewer of said memory unit, or a combination thereof.
- 20. The method of claim 19, wherein said plurality of memory positions comprise at least two positions within a buffer memory and at least two positions within a cache memory, each of said two positions in said cache memory corresponding to a respective one of said two positions within said buffer memory.
- 21. The method of claim 20, wherein said buffer memory comprises a plurality of positions, each buffer memory position having a sequential identification value associated with said buffer memory position, and wherein said cache memory comprises a plurality of positions, each cache memory position having a sequential identification value associated with said cache memory position that correlates to a sequential identification value of a corresponding buffer memory position, each of said sequential identification values corresponding to a possible sum of active connection count (ACC) and succeeding viewer count (FVC) or range thereof that may be associated with a memory unit at a given time; and
wherein if said active connection count (ACC) is greater than zero, said assigning comprises assigning said memory unit to a first buffer memory position that has a sequential identification value corresponding to the sum of active connection count (ACC) and succeeding viewer count (FVC) associated with said memory unit; and wherein said method further comprises leaving said memory unit in said first buffer memory position until a subsequent change in the sum of active connection count (ACC) and succeeding viewer count (FVC) associated with said memory unit, and reassigning said memory unit as follows upon a subsequent change in active connection count (ACC) or the sum of active connection count (ACC) and succeeding viewer count (FVC) associated with said memory unit:
if said sum of active connection count (ACC) and succeeding viewer count (FVC) increases to a number corresponding to a sequential identification value of a second buffer memory position, then reassigning said memory unit from said first buffer memory position to said second buffer memory position; if said sum of active connection count (ACC) and succeeding viewer count (FVC) increases to a number corresponding to the same sequential identification value of said first buffer memory position, or decreases to a number that is greater than or equal to one, then leaving said memory unit in said first buffer memory position; or if said active connection count (ACC) decreases to zero, then reassigning said memory unit from said first buffer memory position to a first cache memory position that has a sequential identification value that correlates to the sequential identification value of said first buffer memory position, or that has a sequential identification value that correlates to the sequential identification value of a buffer memory position having a lower sequential identification value than said first buffer memory position.
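The buffer-position selection recited in claim 21 can be illustrated as a lookup from the ACC + FVC sum to a sequential queue identifier. The following is a minimal sketch only; the range table `BUFFER_QUEUE_RANGES` and the function name are illustrative assumptions, not part of the claims.

```python
# Illustrative sketch of claim 21's buffer-position selection.
# BUFFER_QUEUE_RANGES is an assumed configuration: each entry gives the
# inclusive range of ACC + FVC sums mapped to that sequential queue id.
BUFFER_QUEUE_RANGES = [(1, 1), (2, 3), (4, float("inf"))]

def select_buffer_queue(acc: int, fvc: int) -> int:
    """Return the sequential identification value of the buffer queue
    whose ACC + FVC range contains the memory unit's current sum.
    Assumes acc > 0, as claim 21 requires for buffer placement."""
    total = acc + fvc
    for qid, (lo, hi) in enumerate(BUFFER_QUEUE_RANGES):
        if lo <= total <= hi:
            return qid
    raise ValueError("ACC + FVC sum outside configured ranges")
```

Under this sketch, a unit whose sum later grows into a higher range would be reassigned to the higher queue, while a decrease that remains at one or more leaves it in place, matching the reassignment cases of claim 21.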
- 22. The method of claim 21, wherein a sequentially higher maximum interval cost (MIC) is associated with each sequentially lower cache memory position; and wherein when said active connection count (ACC) decreases to zero said method further comprises comparing an interval cost (IC) associated with said memory unit with a maximum interval cost (MIC) associated with a cache memory position that has a sequential identification value that correlates to the sequential identification value of said first buffer memory position; and
reassigning said memory unit from said first buffer memory position to a first cache memory position that comprises a cache memory position having a sequential identification value that correlates to the sequential identification value of said first buffer memory position if said interval cost (IC) associated with said memory unit is less than or equal to said maximum interval cost (MIC) associated with said cache memory position; or reassigning said memory unit from said first buffer memory position to a first cache memory position that is the sequentially highest cache memory position having an associated maximum interval cost (MIC) greater than said interval cost (IC) of said memory unit if said interval cost (IC) associated with said memory unit is greater than said maximum interval cost (MIC) of said cache memory position having a sequential identification value that correlates to the sequential identification value of said first buffer memory position.
- 23. The method of claim 22, further comprising reassigning said memory unit from said first cache memory position in a manner as follows:
if said active connection count (ACC) increases from zero to a number greater than zero, then reassigning said memory unit from said first cache memory position to a buffer memory position that has a sequential identification value corresponding to the sum of active connection count (ACC) and succeeding viewer count (FVC) associated with said memory unit; or if said active connection count (ACC) remains equal to zero, then subsequently reassigning said memory unit to a cache memory position having one lower sequential identification value than the sequential identification value associated with said first cache memory position, or removing said memory unit from said cache memory if said first cache memory position is associated with the lowermost sequential identification value.
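The demotion rule of claims 22 and 23, which compares the unit's interval cost (IC) against the maximum interval cost (MIC) of the correlated cache position and falls to a sequentially lower position when the IC is too high, can be sketched as follows. The `mic_by_position` list and the free-pool fallback are illustrative assumptions.

```python
def select_cache_position(buffer_qid: int, ic: float, mic_by_position: list) -> int:
    """Pick the cache position for a unit whose ACC has dropped to zero.

    mic_by_position[i] is the MIC of cache position i; per claim 22, a
    sequentially higher MIC is associated with each sequentially lower
    position (index 0 is the lowermost, treated here as the free pool).
    """
    # The correlated position accepts the unit if its IC is within that MIC.
    if ic <= mic_by_position[buffer_qid]:
        return buffer_qid
    # Otherwise fall to the sequentially highest position whose MIC
    # is greater than the unit's IC.
    for qid in range(buffer_qid - 1, -1, -1):
        if mic_by_position[qid] > ic:
            return qid
    return 0  # assumed fallback: lowermost position (the free pool)
```

The downward scan terminates at the first (highest) qualifying position because MIC is monotonically higher at lower positions.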
- 24. The method of claim 23, wherein said over-size data object comprises continuous media data; and wherein prior to removing said memory unit from said cache memory, said method further comprises using an external storage I/O admission policy to determine if sufficient external storage I/O capacity exists to serve succeeding viewers of said memory unit without interruption; and further comprises maintaining said memory unit in buffer/cache memory if said sufficient external storage I/O capacity does not exist.
- 25. The method of claim 23, wherein said over-size data object comprises continuous media data; and wherein said method further comprises maintaining a threshold size of memory allocated to memory units assigned to said cache memory position associated with the lowermost sequential identification value by identifying and reassigning memory units from other cache memory positions having higher sequential identification values as needed to maintain said threshold memory size.
- 26. The method of claim 22, wherein each buffer memory position and each cache memory position comprises an LRU queue.
- 27. The method of claim 23, wherein each buffer memory position comprises an LRU buffer queue having a flexible size; and wherein the cache memory position having the lowermost sequential identification value comprises an LRU free pool queue having a flexible size; wherein each cache memory position having a sequential identification value greater than the lowermost sequential identification value comprises an LRU cache queue having a fixed size, with the total memory size represented by said LRU buffer queues, said LRU cache queues and said LRU free pool queue being equal to a total memory size of a buffer/cache memory; and
wherein said reassignment of said memory unit from said first cache memory position to a cache memory position having one lower sequential identification value occurs due to LRU queue displacement to the bottom and out of said respective fixed size LRU cache queue; and wherein said removal of said memory unit from said cache memory position having the lowermost sequential identification value occurs due to LRU queue displacement of said memory unit to the bottom of said LRU free pool queue and subsequent reuse of buffer/cache memory associated with said memory unit at the bottom of said flexible LRU free pool queue for a new memory unit assigned from external storage to a buffer memory position.
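The fixed-size LRU cache queues of claim 27, where displacement out the bottom cascades a unit into the next sequentially lower queue and ultimately into the flexible free pool, might be sketched as below. The class name and the use of `OrderedDict` are implementation assumptions.

```python
from collections import OrderedDict

class FixedLRUQueue:
    """Illustrative LRU cache queue per claim 27: pushing onto the top of
    a full fixed-size queue displaces the bottom (least recent) unit into
    the next sequentially lower queue, if one is attached."""

    def __init__(self, capacity: int, lower: "FixedLRUQueue | None" = None):
        self.capacity = capacity  # a free pool queue would be flexible in
                                  # size; a large capacity stands in here
        self.lower = lower        # next sequentially lower queue, if any
        self.items = OrderedDict()  # oldest (bottom) first

    def push(self, unit_id: str):
        """Insert or refresh a unit at the top; return any displaced unit."""
        self.items[unit_id] = True
        self.items.move_to_end(unit_id)
        if len(self.items) > self.capacity:
            displaced, _ = self.items.popitem(last=False)  # bottom of LRU
            if self.lower is not None:
                self.lower.push(displaced)  # cascade to lower queue
            return displaced
        return None
```

A displacement from the bottom of the fixed cache queue lands at the top of the queue passed as `lower`, mirroring the claimed cascade toward the free pool.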
- 28. The method of claim 27, wherein said over-size data object comprises continuous media data; and wherein prior to reassigning said memory unit to said LRU free pool queue, said method further comprises using an external storage I/O admission policy to determine if sufficient external storage I/O capacity exists to serve succeeding viewers of said memory unit without interruption; and further comprises maintaining said memory unit in buffer/cache memory if said sufficient external storage I/O capacity does not exist.
- 29. The method of claim 27, wherein said over-size data object comprises continuous media data; and wherein said method further comprises maintaining a threshold size of memory allocated to memory units assigned to said LRU free pool queue by identifying and reassigning memory units from other LRU cache queues to said LRU free pool queue as needed to maintain said threshold memory size.
- 30. The method of claim 19, wherein said assignment of said memory units is managed and tracked by a processor or group of processors in an integrated manner.
- 31. The method of claim 19, wherein said assignment and reassignment of said memory units is managed using identifier manipulation.
- 32. The method of claim 27, wherein said assignment of said memory units is managed and tracked by a processor or group of processors in an integrated manner.
- 33. The method of claim 27, wherein said assignment and reassignment of said memory units is managed using identifier manipulation.
- 34. The method of claim 20, wherein said method further comprises assigning said memory unit to said one of a plurality of memory positions based at least partially on the status of a flag associated with said memory unit.
- 35. The method of claim 34, wherein said flag represents a priority class associated with said memory unit.
- 36. The method of claim 19, wherein said memory units comprise memory blocks.
- 37. The method of claim 19, wherein said over-size data object comprises continuous media data.
- 38. The method of claim 19, wherein said over-size data object comprises non-continuous data.
- 39. A method of managing memory units using an integrated memory management structure, comprising:
assigning memory units of an over-size data object to one or more positions within a buffer memory defined by said integrated structure; subsequently reassigning said memory units from said buffer memory to one or more positions within a cache memory defined by said structure or to a free pool memory defined by said structure; and subsequently removing said memory units from assignment to a position within said free pool memory; wherein said reassignment of said memory units from said buffer memory to one or more positions within said cache memory is based on the combination of at least one first memory parameter and at least one second memory parameter, wherein said first memory parameter reflects the value of maintaining said memory units within said cache memory in terms of future external storage I/O requests that may be eliminated by maintaining said memory units in said cache memory, and wherein said second memory parameter reflects the cost of maintaining said memory units within said cache memory in terms of the size of said memory units and duration of storage associated with maintaining said memory units within said cache memory.
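Claim 39's two parameters — value in avoided external storage I/O, cost in unit size and storage duration — are not given explicit formulas. One plausible reading is sketched here; both function names and the byte-second cost metric are assumptions, not claim language.

```python
def caching_value(io_requests_eliminated: int, cost_per_io: float = 1.0) -> float:
    """First parameter (claim 39 sketch): the value of keeping a unit in
    cache, taken as the external storage I/O requests caching would avoid."""
    return io_requests_eliminated * cost_per_io

def caching_cost(unit_size_bytes: int, hold_seconds: float) -> float:
    """Second parameter (claim 39 sketch): the cost of keeping a unit in
    cache, taken here as size multiplied by storage duration (byte-seconds)."""
    return unit_size_bytes * hold_seconds
```

A reassignment decision under the claim would weigh these two quantities in combination, e.g. preferring units with high value per unit cost.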
- 40. The method of claim 39, wherein said assignment and reassignment of said memory units is managed and tracked by a processor or group of processors in an integrated manner.
- 41. The method of claim 39, wherein said assignment and reassignment of said memory units is managed using identifier manipulation.
- 42. The method of claim 39, wherein said assignment of said memory units to one or more positions within a buffer memory is based at least in part on a status of at least one first memory parameter that reflects the number of anticipated future requests for access to said memory units.
- 43. The method of claim 39, wherein said subsequent reassignment of said memory units from said buffer memory to one or more positions within a cache memory or free pool memory is based at least in part on the number of memory units existing in the data interval between an existing viewer of said memory units and a succeeding viewer of said memory units, the difference in data consumption rate between said existing viewer and said succeeding viewer of said memory units, or a combination thereof.
- 44. The method of claim 39, wherein said subsequent removal of said memory units from assignment to a position within said free pool memory occurs to accommodate assignment of new memory units from external storage to a buffer memory position.
- 45. The method of claim 39, wherein said over-size data object comprises continuous media data; and wherein prior to removing said memory unit from assignment to said free pool memory, said method further comprises using an external storage I/O admission policy to determine if sufficient external storage I/O capacity exists to serve succeeding viewers of said memory unit without interruption; and further comprises maintaining said memory unit in buffer/cache memory if said sufficient external storage I/O capacity does not exist.
- 46. The method of claim 39, further comprising making one or more of the following reassignments of said memory units within said structure prior to removal of said memory units from said free pool:
reassigning said memory units between multiple positions within said buffer memory; or reassigning said memory units from said cache memory or from said free pool memory to one or more positions within said buffer memory; or reassigning said memory units between multiple positions within said cache memory; or reassigning said memory units between said cache memory and said free pool memory; and wherein said reassignments of said memory units are based at least in part on said first and second memory parameters.
- 47. The method of claim 46, wherein said assignment or said reassignment of said memory units to one or more positions within a buffer memory is based at least in part on a status of at least one first memory parameter that reflects the number of anticipated future requests for access to said memory unit;
wherein reassignment of said memory units from said buffer memory to one or more positions within a cache memory or free pool memory is based at least in part on the number of memory units existing in the data interval between an existing viewer of said memory unit and a succeeding viewer of said memory unit, the difference in data consumption rate between said existing viewer and said succeeding viewer of said memory unit, or a combination thereof; and wherein said subsequent removal of said memory units from assignment to a position within said free pool memory occurs to accommodate assignment of a new memory unit from external storage to a buffer memory position.
- 48. The method of claim 47, wherein said over-size data object comprises continuous media data; and wherein prior to removing said memory unit from assignment to said free pool memory, said method further comprises using an external storage I/O admission policy to determine if sufficient external storage I/O capacity exists to serve succeeding viewers of said memory unit without interruption; and further comprises maintaining said memory unit in buffer/cache memory if said sufficient external storage I/O capacity does not exist.
- 49. The method of claim 47, wherein initial assignment of said memory units from external storage to said buffer memory is made based on occurrence of an active connection associated with said memory units; wherein said reassignment of said memory units from said buffer memory to said cache memory or said free pool memory is made on occurrence of decrementation of an active connection count (ACC) to a value of zero; and wherein said reassignment of said memory units from said cache memory or said free pool memory to said buffer memory is made on occurrence of incrementation of said active connection count (ACC) to a value greater than zero.
- 50. The method of claim 49, wherein said active connection count (ACC) associated with each memory unit is tracked by a processor or group of processors; and wherein said processor or group of processors manages said assignment and reassignment of said memory units in an integrated manner based at least partially thereon.
- 51. The method of claim 49, wherein said buffer memory comprises two or more sequentially ascending buffer memory queues, wherein said free pool memory comprises at least one free pool memory queue corresponding to the lowermost of said sequentially ascending buffer queues, and wherein said cache memory comprises at least one cache memory queue corresponding to another of said buffer memory queues; and wherein said method further comprises:
assigning and reassigning memory units between the queues of said buffer memory based at least in part on the succeeding viewer count (FVC) associated with said memory units; reassigning memory units between said buffer memory and said cache or free pool memories based at least in part on the interval cost (IC) associated with said memory units; assigning and reassigning memory units between the queues of said cache memory and said free pool memory based on the relative frequency of requests for access to a given memory unit; and removing said memory units from said free pool memory based on relative recency of requests for access to a given memory unit and need for additional memory for use by said buffer memory.
- 52. The method of claim 51, wherein said reassignment of said memory units from said buffer memory to said cache memory or free pool memory occurs from a buffer memory queue to a corresponding or sequentially lower cache memory queue or free pool memory queue; wherein said reassignment of said memory units from said cache memory or said free pool memory to said buffer memory occurs from a cache memory queue or free pool memory queue to a corresponding or higher buffer memory queue.
- 53. The method of claim 52, wherein said reassignment of said memory units from said buffer memory to said cache memory or free pool memory occurs from a buffer memory queue to a corresponding or sequentially lower cache memory queue or free pool memory queue that is the sequentially highest cache memory or free pool queue having a maximum interval cost (MIC) that is greater than or equal to the interval cost (IC) associated with said memory units; and wherein said reassignment of said memory units from said cache memory or said free pool memory to said buffer memory occurs from a cache memory queue or free pool memory queue to a corresponding or higher buffer memory queue.
- 54. The method of claim 52, wherein said reassignment of said memory units between said buffer memory queues occurs from a lower buffer memory queue to a higher sequentially ascending buffer memory queue; wherein reassignment of said memory units between said cache memory queues occurs from a higher sequentially ascending cache memory queue to a lower cache memory queue or free pool memory queue.
- 55. The method of claim 54, wherein each said buffer memory queue, cache memory queue and free pool memory queue comprises an LRU queue; wherein each said cache memory queue has a fixed size; and wherein a reassignment of said memory units from the bottom of a higher sequentially ascending cache LRU memory queue to a lower cache LRU memory queue or free pool LRU memory queue occurs due to assignment of other memory units to the top of said higher sequentially ascending cache LRU memory queue.
- 56. The method of claim 55, wherein each said buffer memory queue and said free pool memory queue are flexible in size and share the balance of the memory not used by said cache memory queues; and wherein a removal of said memory units occurs from the bottom of said free pool LRU memory queue to transfer free memory space to one or more of said buffer memory queues to provide sufficient space for assignment of new memory units from external storage to one or more of said buffer memory queues.
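The reclamation step of claim 56 — removing units from the bottom of the flexible free pool LRU queue to hand memory back to the buffer queues — might look like the following sketch, with the free pool modeled as an ordered mapping (an implementation assumption).

```python
from collections import OrderedDict

def reclaim_from_free_pool(free_pool: OrderedDict, units_needed: int) -> list:
    """Evict up to units_needed units from the bottom (least recently used
    end) of the free pool LRU queue, per claim 56, so their memory can be
    reused for new units assigned from external storage to buffer queues."""
    evicted = []
    while free_pool and len(evicted) < units_needed:
        unit_id, _ = free_pool.popitem(last=False)  # bottom of the LRU
        evicted.append(unit_id)
    return evicted
```

Because the buffer and free pool queues share the balance of memory not held by the fixed cache queues, each eviction here directly frees space for a new buffer assignment.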
- 57. A method of managing memory units of an over-size data object using a multi-dimensional logical memory management structure, comprising:
providing two or more spatially-offset organizational sub-structures, said sub-structures being spatially offset in symmetric or asymmetric spatial relationship to form said multi-dimensional management structure, each of said sub-structures having one or more memory unit positions defined therein; and assigning and reassigning memory units of an over-size data object between memory unit positions located in different organizational sub-structures, between positions located within the same organizational sub-structure, or a combination thereof; wherein said assigning and reassigning of memory units of an over-size data object within said structure is based on multiple memory state parameters.
- 58. The method of claim 57, wherein said spatially-offset organizational sub-structures comprise two or more spatially-offset rows, columns, layers, queues, or any combination thereof.
- 59. The method of claim 57, wherein one or more of said spatially-offset organizational substructures are subdivided into two or more positions within the substructure, said positions being organized within the substructure in a sequentially ascending or descending manner.
- 60. The method of claim 57, wherein said assignments and reassignments of a memory unit within said multi-dimensional structure results in mapping a relative positioning of said memory unit that reflects an updated cache value of said memory unit relative to other memory units in said structure in terms of said multiple memory state parameters.
- 61. A method of managing memory units using an integrated two-dimensional logical memory management structure, comprising:
providing a first horizontal buffer memory layer comprising two or more sequentially ascending buffer memory positions; providing a first horizontal cache memory layer comprising one or more sequentially ascending cache memory positions and a lowermost memory position that comprises a free pool memory position, said first horizontal cache memory layer being vertically offset from said first horizontal buffer memory layer; horizontally assigning and reassigning memory units of an over-size data object between said buffer memory positions within said first horizontal buffer memory layer based on at least one first memory parameter; horizontally assigning and reassigning memory units of an over-size data object between said cache memory positions and said free pool memory position within said first horizontal cache memory layer based on at least one second memory parameter; and vertically assigning and reassigning memory units of an over-size data object between said first horizontal buffer memory layer and said first horizontal cache memory layer based on at least one third memory parameter.
- 62. The method of claim 61, wherein reassignment of a memory unit from a first position to a second position within said structure is based on relative positioning of said first position within said structure and on said first and second parameters; and wherein said relative positioning of said second position within said structure reflects a renewed cache value of said memory units relative to other memory units in the structure in terms of at least two of said first, second and third parameters.
- 63. The method of claim 61, wherein each of said vertical and horizontal assignments and reassignments of a memory unit within said two-dimensional structure results in mapping a relative positioning of said memory unit that reflects an updated cache value of said memory unit relative to other memory units in said structure in terms of at least two of said first, second and third parameters without requiring individual values of said parameters to be explicitly recorded and recalculated.
- 64. The method of claim 61, wherein each of said vertical and horizontal assignments and reassignments of a memory unit within said two-dimensional structure results in mapping a relative positioning of said memory unit that reflects an updated relative cache value of said memory unit relative to other memory units in said structure in terms of at least two of said first, second and third parameters, and that allows removal of memory units having the least relative cache value in terms of at least two of said first, second and third parameters, without requiring individual values of said parameters to be explicitly recalculated and resorted.
- 65. The method of claim 61, wherein said first memory parameter comprises a frequency parameter, wherein said second memory parameter comprises a recency parameter, and wherein said third parameter comprises a connection status parameter.
- 66. The method of claim 65, wherein each said buffer memory position comprises a buffer memory queue; wherein each said cache memory position comprises a cache memory queue; and wherein intra-queue positioning occurs within each buffer memory queue based on a fourth memory parameter; and wherein intra-queue positioning with each cache memory queue and free pool memory queue occurs based on a fifth memory parameter.
- 67. The method of claim 66, wherein said fourth and fifth memory parameters comprise recency parameters.
- 68. The method of claim 67, wherein said each buffer memory queue, cache memory queue and free pool memory queue comprise LRU memory queues.
- 69. The method of claim 68, further comprising:
horizontally assigning and reassigning memory units between said buffer memory queues within said first horizontal buffer memory layer based at least in part on a value parameter that reflects the value of maintaining said memory units within said cache memory in terms of future external storage I/O requests that may be eliminated by maintaining said memory units in said buffer/cache memory; vertically reassigning memory units between said buffer memory queues and said cache or free pool memory queues based at least in part on a recency parameter that reflects the status of active requests for access to a given memory unit, and on a cost parameter that reflects the value of maintaining said memory units within said cache memory in terms of future external storage I/O requests that may be eliminated by maintaining said memory units in said buffer/cache memory, and reflects the cost of maintaining said memory units within said cache memory in terms of the size of said memory units and duration of storage associated with maintaining said memory units within said cache memory; horizontally assigning and reassigning memory units between said cache memory queues and said free pool memory queues based at least in part on the relative recency of requests for access to a given memory unit; and removing said memory units from said free pool memory queue based on relative recency of requests for access to a given memory unit and need for additional memory for use by said buffer memory.
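The two-dimensional structure of claims 61 through 69 positions each unit by layer (vertical) and sequential queue (horizontal), so a reassignment reduces to a coordinate update. The minimal map below is an illustrative assumption, not the claimed structure itself; layer names and method names are invented for the sketch.

```python
class TwoDimensionalMap:
    """Sketch of the claim-61 layered structure: each memory unit occupies
    a (layer, queue_id) coordinate, with the 'buffer' layer vertically
    offset from the 'cache' layer (whose queue 0 acts as the free pool)."""

    def __init__(self):
        self.position = {}  # unit_id -> (layer, queue_id)

    def assign(self, unit_id: str, layer: str, queue_id: int):
        self.position[unit_id] = (layer, queue_id)

    def move_horizontal(self, unit_id: str, new_queue_id: int):
        """Reassign within the same layer (e.g. on an FVC or recency change)."""
        layer, _ = self.position[unit_id]
        self.position[unit_id] = (layer, new_queue_id)

    def move_vertical(self, unit_id: str):
        """Reassign between layers (e.g. on an ACC transition to or from zero)."""
        layer, qid = self.position[unit_id]
        other = "cache" if layer == "buffer" else "buffer"
        self.position[unit_id] = (other, qid)
```

Because a unit's coordinates encode its relative cache value, the observation in claims 63 and 64 that no per-parameter recalculation or resorting is required follows directly: a move updates the mapping rather than recomputing parameter values.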
- 70. The method of claim 61, wherein said assignments and reassignments are managed and tracked by a processor or group of processors in an integrated manner.
- 71. The method of claim 61, wherein said assignment and reassignment of said memory units is managed using identifier manipulation.
- 72. The method of claim 61, further comprising:
providing a second horizontal buffer memory layer comprising two or more sequentially ascending buffer memory positions, said second horizontal buffer memory layer being vertically offset from said first horizontal buffer memory layer; or providing a second horizontal cache memory layer comprising two or more sequentially ascending cache memory positions, said second horizontal cache memory layer being vertically offset from said first horizontal cache memory layer; horizontally assigning and reassigning memory units between said memory positions within said second horizontal buffer memory layer or said second horizontal cache memory layer based on at least one sixth memory parameter; and vertically assigning and reassigning memory units between said second horizontal buffer memory layer or said second horizontal cache memory layer and said first horizontal buffer memory layer or said first horizontal cache memory layer based on at least one seventh memory parameter.
- 73. An integrated two-dimensional logical memory management structure for use in managing memory units of over-size data objects, comprising:
at least one horizontal buffer memory layer comprising two or more sequentially ascending continuous media data buffer memory positions; and at least one horizontal cache memory layer comprising one or more sequentially ascending over-size data object memory unit cache memory positions and a lowermost memory position that comprises an over-size data object memory unit free pool memory position, said first horizontal cache memory layer being vertically offset from said first horizontal buffer memory layer.
- 74. The memory management structure of claim 73, wherein each of said sequentially ascending cache memory positions and said free pool memory position uniquely correlates to one of said sequentially ascending buffer memory positions.
- 75. The memory management structure of claim 73, wherein memory units of said over-size data object are operably assignable, reassignable and trackable between each of said buffer memory positions, cache memory positions and said free pool memory position by a processor or group of processors in an integrated manner.
- 76. The memory management structure of claim 75, wherein said memory units are operably placeable within each of said buffer memory positions, cache memory positions or said free pool memory position using identifier manipulation.
- 77. A method for managing over-size data object content in a network environment comprising:
determining the number of active connections and anticipated future connections associated with said over-size data object content used within the network environment; and referencing the content location based on the determined connections and anticipated future connections.
- 78. The method of claim 77, further comprising:
obtaining the content from an external storage device operably coupled to the network environment; referencing the content into an available used memory reference corresponding to the sum of active connections and anticipated future connections.
- 79. The method of claim 77, further comprising:
locating the content in a free memory reference; and referencing the content using an available used memory reference in response to determining the existence of an active connection status.
- 80. The method of claim 79, further comprising determining an interval cost (IC) parameter associated with the content upon referencing the content.
- 81. The method of claim 77, further comprising determining a maximum interval cost (MIC) parameter value operable to reduce a cost of maintaining content in free memory.
- 82. The method of claim 81, further comprising:
determining a closure of all active connections associated with said over-size data object data; comparing the maximum interval cost (MIC) parameter to the interval cost (IC) parameter; and performing an action in response to comparing the maximum interval cost (MIC) parameter to the interval cost (IC) parameter.
- 83. The method of claim 82, further comprising re-referencing the content to a first free memory reference upon determining an interval cost (IC) parameter value that is less than or equal to the maximum interval cost (MIC) parameter value.
- 84. The method of claim 83, further comprising:
re-referencing the content to a second free memory reference upon determining an interval cost (IC) parameter value that is greater than the maximum interval cost (MIC) parameter value associated with the first free memory reference.
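The interval cost logic of claims 80 through 84 can be summarized in a short Python sketch. The function name `place_on_close` and the string return values are hypothetical labels for the first and second free memory references; the claims define only the comparison itself, performed once all active connections have closed.

```python
def place_on_close(interval_cost, max_interval_cost):
    """Hypothetical sketch of claims 82-84: upon closure of all active
    connections, compare the content's interval cost (IC) to the maximum
    interval cost (MIC) and choose a free memory reference.

    IC <= MIC: the content is cheap enough to keep, so it is
    re-referenced to the first free memory reference (claim 83);
    otherwise it is re-referenced to the second free memory
    reference (claim 84)."""
    if interval_cost <= max_interval_cost:
        return "first_free"
    return "second_free"
```

In effect the MIC acts as an admission threshold for the free memory tier, bounding the cost of retaining idle content as claim 81 recites.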
- 85. The method as recited in claim 77, further comprising:
detecting a closed connection associated with accessing the content; determining the reference associated with the content; and decrementing a count value associated with the content in response to the closed connection.
- 86. The method of claim 85, further comprising:
determining the count value associated with the content; and re-referencing the content in response to determining that the count value is equal to zero.
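The connection-count behavior of claims 85 and 86 amounts to reference counting, which a minimal Python sketch can illustrate. The class name `ContentReference` and the `"used"`/`"free"` location labels are hypothetical; the claims specify only the decrement on a closed connection and the re-referencing when the count reaches zero.

```python
class ContentReference:
    """Hypothetical sketch of claims 85-86: content carries a count of
    open connections. Each closed connection decrements the count; when
    the count reaches zero, the content is re-referenced, modeled here
    as moving it from a used memory reference to a free one."""

    def __init__(self, open_connections):
        self.count = open_connections
        self.location = "used"

    def close_connection(self):
        """Decrement the count for one closed connection (claim 85) and
        re-reference the content on a zero count (claim 86)."""
        self.count -= 1
        if self.count == 0:
            self.location = "free"
        return self.location
```

This keeps actively viewed content pinned in used memory while idle content migrates toward the free memory references described in claim 95.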
- 87. A network processing system operable to process information communicated via a network in an over-size data object environment comprising:
a network processor operable to process network communicated information in said over-size data object environment; and a memory management system operable to reference the information based upon a connection status, number of anticipated future connections, and cache storage cost associated with the information.
- 88. The system of claim 87, wherein the memory management system comprises:
a first used memory reference operable to reference the information in response to determining an active connection status; and a first free memory reference operably associated with the first used memory reference and operable to provide a reference to the content in response to determining the active connection status.
- 89. The system of claim 88, further comprising:
a second used memory reference logically coupled to the first used memory reference and the first free memory reference; and a second free memory reference logically coupled to the second used memory reference and the first free memory reference.
- 90. The system of claim 89, further comprising the second used memory reference operable to reference content referenced by the first used memory reference and the first free memory reference based upon a parameter associated with the content.
- 91. The system of claim 89, further comprising the second free memory reference operable to reference content referenced by the second used memory reference based on a connection status associated with the content.
- 92. The system of claim 89, further comprising the second free memory reference operable to provide a reference to the content to the first free memory reference based upon a parameter associated with the content.
- 93. The system of claim 87, further comprising the memory management system operable to reference content based on a cache value parameter associated with the information.
- 94. The system of claim 87, further comprising the memory management system operable to reference content based on a cache storage cost parameter associated with one or more memory references.
- 95. A method for managing over-size data object content within a network environment comprising:
determining the number of active connections and anticipated future connections associated with said over-size data object content used within the network environment; referencing the content based on the determined active and anticipated connections; locating the content in a memory; and re-referencing the content using an available free memory reference upon detecting closure of all active connections.
Parent Case Info
[0001] This application claims priority to Provisional Application Serial No. 60/246,359, filed on Nov. 7, 2000 and entitled “CACHING ALGORITHM FOR MULTIMEDIA SERVERS,” and to Provisional Application Serial No. 60/246,445, filed on Nov. 7, 2000 and entitled “SYSTEMS AND METHODS FOR PROVIDING EFFICIENT USE OF MEMORY FOR NETWORK SYSTEMS,” the disclosures of each of which are incorporated herein by reference.
Provisional Applications (2)
| Number | Date | Country |
| --- | --- | --- |
| 60246359 | Nov 2000 | US |
| 60246445 | Nov 2000 | US |