Claims
- 1. A method for controlling delivery of requested content by a system having resources capable of delivering the content, the method comprising:
receiving a request for content; polling the resources required to process the request for content to determine whether the resources are available to process the request for content; and reserving the resources available to process the request for content.
- 2. The method of claim 1 further comprising evaluating the request for content to identify the resources required to process the request for content.
- 3. The method of claim 1 further comprising:
compiling responses received from the polled resources which indicate availability of the resources to process the request; and evaluating the responses to determine whether the request for content can be processed.
- 4. The method of claim 1 further comprising queuing the request for content for processing by the reserved resources.
- 5. The method of claim 1 further comprising:
communicating with at least one monitoring agent operably coupled to the resources, the monitoring agent operable to determine a current workload of the resources; evaluating the current workload of the resources to determine whether the required resources are available to process the request for content; and generating a response indicative of the availability of the required resources based on the current workload.
- 6. The method of claim 1 further comprising evaluating one or more handling policies to determine the disposition of the request for content.
- 7. The method of claim 6 wherein disposition comprises queuing the request for content for processing.
- 8. The method of claim 6 wherein disposition comprises transferring the request for content to another system for processing.
- 9. The method of claim 6 wherein disposition comprises rejecting the request for content.
- 10. The method of claim 6 wherein disposition comprises rejecting the request for content if the request for content cannot be processed within a specified period of time.
- 11. The method of claim 1 further comprising:
polling at least one shared resource required to process the request for content; and reserving any available shared resource required to process the request for content.
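The poll-and-reserve method of claims 1–3 can be sketched in code. This is a minimal illustrative model only, not the patented implementation; the class and method names (`DeliveryController`, `handle`, the `"needs"` request field) are assumptions introduced for the sketch.

```python
from dataclasses import dataclass


@dataclass
class Resource:
    """A resource capable of delivering content, with a simple availability flag."""
    name: str
    available: bool = True


class DeliveryController:
    """Illustrative controller: evaluate the request, poll required resources,
    then reserve them if every polled resource reports itself available."""

    def __init__(self, resources):
        self.resources = {r.name: r for r in resources}
        self.reserved = []

    def identify_required(self, request):
        # Hypothetical mapping of a request onto the resources it needs.
        return request.get("needs", [])

    def poll(self, names):
        # Compile availability responses from each polled resource.
        return {n: self.resources[n].available for n in names}

    def handle(self, request):
        needed = self.identify_required(request)
        responses = self.poll(needed)
        if all(responses.values()):
            for n in needed:
                self.resources[n].available = False  # reserve the resource
                self.reserved.append(n)
            return "reserved"
        return "unavailable"
```

A reserved resource stays unavailable to later requests until released, which is what makes the eventual delivery deterministic rather than best-effort.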
- 12. A deterministic delivery system comprising:
a plurality of subsystems, each subsystem including at least one resource operable to process a portion of a request; a system monitor operably coupled to the plurality of subsystems; and the system monitor operable to receive a request to be processed, to poll at least one of the plurality of subsystems to determine whether resources required to process the request are available and to reserve the available resources required to process the request.
- 13. The system of claim 12 further comprising the system monitor operable to identify the resources required to process the request.
- 14. The system of claim 12 further comprising:
a monitoring agent operably coupled to each of the plurality of subsystems; and the monitoring agents operable to determine whether the at least one resource operable to process the request is available.
- 15. The system of claim 14 further comprising:
one or more shared resources operably coupled to the plurality of subsystems; and at least one monitoring agent operable to poll the one or more shared resources to determine whether the shared resources are available to process the request and to reserve the available shared resources.
- 16. The system of claim 15 further comprising the monitoring agent operable to reserve the at least one resource operable to process the request.
- 17. The system of claim 12 further comprising:
a communications path operably coupling the system monitor and one or more of the plurality of subsystems; and a monitoring agent operably coupled to the communications path, the monitoring agent operable to evaluate a workload of the communications path and to reserve at least a portion of the communications path for processing the request.
- 18. The system of claim 12 further comprising:
a data movement path operably coupled to the plurality of subsystems; and the data movement path operable to move data associated with the request between one or more of the plurality of subsystems.
- 19. The system of claim 12 further comprising the system monitor operable to evaluate one or more handling policies to determine proper disposition of the request.
- 20. The system of claim 19 further comprising at least one of the one or more handling policies operable to instruct the system monitor to reject the request for content when the request for content cannot be processed within a specified time period.
- 21. The system of claim 12 further comprising the plurality of subsystems including data center components.
- 22. The system of claim 12 further comprising the plurality of subsystems including components of a computing device.
- 23. A system for processing requests for content comprising:
a communications path; a plurality of subsystems operably coupled to the communications path, each of the plurality of subsystems having one or more resources operable to process at least a portion of a request for content; a monitoring agent operably coupled to each of the plurality of subsystems, each monitoring agent operable to monitor the one or more resources of each subsystem and to reserve at least a portion of the resources of each subsystem; and a system monitor operably coupled to the communications path, the system monitor operable to receive the request for content, to identify the resources required to process the request for content, to poll the monitoring agents of the subsystems having the resources required to process the request, to determine whether the resources required are available to process the request for content and to direct the monitoring agents operably coupled to the resources required to process the request for content to reserve the available resources required to process the request for content.
- 24. The system of claim 23 further comprising:
a data movement path operably coupled to the system monitor and the plurality of subsystems; and the data movement path operable to move data associated with the request for content between at least a portion of the plurality of subsystems.
- 25. The system of claim 23 further comprising one or more shared resources operably coupled to one or more of the plurality of subsystems.
- 26. The system of claim 25 further comprising:
at least one monitoring agent of the one or more subsystems operably coupled to the one or more shared resources; and the monitoring agent operable to determine whether the one or more shared resources are available to process the request for content and to reserve at least a portion of the one or more shared resources available to process the request for content.
- 27. The system of claim 23 further comprising the monitoring agents operable to determine a current workload of the resources required to process the request for content and to notify the system monitor of the availability of the resources required to process the request for content based on the current workload of the resources required.
- 28. The system of claim 23 further comprising the system monitor operable to evaluate one or more handling policies to determine disposition of the request for content.
- 29. The system of claim 28 further comprising at least one of the one or more handling policies operable to direct the system monitor to dispose of the request for content when the request for content cannot be processed within a specified period of time.
- 30. A method for processing a request for content comprising:
receiving a request for content; identifying one or more subsystems having resources required to process the request for content; polling the one or more subsystems to determine whether the resources required are available to process the request for content; evaluating responses received from the one or more subsystems based on availability of the resources required to process the request; and disposing of the request for content based on the evaluation of the responses.
- 31. The method of claim 30 further comprising:
reserving at least a portion of the available resources required to process the request for content; and queuing the request for content for processing by the reserved resources.
- 32. The method of claim 30 further comprising queuing the request for content for reevaluation of resource availability.
- 33. The method of claim 30 further comprising transferring the request for content to a system having resources available to process the request for content.
- 34. The method of claim 30 further comprising rejecting the request for content.
- 35. The method of claim 30 further comprising:
determining availability of the resources required to process the request for content; and generating a response indicative of the availability of the resources required.
- 36. The method of claim 30 further comprising:
polling one or more resources shared by the one or more subsystems to determine whether the shared resources are available to process the request for content; and reserving the shared resources available to process the request.
- 37. The method of claim 30 wherein the request for content further comprises a request for data.
- 38. The method of claim 30 wherein the request for content further comprises a request for services.
- 39. The method of claim 30 further comprising rejecting the request for content when the request for content cannot be processed within a specified period of time.
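The dispositions enumerated in claims 31–34 and 39 (queue, re-queue for reevaluation, transfer, reject, reject on deadline) can be sketched as a single policy function. This is an assumed, simplified model; the function name `dispose`, the `elapsed` argument, and the string return codes are illustrative inventions, not terms from the claims.

```python
def dispose(request, resources_available, queue, peers, elapsed, max_wait=5.0):
    """Illustrative handling policy for a request for content.

    resources_available: whether the required resources can be reserved now
    queue: processing queue for accepted requests
    peers: other systems the request could be transferred to (hypothetical)
    elapsed: seconds the request has already waited
    max_wait: specified period after which the request is rejected
    """
    if resources_available:
        queue.append(request)        # queue for processing by reserved resources
        return "queued"
    if elapsed > max_wait:
        return "rejected"            # cannot be processed within the time limit
    if peers:
        return "transferred:" + peers[0]  # hand off to another system
    return "requeued"                # re-queue for reevaluation of availability
```

The ordering of the checks is itself a policy choice; a real system could equally try transfer before enforcing the deadline.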
- 40. A deterministic delivery system comprising:
a system monitor; a plurality of subsystems operably coupled to the system monitor, each subsystem including at least one resource operable to process a portion of a request; the plurality of subsystems operable to generate and transmit a notification to the system monitor indicative of whether the at least one resource of each subsystem is available to take on additional processing; and the system monitor operable to accumulate the notifications received from the subsystems, to receive a request to be processed, to evaluate the notifications received from the plurality of subsystems to determine whether resources required to process the request are available and to reserve the available resources required to process the request.
- 41. The system of claim 40 further comprising the system monitor operable to reject the request if the notifications received from the plurality of subsystems indicate that the resources required to process the request are unavailable.
- 42. The system of claim 40 further comprising the system monitor operable to accept the request if the notifications received from the plurality of subsystems indicate that the resources required to process the request are available.
- 43. The system of claim 40 further comprising the system monitor operable to identify the resources required to process the request.
- 44. The system of claim 40 further comprising:
a monitoring agent operably coupled to each of the plurality of subsystems; and the monitoring agents operable to determine whether the at least one resource operable to process the request is available and to generate and transmit the notification indicative of whether the at least one resource is available.
- 45. The system of claim 44 further comprising:
one or more shared resources operably coupled to the plurality of subsystems; and at least one monitoring agent operable to determine whether the shared resources are available to process the request and to reserve the available shared resources.
- 46. The system of claim 45 further comprising the monitoring agent operable to reserve the at least one shared resource operable to process the request.
- 47. The system of claim 40 further comprising:
a communications path operably coupling the system monitor and one or more of the plurality of subsystems; and a monitoring agent operably coupled to the communications path, the monitoring agent operable to evaluate a workload of the communications path and to reserve at least a portion of the communications path for processing the request.
- 48. The system of claim 40 further comprising:
a data movement path operably coupled to the plurality of subsystems; and the data movement path operable to move data associated with the request between one or more of the plurality of subsystems.
- 49. A method for controlling delivery of requested content by a system having resources capable of delivering the content, the method comprising:
receiving a request for content; compiling notifications received from the resources which indicate availability of the resources to process the request; and reserving the resources available to process the request for content.
- 50. The method of claim 49 further comprising evaluating the request for content to identify the resources required to process the request for content.
- 51. The method of claim 49 further comprising evaluating the notifications to determine whether the request for content can be processed.
- 52. The method of claim 49 further comprising queuing the request for content for processing by the reserved resources.
- 53. The method of claim 49 further comprising:
communicating with at least one monitoring agent operably coupled to the resources, the monitoring agent operable to determine a current workload of the resources; evaluating the current workload of the resources to determine whether the required resources are available to process the request for content; and generating a notification indicative of the availability of the required resources based on the current workload.
- 54. The method of claim 49 further comprising evaluating one or more handling policies to determine the disposition of the request for content.
- 55. The method of claim 54 wherein disposition comprises queuing the request for content for processing.
- 56. The method of claim 54 wherein disposition comprises transferring the request for content to another system for processing.
- 57. The method of claim 54 wherein disposition comprises rejecting the request for content.
- 58. The method of claim 49 further comprising:
polling at least one shared resource required to process the request for content; and reserving any available shared resource required to process the request for content.
- 59. A network connectable information management system, comprising:
a plurality of processing engines, said processing engines adapted to manipulate information; and a system monitor in communication with said plurality of processing engines; wherein said information management system is connected to a network; and wherein said system monitor is configured to monitor a status of a parameter associated with at least one of said processing engines, and to manage manipulation of information in a deterministic manner based at least in part on a status of said parameter.
- 60. The system of claim 59, wherein said system comprises a network endpoint system; wherein said system monitor is configured to manage delivery of information to said network in a deterministic manner; and wherein one of said processing engines comprises a network processor.
- 61. The system of claim 60, wherein said network endpoint system is a content delivery system.
- 62. The system of claim 59, wherein said system comprises an intermediate node system; wherein said system monitor is configured to manage delivery of information to said network in a deterministic manner; and wherein one of said processing engines comprises a network processor.
- 63. The system of claim 62, wherein said intermediate node system is a network switch or network router.
- 64. The system of claim 59, wherein said plurality of processing engines and said system monitor communicate as peers in a peer to peer environment.
- 65. The system of claim 59, further comprising a distributed interconnect coupled to each of said processing engines and said system monitor.
- 66. The system of claim 65, wherein said distributed interconnect comprises a switch fabric.
- 67. The system of claim 59, wherein each of said plurality of processing engines is assigned separate information manipulation tasks in an asymmetrical multi-processor configuration.
- 68. The system of claim 67, wherein said plurality of processing engines include a processing engine that couples said system to said network.
- 69. The system of claim 67, wherein said plurality of processing engines comprise a network interface engine, a storage processing engine and an application processing engine.
- 70. The system of claim 69, wherein said plurality of processing engines further comprise a system management engine.
- 71. The system of claim 69, wherein said system monitor comprises a system management engine.
- 72. A network connectable information management system, comprising:
a plurality of processing engines, said processing engines adapted to manipulate information; and a system monitor in communication with said plurality of processing engines; wherein said plurality of processing engines and said system monitor communicate as peers in a peer to peer environment; wherein said information management system is adapted to deliver information to a network; and wherein said system monitor is configured to monitor a status of a parameter associated with at least one of said processing engines, and to manage delivery of information to said network in a deterministic manner based at least in part on a status of said parameter.
- 73. The system of claim 72, wherein said system monitor is adapted to monitor a status of a parameter associated with one or more of said individual processing engines, said status of each processing engine comprising current or future availability of resources for performing an information manipulation task by said processing engine.
- 74. The system of claim 73, wherein one or more of said processing engines comprises a subsystem module that includes a monitoring agent and said resources for performing an information manipulation task by said processing engine; said monitoring agent being adapted to monitor availability of said resources and to communicate said availability to said system monitor.
- 75. The system of claim 72, wherein said system monitor is adapted to manage delivery of said information in a deterministic manner by rejecting a request for information delivery to said network, by transferring a request for information delivery to another information management system connected to said system in a cluster, by re-queuing a request for information delivery for later reconsideration by said system, or a combination thereof.
- 76. The system of claim 72, wherein said system monitor is adapted to manage delivery of said information in a deterministic manner by selecting one or more of said processing engines to perform manipulation of information as required to effect said delivery of information.
- 77. The system of claim 76, wherein at least one of said processing engines is assignable to perform multiple information manipulation tasks; and wherein said system monitor is adapted to manage delivery of said information in a deterministic manner by assigning an information manipulation task to said processing engine as required to effect said delivery of information.
- 78. The system of claim 72, wherein said system monitor is adapted to manage delivery of said information in a deterministic manner by selecting one or more processing engines of another information management system connected to said system in a cluster to perform manipulation of information as required to effect said delivery of information.
- 79. The system of claim 72, wherein said system monitor is adapted to manage delivery of said information in a deterministic manner based at least in part on a parameter associated with a request for delivery of said information to said network.
- 80. The system of claim 79, wherein said parameter associated with a request comprises priority information associated with said request.
- 81. The system of claim 72, wherein said deterministic management enables accelerated system performance.
- 82. The system of claim 72, wherein said information comprises continuous content.
- 83. A method of managing information in a network connectable information management system, comprising:
monitoring a status of a parameter associated with at least one of a plurality of processing engines adapted to manipulate information in an information management system, said information management system being connected to a network; and managing manipulation of information in a deterministic manner based at least in part on a status of said parameter.
- 84. The method of claim 83, wherein said information management system comprises a network endpoint system, and wherein one of said processing engines comprises a network processor.
- 85. The method of claim 84, wherein said network endpoint system is a content delivery system.
- 86. The method of claim 83, wherein said system comprises an intermediate node system, and wherein one of said processing engines comprises a network processor.
- 87. The method of claim 86, wherein said intermediate node system is a network switch or network router.
- 88. The method of claim 83, wherein said plurality of processing engines communicate as peers in a peer to peer environment.
- 89. The method of claim 83, wherein said plurality of processing engines are coupled together with a distributed interconnect.
- 90. The method of claim 89, wherein said distributed interconnect comprises a switch fabric.
- 91. The method of claim 83, wherein each of said plurality of processing engines is assigned separate information manipulation tasks in an asymmetrical multi-processor configuration.
- 92. The method of claim 91, wherein said plurality of processing engines include a processing engine that couples said system to said network.
- 93. The method of claim 91, wherein said plurality of processing engines comprise a network interface engine, a storage processing engine and an application processing engine.
- 94. The method of claim 93, wherein said plurality of processing engines further comprise a system management engine.
- 95. The method of claim 94, wherein said managing manipulation of information is performed at least in part by said system management engine.
- 96. The method of claim 95, wherein said manipulation of information comprises the delivery of information to said network.
- 97. The method of claim 96, wherein said delivery of information comprises delivery of continuous content to said network.
- 98. A method of managing information in a network connectable information management system, comprising:
monitoring a status of one or more individual processing engines adapted to manipulate information in an information management system, said information management system being connected to a network, said status of each processing engine comprising current or future availability of resources for performing an information manipulation task by said processing engine; and managing manipulation of information in a deterministic manner based at least in part on a status of said resource availability.
- 99. The method of claim 98, wherein one or more of said processing engines comprises a subsystem module that includes a monitoring agent and the resources for performing an information manipulation task by said processing engine; wherein said monitoring agent is adapted to monitor and communicate availability of said resources; and wherein said method further comprises receiving a communication from said monitoring agent regarding availability of said resources.
- 100. The method of claim 98, wherein said method further comprises managing delivery of information to said network in a deterministic manner.
- 101. The method of claim 100, wherein said method further comprises managing delivery of information to said network by rejecting a request for information delivery to said network, transferring a request for information delivery to another information management system connected to said system in a cluster, re-queuing a request for information delivery for later reconsideration by said system, or a combination thereof.
- 102. The method of claim 100, wherein said method further comprises managing delivery of information to said network by selecting one or more of said processing engines to perform manipulation of information as required to effect said delivery of information.
- 103. The method of claim 102, wherein at least one of said processing engines is assignable to perform multiple information manipulation tasks; and wherein said method further comprises assigning an information manipulation task to said processing engine as required to effect said delivery of information.
- 104. The method of claim 100, wherein said method further comprises managing delivery of information to said network based at least in part on a parameter associated with a request for delivery of said information to said network.
- 105. The method of claim 104, wherein said parameter associated with a request comprises priority information associated with said request.
- 106. The method of claim 100, wherein said method further comprises managing delivery of information to said network based at least in part on future availability of resources for performing an information manipulation task by said processing engines.
- 107. The method of claim 106, wherein said method comprises monitoring the future availability of resources of multiple individual processing engines that are capable of performing the same information manipulation task, and selecting at least one of said multiple processing engines to perform said information manipulation task based on the relative future availability of resources of said multiple individual processing engines.
- 108. The method of claim 106, wherein said method further comprises managing delivery of information to said network based on the relative future availability of resources of said multiple individual processing engines by rejecting a request for information delivery to said network, transferring a request for information delivery to another information management system connected to said system in a cluster, re-queuing a request for information delivery for later reconsideration by said system, or a combination thereof.
- 109. The method of claim 100, wherein said information comprises continuous content.
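The selection step of claims 106–107 — choosing among multiple engines capable of the same task based on their relative future resource availability — can be sketched as follows. The dict keys and the 0.0–1.0 availability metric are assumptions for illustration, not claim terms.

```python
def select_engine(candidates):
    """Among engines capable of the same information manipulation task,
    pick the one with the greatest projected future availability
    (hypothetical metric in the range 0.0-1.0)."""
    return max(candidates, key=lambda e: e["future_availability"])


# Example: two application processing engines capable of the same task.
engines = [
    {"name": "app-1", "future_availability": 0.3},
    {"name": "app-2", "future_availability": 0.8},
]
```

In a fuller model, a candidate below some minimum threshold would trigger the claim-108 dispositions (reject, transfer, or re-queue) instead of selection.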
- 110. A network connectable content delivery system, comprising:
a plurality of processing engines, said processing engines having one or more resources; a network interface connection to at least one of the processing engines to couple the content delivery system to a network; a system monitor in communication with said plurality of processing engines; a distributed interconnect coupled to said processing engines to enable said processing engines and said system monitor to communicate as peers in a peer to peer environment; wherein said system monitor is configured to monitor status of resources of said processing engines, and to manage delivery of content to said network by said system in a deterministic manner based at least in part on said status of said resources.
- 111. The system of claim 110, wherein said system comprises a network endpoint content delivery system, and wherein one of said processing engines comprises a network processor.
- 112. The system of claim 110, wherein said system comprises an intermediate node system that is a network switch or network router, and wherein one of said processing engines comprises a network processor.
- 113. The system of claim 110, wherein said processing engines comprise a network interface engine, an application processing engine, and a storage processing engine.
- 114. The system of claim 113, wherein said distributed interconnect comprises a switch fabric.
- 115. The system of claim 113, wherein said plurality of processing engines further comprise a system management engine.
- 116. The system of claim 113, wherein said system monitor comprises a system management engine coupled to said processing engines by said distributed interconnect.
- 117. The system of claim 113, wherein said system further comprises one or more shared resources coupled to said processing engines by said distributed interconnect, said system monitor being adapted to monitor use of said shared resources.
- 118. The system of claim 110, wherein said system monitor is adapted to monitor current or future availability of resources of said processing engines.
- 119. The system of claim 113, wherein one or more of said processing engines comprises a subsystem module that includes a monitoring agent and resources; said monitoring agent being adapted to monitor availability of said resources and to communicate said availability to said system monitor.
- 120. The system of claim 110, wherein said system monitor is adapted to manage delivery of said content in a deterministic manner by rejecting a request for content delivery to said network, by transferring a request for content delivery to another content delivery system connected to said system in a cluster by a distributed interconnect, by re-queuing a request for content delivery for later reconsideration by the current system, or a combination thereof.
- 121. The system of claim 110, wherein said system monitor is adapted to manage delivery of said content in a deterministic manner by selecting one or more of said processing engines to perform one or more tasks as required to effect said delivery of content.
- 122. The system of claim 121, wherein at least one of said processing engines is assignable to perform multiple tasks associated with content delivery; and wherein said system monitor is adapted to manage delivery of said content in a deterministic manner by assigning one or more of said multiple tasks to said processing engine to effect said delivery of content.
- 123. The system of claim 110, wherein said system monitor is adapted to manage delivery of said content in a deterministic manner by selecting one or more processing engines of another content management system connected to said system in a cluster via a distributed interconnect to perform one or more tasks as required to effect said delivery of content.
- 124. The system of claim 121, wherein said system monitor is adapted to deterministically manage delivery of said content to said network in response to a request for content by identifying processing engine resources necessary to process said request, evaluating availability of said resources, reserving said resources, and assigning one or more tasks to one or more of said processing engines having said resources as required to effect said delivery of content.
- 125. The system of claim 110, wherein said system monitor is adapted to manage delivery of said content in a deterministic manner based at least in part on a parameter associated with a request for delivery of said content to said network.
- 126. The system of claim 125, wherein said parameter associated with a request comprises priority information associated with said request.
- 127. The system of claim 110, wherein said system monitor is adapted to select one or more of said processing engines, to select one or more unique data flow paths between said processing engines, or a combination thereof as required to effect said delivery of content in a deterministic manner.
- 128. The system of claim 127, wherein said system monitor is adapted to select one or more of said processing engines, one or more unique data flow paths between said processing engines, or a combination thereof in response to failure of one or more system components.
- 129. The system of claim 127, wherein said system monitor is adapted to select one or more of said processing engines, one or more unique data flow paths between said processing engines, or a combination thereof in response to a current or anticipated system data flow bottleneck.
- 130. The system of claim 110, wherein said system monitor is adapted to track usage of said resources on an individual client or individual request basis.
- 131. The system of claim 110, wherein said system monitor is adapted to anticipate future usage of said resources and to select individual processing engines, data flow paths between said processing engines, or a combination thereof based on said anticipated future usage to achieve accelerated system performance.
- 132. The system of claim 110, wherein said content comprises continuous content; and wherein said resources comprise available access to storage, available processor resources, available bandwidth to enable said content to be streamed from storage, or a combination thereof.
- 133. A method of delivering content to a network, comprising:
monitoring a status of resources associated with a plurality of processing engines in a content delivery system, said processing engines communicating as peers in a peer-to-peer environment via a distributed interconnect coupled to said processing engines; and managing delivery of content to said network by said system in a deterministic manner based at least in part on said status of said resources.
- 134. The method of claim 133, wherein said content delivery system comprises a network endpoint system, and wherein one of said processing engines comprises a network processor.
- 135. The method of claim 133, wherein said system comprises an intermediate node system, and wherein one of said processing engines comprises a network processor.
- 136. The method of claim 133, wherein each of said plurality of processing engines is assigned separate information manipulation tasks in an asymmetrical multi-processor configuration.
- 137. The method of claim 136, wherein said plurality of processing engines include a network interface engine, a storage processing engine and an application processing engine.
- 138. The method of claim 133, wherein said distributed interconnect comprises a switch fabric.
- 139. The method of claim 133, wherein said monitoring comprises monitoring current or future availability of resources of said processing engines.
- 140. The method of claim 137, wherein one or more of said processing engines comprises a subsystem module that includes a monitoring agent and resources; said monitoring agent being adapted to monitor availability of said resources and to communicate said availability for use in said monitoring.
- 141. The method of claim 133, wherein said managing comprises rejecting a request for content delivery to said network, transferring a request for content delivery to another content delivery system connected to said system in a cluster by a distributed interconnect, requeuing a request for content delivery for later reconsideration by the current system, or a combination thereof.
- 142. The method of claim 133, wherein said managing comprises selecting one or more of said processing engines to perform one or more tasks as required to effect said delivery of content.
- 143. The method of claim 142, wherein at least one of said processing engines is assignable to perform multiple tasks associated with content delivery; and wherein said managing comprises assigning one or more of said multiple tasks to said processing engine to effect said delivery of content.
- 144. The method of claim 133, wherein said managing comprises selecting one or more processing engines of another content management system connected to said system in a cluster via a distributed interconnect to perform one or more tasks as required to effect said delivery of content.
- 145. The method of claim 142, wherein said managing comprises identifying processing engine resources necessary to process said request, evaluating availability of said resources, reserving said resources, and assigning one or more tasks to one or more of said processing engines having said resources as required to effect said delivery of content.
- 146. The method of claim 133, wherein said managing further comprises managing delivery of said content based at least in part on a parameter associated with a request for delivery of said content to said network.
- 147. The method of claim 146, wherein said parameter associated with a request comprises priority information associated with said request.
- 148. The method of claim 133, wherein said managing comprises selecting one or more of said processing engines, selecting one or more unique data flow paths between said processing engines, or a combination thereof as required to effect said delivery of content in a deterministic manner.
- 149. The method of claim 148, wherein managing comprises selecting one or more of said processing engines, one or more unique data flow paths between said processing engines, or a combination thereof in response to failure of one or more system components.
- 150. The method of claim 148, wherein said managing comprises selecting one or more of said processing engines, one or more unique data flow paths between said processing engines, or a combination thereof in response to a current or anticipated system data flow bottleneck.
- 151. The method of claim 133, further comprising tracking usage of said resources on an individual client or individual request basis.
- 152. The method of claim 133, wherein said managing comprises anticipating future usage of said resources and selecting individual processing engines, data flow paths between said processing engines, or a combination thereof based on said anticipated future usage to achieve accelerated system performance.
- 153. The method of claim 133, wherein said content comprises continuous content; and wherein said resources comprise available access to storage, available processor resources, available bandwidth to enable said content to be streamed from storage, or a combination thereof.
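The admission-control flow recited in the method claims above (monitoring resource status, then assigning, requeuing, transferring, or rejecting a request, per claims 133, 141, and 145) can be sketched in code. This is a minimal illustrative sketch only, not the claimed implementation; all names (`Engine`, `Request`, `manage`, the abstract "resource units") are hypothetical.

```python
from dataclasses import dataclass
from enum import Enum

# Hypothetical sketch of claims 133, 141, and 145: monitor availability of
# processing-engine resources, then deterministically dispose of a request.

class Disposition(Enum):
    ASSIGN = "assign"      # resources reserved; tasks assigned (claim 145)
    REQUEUE = "requeue"    # requeued for later reconsideration (claim 141)
    TRANSFER = "transfer"  # transferred to a clustered peer system (claim 141)
    REJECT = "reject"      # rejected (claim 141)

@dataclass
class Engine:
    """A processing engine with an abstract pool of resource units."""
    name: str
    capacity: int
    reserved: int = 0

    def available(self) -> int:
        return self.capacity - self.reserved

@dataclass
class Request:
    content: str
    needs: dict  # engine name -> resource units required

def manage(request: Request, engines: dict,
           peer_available: bool = False,
           allow_requeue: bool = True) -> Disposition:
    """Identify required engine resources, evaluate their availability,
    reserve them, and assign the request; otherwise transfer, requeue,
    or reject it."""
    required = request.needs  # identify processing-engine resources
    # evaluate availability of said resources
    if all(engines[n].available() >= u for n, u in required.items()):
        for n, u in required.items():  # reserve said resources
            engines[n].reserved += u
        return Disposition.ASSIGN      # assign tasks to said engines
    if peer_available:                 # clustered peer via the interconnect
        return Disposition.TRANSFER
    if allow_requeue:
        return Disposition.REQUEUE
    return Disposition.REJECT
```

For example, a request needing 6 storage units and 2 application units is admitted against engines with capacities 10 and 4; a second 6-unit storage request then exceeds the 4 remaining units and is transferred, requeued, or rejected depending on policy.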
Parent Case Info
[0001] This application claims priority to Provisional Application Serial No. 60/246,401, filed Nov. 7, 2000, entitled “SYSTEMS AND METHODS FOR THE DETERMINISTIC MANAGEMENT OF INFORMATION,” and to Provisional Application Serial No. 60/187,211, filed Mar. 3, 2000, entitled “SYSTEM AND APPARATUS FOR INCREASING FILE SERVER BANDWIDTH,” the disclosures of which are incorporated herein by reference.
Provisional Applications (2)

| Number | Date | Country |
| --- | --- | --- |
| 60/246,401 | Nov. 2000 | US |
| 60/187,211 | Mar. 2000 | US |