Claims
- 1. A method, comprising:
providing a first set of prefetch data, wherein providing the first set of prefetch data is inhibited from interfering with demand requests on a network; and
providing a second set of prefetch data, wherein the second set of prefetch data is inhibited from interfering with demand requests at a server.
- 2. The method of claim 1, further comprising providing a third set of prefetch data, wherein the third set of prefetch data is inhibited from interfering with demand requests at a client.
- 3. The method of claim 2, wherein determining the third set of prefetch data comprises delaying initiation of prefetching until demand requests have been met.
- 4. The method of claim 2, wherein determining the third set of prefetch data comprises prioritizing demand requests ahead of prefetch requests in cache replacement.
- 5. The method of claim 1, further comprising sending the second set of prefetch data to one or more clients via the network.
- 6. The method of claim 5, wherein a rate at which the second set of prefetch data is sent to one or more clients depends on the load on the server at about the time the second set of prefetch data is sent.
- 7. The method of claim 1, wherein the second set of prefetch data is provided using server prioritization.
- 8. The method of claim 1, wherein the second set of prefetch data is provided using a response time monitor to determine an amount of data to provide in the second set of prefetch data.
- 9. The method of claim 8, wherein the response time monitor measures response time of one or more demand requests.
- 10. The method of claim 8, wherein the response time monitor initiates one or more measurement requests and measures the response time of the one or more measurement requests.
- 11. The method of claim 8, wherein the amount of data provided in the second set of prefetch data is reduced if one or more response times measured by the response time monitor exceed one or more threshold values.
- 12. The method of claim 8, wherein the amount of data provided in the second set of prefetch data is increased if one or more response times measured by the response time monitor are below one or more threshold values.
- 13. The method of claim 1, wherein the second set of prefetch data is provided using a response time monitor to determine a rate of providing the second set of prefetch data.
- 14. The method of claim 13, wherein the response time monitor measures the response time of one or more demand requests.
- 15. The method of claim 13, wherein the response time monitor initiates one or more measurement requests and measures the response time of the one or more measurement requests.
- 16. The method of claim 13, wherein the rate of providing prefetch data is reduced if one or more response times measured by the response time monitor exceed one or more threshold values.
- 17. The method of claim 13, wherein the rate of providing prefetch data is increased if one or more response times measured by the response time monitor are below one or more threshold values.
- 18. The method of claim 1, wherein the first set of prefetch data is provided using router prioritization.
- 19. The method of claim 1, wherein the first set of prefetch data is provided using a network congestion control protocol that inhibits interference with demand requests.
- 20. The method of claim 1, wherein the first set of prefetch data is provided using a network congestion control protocol, wherein the network congestion control protocol measures the time between sending at least one amount of data and receiving acknowledgement of receipt of at least one amount of data, and wherein the network congestion control protocol reduces network sending rate by at least a multiplicative factor if a threshold fraction of times between sending at least one amount of data and receiving acknowledgement of receipt of at least one amount of data exceed a threshold round trip time.
- 21. The method of claim 1, wherein providing the first set of prefetch data comprises assessing whether significant network congestion exists on the network and sizing the first set of prefetch data such that the first set of prefetch data does not significantly increase the network congestion.
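Claims 8 through 17 above describe adjusting the amount or rate of prefetch data based on measured response times. The following is a minimal illustrative sketch of such a response-time monitor; the class name, threshold, step sizes, and the halve-on-slow / grow-on-fast policy are assumptions made for illustration, not details required by the claims.

```python
class ResponseTimeMonitor:
    """Illustrative monitor: shrink or grow a prefetch budget based on
    measured demand-request response times (cf. claims 8-17)."""

    def __init__(self, threshold_s=0.5, step=1, min_budget=0, max_budget=64):
        self.threshold_s = threshold_s   # response-time threshold (assumed value)
        self.step = step                 # additive growth step (assumed value)
        self.min_budget = min_budget
        self.max_budget = max_budget
        self.budget = max_budget // 2    # current amount of prefetch data to provide

    def record(self, response_time_s: float) -> int:
        """Update the prefetch budget from one measured response time and
        return the new budget (amount of prefetch data to provide)."""
        if response_time_s > self.threshold_s:
            # Demand traffic looks slow: cut the prefetch budget back sharply.
            self.budget = max(self.min_budget, self.budget // 2)
        else:
            # Demand traffic looks healthy: grow the budget cautiously.
            self.budget = min(self.max_budget, self.budget + self.step)
        return self.budget
```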
- 22. A method, comprising:
receiving a request for one or more data packets;
sending one or more data packets;
determining a time that a first data packet was sent;
receiving an acknowledgement of receipt of at least the first data packet;
determining a time that the acknowledgement of receipt of the first data packet was received;
determining an estimate of network congestion based at least in part on the time the first data packet was sent and the time the acknowledgement of receipt of the first data packet was received;
determining if the estimate of network congestion indicates the existence of significant network congestion; and
if the estimate of network congestion indicates that significant network congestion exists, then reducing the size of a congestion window.
- 23. The method of claim 22, wherein significant congestion is determined to exist if the estimate of network congestion exceeds a determined fraction of the estimated bottleneck queue size.
- 24. The method of claim 22, wherein reducing the size of the congestion window comprises reducing the size of the congestion window by at least a multiplicative factor.
- 25. The method of claim 22, wherein reducing the size of the congestion window comprises reducing the size of the congestion window to one half of its previous size.
- 26. The method of claim 22, wherein reducing the size of the congestion window comprises decreasing the size of the congestion window to less than one data packet.
- 27. The method of claim 22, wherein the congestion window determines the amount of prefetch data desired to be in transit at any one time.
- 28. The method of claim 22, wherein sending one or more data packets comprises sending one or more pointers to one or more requested data packets.
- 29. The method of claim 22, wherein sending one or more data packets comprises sending one or more requested data packets.
- 30. The method of claim 22, wherein determining the estimate of network congestion comprises determining a number of round trip times received during an interval that exceed a determined threshold round trip time, wherein a round trip time comprises an elapsed time between the time that a data packet is sent and the time that an acknowledgement of receipt of the data packet is received.
- 31. The method of claim 22, wherein determining the estimate of network congestion comprises determining a number of round trip times received during an interval that exceed a threshold round trip time, wherein a round trip time comprises an elapsed time between the time that a data packet is sent and the time that an acknowledgement of receipt of the data packet is received; and wherein significant network congestion is determined to exist if the number of round trip times that exceed the threshold round trip time during the interval exceeds a threshold number.
- 32. The method of claim 22, wherein determining the estimate of network congestion comprises determining a number of round trip times received during an interval that exceed a threshold round trip time, wherein a round trip time comprises an elapsed time between the time that a data packet is sent and the time that an acknowledgement of receipt of the data packet is received; and wherein significant network congestion is determined to exist if the number of round trip times that exceed the threshold round trip time during the interval exceeds a threshold fraction of the number of round trip times measured.
- 33. The method of claim 22, wherein determining the estimate of network congestion comprises determining a number of round trip times received during an interval that exceed a threshold round trip time, wherein a round trip time comprises an elapsed time between the time that a data packet is sent and the time that an acknowledgement of receipt of the data packet is received; wherein significant network congestion is determined to exist if at least a first fraction of measured round trip times during an interval exceed the threshold; and wherein the threshold is the estimated uncongested round trip time plus a second fraction of the difference between an estimated congested round trip time and the estimated uncongested round trip time.
- 34. The method of claim 33, wherein the estimated uncongested round trip time comprises a minimum determined round trip time for a data packet.
- 35. The method of claim 33, wherein the estimated uncongested round trip time comprises a minimum determined round trip time for a data packet over a defined period of time.
- 36. The method of claim 33, wherein the estimated uncongested round trip time comprises a decaying running average of minimum determined round trip times.
- 37. The method of claim 33, wherein the estimated uncongested round trip time comprises a minimum determined round trip time within a determined percentile of round trip times for data packets sent.
- 38. The method of claim 33, wherein the estimated congested round trip time comprises a maximum determined round trip time for a data packet.
- 39. The method of claim 33, wherein the estimated congested round trip time comprises a maximum determined round trip time for a data packet over a defined period of time.
- 40. The method of claim 33, wherein the estimated congested round trip time comprises a decaying running average of maximum determined round trip times.
- 41. The method of claim 33, wherein the estimate of the congested round trip time comprises a maximum determined round trip time within a determined percentile of round trip times for data packets sent.
- 42. The method of claim 22, further comprising increasing the size of a congestion window if the estimate of network congestion is not greater than a determined fraction of the congestion window.
- 43. The method of claim 22, further comprising linearly increasing the size of a congestion window if the estimate of network congestion is not greater than a determined fraction of the congestion window.
- 44. The method of claim 22, further comprising increasing the size of a congestion window by a determined number of data packets per determined number of round trip times if the estimate of network congestion is not greater than a determined fraction of the congestion window.
- 45. The method of claim 22, further comprising increasing the size of a congestion window by a determined number of data packets and by a determined multiplicative factor per determined number of round trip times if the estimate of network congestion is not greater than a determined fraction of the congestion window.
- 46. The method of claim 22, further comprising determining a rate at which to send one or more data packets.
- 47. The method of claim 22, further comprising sending a second data packet, wherein a time period between sending of the first data packet and sending of the second data packet is determined based on the size of the congestion window.
- 48. The method of claim 22, wherein sending one or more data packets comprises sending at least a second data packet, wherein the method further comprises determining if an acknowledgement of receipt of the second data packet is received within a determined time period; and resending the second data packet if the acknowledgment of receipt of the second data packet is not received within the determined time period.
- 49. The method of claim 48, further comprising reducing the size of the congestion window if the second data packet is resent.
- 50. The method of claim 22, wherein sending one or more data packets comprises sending at least a second data packet, wherein the method further comprises determining if a determined time period has expired since sending the second data packet without receiving an acknowledgement of receipt of the second data packet; and resending the second data packet if the determined time period has expired since sending the second data packet without receiving an acknowledgement of receipt of the second data packet.
- 51. The method of claim 50, further comprising reducing the size of the congestion window if the second data packet is resent.
- 52. The method of claim 22, wherein determining the estimate of network congestion comprises providing an estimate of an uncongested throughput; providing an estimate of a congested throughput; determining a round trip time of the first data packet; and determining the estimate of network congestion based on the estimated uncongested throughput, the estimated congested throughput, and the round trip time of the first data packet.
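Claims 30 through 36 and 42 through 45 describe estimating congestion from round trip times measured during an interval, comparing them against a threshold of the form (uncongested RTT) plus a fraction of the difference between the congested and uncongested RTTs, and reducing the congestion window by at least a multiplicative factor when enough samples exceed the threshold. The sketch below shows one way such an update could look, assuming illustrative parameter values and a simple halve/linear-increase policy; it is not the claimed protocol itself.

```python
def update_congestion_window(cwnd, rtt_samples, min_rtt, max_rtt,
                             rtt_fraction=0.2, count_fraction=0.5,
                             increase=1.0, min_cwnd=0.125):
    """Per-interval window update in the spirit of claims 30-36 and 42-45.

    cwnd        -- current congestion window, in packets (may be fractional)
    rtt_samples -- round trip times measured during the interval (seconds)
    min_rtt     -- estimated uncongested round trip time
    max_rtt     -- estimated congested round trip time
    All parameter values are illustrative assumptions, not values from the claims.
    """
    if not rtt_samples:
        return cwnd

    # Threshold RTT: uncongested RTT plus a fraction of the congested/uncongested gap.
    threshold = min_rtt + rtt_fraction * (max_rtt - min_rtt)

    # Count how many samples in the interval exceeded the threshold.
    late = sum(1 for rtt in rtt_samples if rtt > threshold)

    if late > count_fraction * len(rtt_samples):
        # Significant congestion: multiplicative decrease, allowed to fall below one packet.
        return max(min_cwnd, cwnd / 2.0)
    # Otherwise grow additively (linear increase per interval).
    return cwnd + increase
```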
- 53. A carrier medium comprising program instructions, wherein the program instructions are executable to implement a method of:
receiving a request for one or more data packets;
sending one or more data packets;
determining a time that a first data packet was sent;
receiving an acknowledgement of receipt of at least the first data packet;
determining a time that the acknowledgement of receipt of the first data packet was received;
determining an estimate of network congestion based at least in part on the time the first data packet was sent and the time the acknowledgement of receipt of the first data packet was received;
determining if the estimate of network congestion indicates the existence of significant network congestion; and
reducing the size of a congestion window if the estimate of network congestion indicates that significant network congestion exists.
- 54. A method, comprising:
determining end-to-end network performance;
determining an estimate of network congestion based at least in part on the end-to-end network performance; and
reducing the size of a congestion window if significant network congestion is determined to exist.
- 55. The method of claim 54, wherein reducing the size of the congestion window comprises reducing the size of the congestion window by at least a multiplicative factor.
- 56. The method of claim 54, wherein reducing the size of the congestion window comprises reducing the size of the congestion window to one half of its previous size.
- 57. The method of claim 54, wherein reducing the size of the congestion window comprises decreasing the size of the congestion window to less than one data packet.
- 58. The method of claim 54, wherein determining the estimate of network congestion comprises determining a round trip time of a first data packet and determining the estimate of network congestion based on the round trip time and the size of the congestion window.
- 59. The method of claim 54, wherein determining end-to-end network performance comprises determining a round trip time for one or more data packets sent via the congestion window.
- 60. The method of claim 54, wherein determining end-to-end network performance comprises determining a round trip time for one or more data packets sent via the congestion window, and wherein determining the estimate of network congestion comprises determining a number of round trip times during an interval that exceed a determined threshold round trip time.
- 61. The method of claim 54, wherein determining end-to-end network performance comprises determining a round trip time for one or more data packets sent via the congestion window, wherein determining the estimate of network congestion comprises determining a number of round trip times during an interval that exceed a determined threshold round trip time, and wherein significant network congestion is determined to exist if the number of round trip times that exceed the threshold round trip time during the interval exceeds a threshold number.
- 62. The method of claim 54, wherein determining end-to-end network performance comprises determining a round trip time for one or more data packets sent via the congestion window, wherein determining the estimate of network congestion comprises determining a number of round trip times during an interval that exceed a determined threshold round trip time, and wherein significant network congestion is determined to exist if the number of round trip times that exceed the threshold round trip time during the interval exceeds a fraction of the difference between an estimated congested round trip time and an estimated uncongested round trip time.
- 63. The method of claim 62, wherein the estimated uncongested round trip time comprises a minimum determined round trip time for a data packet.
- 64. The method of claim 62, wherein the estimated uncongested round trip time comprises a minimum determined round trip time for a data packet over a defined period of time.
- 65. The method of claim 62, wherein the estimated uncongested round trip time comprises a decaying running average of minimum determined round trip times.
- 66. The method of claim 62, wherein the estimated uncongested round trip time comprises a minimum determined round trip time within a determined percentile of round trip times for data packets sent.
- 67. The method of claim 62, wherein the estimated congested round trip time comprises a maximum determined round trip time for a data packet.
- 68. The method of claim 62, wherein the estimated congested round trip time comprises a maximum determined round trip time for a data packet over a defined period of time.
- 69. The method of claim 62, wherein the estimated congested round trip time comprises a decaying running average of maximum determined round trip times.
- 70. The method of claim 62, wherein the estimated congested round trip time comprises a maximum determined round trip time within a determined percentile of round trip times for data packets sent.
- 71. The method of claim 54, wherein determining end-to-end network performance comprises determining a round trip time for one or more data packets sent via the congestion window, wherein determining the estimate of network congestion comprises providing an estimate of an uncongested round trip time for a data packet; providing an estimate of a congested round trip time of a data packet; determining a round trip time of the first data packet; and determining the estimate of network congestion based on the estimated uncongested round trip time, the estimated congested round trip time and the round trip time of the first data packet.
- 72. The method of claim 71, wherein providing the estimate of the uncongested round trip time for a data packet comprises determining a minimum round trip time for a data packet.
- 73. The method of claim 71, wherein providing the estimate of the uncongested round trip time for a data packet comprises determining a minimum round trip time for a data packet over a defined period of time.
- 74. The method of claim 71, wherein providing the estimate of the uncongested round trip time for a data packet comprises determining a decaying running average of minimum round trip times for data packets.
- 75. The method of claim 71, wherein providing the estimate of the uncongested round trip time for a data packet comprises determining a minimum round trip time for a data packet within a determined percentile of round trip times for data packets sent.
- 76. The method of claim 71, wherein providing the estimate of the congested round trip time for a data packet comprises determining a maximum round trip time for a data packet.
- 77. The method of claim 71, wherein providing the estimate of the congested round trip time for a data packet comprises determining a maximum round trip time for a data packet over a defined period of time.
- 78. The method of claim 71, wherein providing the estimate of the congested round trip time for a data packet comprises determining a decaying running average of maximum round trip times for data packets.
- 79. The method of claim 71, wherein providing the estimate of the congested round trip time for a data packet comprises determining a maximum round trip time for a data packet within a determined percentile of round trip times for data packets sent.
- 80. The method of claim 54, further comprising increasing the size of the congestion window if the estimate of network congestion is not greater than a determined fraction of the congestion window.
- 81. The method of claim 54, further comprising linearly increasing the size of the congestion window if the estimate of network congestion is not greater than a determined fraction of the congestion window.
- 82. The method of claim 54, further comprising increasing the size of the congestion window by one data packet if the estimate of network congestion is not greater than a determined fraction of the congestion window.
- 83. The method of claim 54, further comprising determining a rate at which to send one or more data packets.
- 84. The method of claim 54, wherein determining end-to-end network performance comprises determining an estimate of throughput at a particular time.
- 85. The method of claim 54, wherein determining end-to-end network performance comprises determining an estimate of throughput at a particular time, and wherein determining an estimate of network congestion comprises providing an estimate of an uncongested throughput; providing an estimate of a congested throughput; and determining the estimate of network congestion based on the throughput at the particular time, the estimate of uncongested throughput and the estimate of congested throughput.
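Claims 84 and 85 describe estimating congestion from an observed throughput together with estimates of uncongested and congested throughput. A hedged sketch of one such mapping follows; the simple linear interpolation is an assumption, since the claims only require that the estimate be based on the three quantities.

```python
def congestion_from_throughput(current_tput, uncongested_tput, congested_tput):
    """Map an observed throughput onto a 0..1 congestion estimate, in the
    spirit of claims 84-85: 0 ~ uncongested throughput, 1 ~ congested throughput.
    Linear interpolation between the two reference throughputs is an assumption."""
    span = uncongested_tput - congested_tput
    if span <= 0:
        return 0.0
    estimate = (uncongested_tput - current_tput) / span
    # Clamp to the 0..1 range so outliers do not produce nonsensical estimates.
    return min(1.0, max(0.0, estimate))
```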
- 86. A method of sending a plurality of data packets via a network, the method comprising:
sending a first plurality of data packets over a network using a first protocol, wherein the first plurality of data packets comprises one or more demand data packets, wherein a demand data packet comprises a data packet requested by a user;
sending a second plurality of data packets over the network using a second protocol, wherein the second plurality of data packets comprises one or more prefetch data packets, wherein a prefetch data packet comprises a data packet not explicitly requested by the user; and
wherein the second protocol is configured so that the sending of the second plurality of data packets is inhibited from interfering with the sending of the first plurality of data packets.
- 87. The method of claim 86, wherein the second protocol is configured to reduce the size of a congestion window associated with the second plurality of data packets to inhibit sending of the second plurality of data packets from interfering with the sending of the first plurality of data packets.
- 88. The method of claim 87, wherein reducing the size of the congestion window comprises reducing the size of the congestion window to one half of its previous size.
- 89. The method of claim 87, wherein reducing the size of the congestion window comprises decreasing the size of the congestion window to less than one data packet.
- 90. The method of claim 87, wherein the congestion window determines the amount of prefetch data desired to be in transit at any one time.
- 91. The method of claim 86, further comprising sending one or more pointers to one or more data packets.
- 92. The method of claim 86, wherein sending the second plurality of data packets is inhibited from interfering with the sending the first plurality of data packets by determining an estimate of network congestion and adjusting the size of a congestion window associated with the second plurality of data packets to inhibit interference with sending of the first plurality of data packets.
- 93. The method of claim 92, wherein determining the estimate of network congestion comprises determining a round trip time of a particular data packet and determining the estimate of network congestion based on the round trip time and the size of a congestion window.
- 94. The method of claim 92, wherein determining the estimate of network congestion comprises providing an estimate of an uncongested round trip time for a data packet; providing an estimate of a congested round trip time of a data packet; determining a round trip time of a particular data packet; and determining the estimate of network congestion based on the estimated uncongested round trip time, the estimated congested round trip time and the round trip time of the particular data packet.
- 95. The method of claim 94, wherein providing the estimate of the uncongested round trip time for a data packet comprises determining a minimum round trip time for a data packet.
- 96. The method of claim 94, wherein providing the estimate of the uncongested round trip time for a data packet comprises determining a minimum round trip time for a data packet over a defined period of time.
- 97. The method of claim 94, wherein providing the estimate of the uncongested round trip time for a data packet comprises determining a decaying running average of minimum round trip times for data packets.
- 98. The method of claim 94, wherein providing the estimate of the uncongested round trip time for a data packet comprises determining a minimum round trip time for a data packet within a determined percentile of round trip times for data packets sent.
- 99. The method of claim 94, wherein providing the estimate of the congested round trip time for a data packet comprises determining a maximum round trip time for a data packet.
- 100. The method of claim 94, wherein providing the estimate of the congested round trip time for a data packet comprises determining a maximum round trip time for a data packet over a defined period of time.
- 101. The method of claim 94, wherein providing the estimate of the congested round trip time for a data packet comprises determining a decaying running average of maximum round trip times for data packets.
- 102. The method of claim 94, wherein providing the estimate of the congested round trip time for a data packet comprises determining a maximum round trip time for a data packet within a determined percentile of round trip times for data packets sent.
- 103. The method of claim 86, further comprising increasing the size of a congestion window associated with the second plurality of data packets if potential interference with sending of the first plurality of data packets is not detected.
- 104. The method of claim 86, wherein the second plurality of data packets is inhibited from interfering with the sending of the first plurality of data packets by adjusting the size of a congestion window.
- 105. The method of claim 86, wherein the second plurality of data packets is inhibited from interfering with the sending of the first plurality of data packets by adjusting the size of a congestion window; and wherein the size of the congestion window is increased if an estimate of network congestion is not greater than a determined fraction of the congestion window.
- 106. The method of claim 86, further comprising determining a rate at which to send one or more prefetch packets.
- 107. The method of claim 86, wherein the length of a time period between sending of a first prefetch data packet and sending of a second prefetch data packet is determined based on the size of a congestion window.
- 108. The method of claim 86, wherein sending the second plurality of data packets comprises sending at least a first prefetch data packet; determining if an acknowledgement of receipt of the first prefetch data packet is received within a determined time period; and resending the first prefetch data packet if the acknowledgment of receipt of the first prefetch data packet is not received within the determined time period.
- 109. The method of claim 108, further comprising reducing the size of a congestion window associated with the second plurality of data packets if the first prefetch data packet is resent.
- 110. The method of claim 86, wherein sending the second plurality of data packets comprises sending at least a first prefetch data packet; determining if a determined time period has expired without receiving an acknowledgement of receipt of the first prefetch data packet; and resending the first prefetch data packet if the determined time period has expired without receiving an acknowledgement of receipt of the first prefetch data packet.
- 111. The method of claim 110, further comprising reducing the size of a congestion window associated with the second plurality of data packets if the first prefetch data packet is resent.
- 112. The method of claim 86, wherein inhibiting the second plurality of data packets from interfering with the sending of the first plurality of data packets comprises providing an estimate of an uncongested throughput; providing an estimate of a congested throughput; determining a round trip time of a particular data packet; and determining an estimate of network congestion based on the estimated uncongested throughput, the estimated congested throughput and the round trip time of the particular data packet.
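Claims 107 through 111 describe pacing prefetch packets according to a congestion window and round trip time while demand packets travel over the normal protocol. The sketch below shows one plausible pacing loop; the `send_fn` callback, the sleep-based pacing, and the minimum-window floor are illustrative assumptions, not elements of the claims.

```python
import time

def send_prefetch_paced(packets, send_fn, cwnd, rtt_s):
    """Pace prefetch packets so that roughly `cwnd` packets are in flight per
    round trip time (cf. claim 107).  `send_fn(pkt)` is an assumed callback
    that transmits one prefetch packet; demand packets would be sent
    separately over the normal transport and are not throttled here."""
    # With a (possibly fractional) window of cwnd packets per RTT, space sends
    # rtt_s / cwnd seconds apart; a window below one packet simply spaces
    # sends further apart than one round trip time.
    interval = rtt_s / max(cwnd, 1e-6)
    for pkt in packets:
        send_fn(pkt)
        time.sleep(interval)
```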
- 113. A system, comprising:
at least one server coupled to a network, wherein at least one server is configured to send demand data packets via the network using a first protocol during use;
wherein at least one server is configured to send prefetch data packets via the network using a second protocol during use; and
wherein the second protocol is configured to inhibit prefetch data packets from interfering with the sending of demand data packets.
- 114. The system of claim 113, further comprising a hint server coupled to the network, wherein the hint server is configured to send hint lists via the network during use.
- 115. The system of claim 113, further comprising a hint server coupled to the network, wherein the hint server is configured to estimate congestion on at least one server and to send hint lists via the network during use, wherein hint lists sent by the hint server are sized to inhibit prefetching of hint list objects from causing congestion on at least one server.
- 116. The system of claim 113, further comprising a hint server coupled to the network, wherein the hint server is configured to estimate congestion on at least one server by measuring server response time and to send hint lists via the network during use, wherein hint lists sent by the hint server are sized to inhibit prefetching of hint list objects from causing congestion on at least one server.
- 117. The system of claim 113, further comprising a hint server coupled to the network, wherein the hint server is configured to estimate congestion on at least one server by measuring server response time and detecting congestion when measured server response time exceeds a value and to send hint lists via the network during use, wherein hint lists sent by the hint server are sized to inhibit prefetching of hint list objects from causing congestion on at least one server.
- 118. The system of claim 113, further comprising a hint server coupled to the network, wherein the hint server is configured to estimate congestion on at least one server by measuring server response time and to send hint lists via the network during use, wherein hint lists sent by the hint server are sized to inhibit prefetching of hint list objects from causing congestion on at least one server by increasing total hint list size across all clients during an interval when no server congestion is detected and reducing total hint list size across all clients during an interval when server congestion is detected.
- 119. The system of claim 113, further comprising a hint server coupled to the network, wherein the hint server is configured to estimate congestion on at least one server and to send hint lists via the network during use, wherein hint lists sent by the hint server are sized to utilize a significant portion of available server capacity for prefetching of hint list objects.
- 120. The system of claim 113, further comprising a hint server coupled to the network, wherein the hint server is configured to determine an estimate of probability of one or more data objects on at least one server being requested as a demand request during use.
- 121. The system of claim 113, wherein at least one server is configured to prioritize the service of demand requests over the service of prefetch requests.
- 122. The system of claim 113, further comprising a front-end application between the network and at least one server, wherein the front-end application is configured to determine whether a received request is a prefetch request or a demand request during use.
- 123. The system of claim 113, further comprising a front-end application between the network and at least one server, wherein the front-end application is configured to determine whether a received request is a prefetch request or a demand request; and to route the request to a demand server during use if the received request is a demand request.
- 124. The system of claim 113, further comprising a front-end application between the network and at least one server, wherein the front-end application is configured to determine whether a received request is a prefetch request or a demand request; and to provide a redirection data object in response to the request during use if the received request is a prefetch request.
- 125. The system of claim 113, wherein at least one server comprises a demand server.
- 126. The system of claim 113, wherein at least one server comprises a prefetch server.
- 127. The system of claim 113, wherein at least one server comprises a demand server and a prefetch server.
- 128. The system of claim 113, wherein at least one server comprises a demand server, wherein a demand server comprises one or more data objects associated via one or more relative references; and wherein at least one server comprises a prefetch server; wherein a prefetch server comprises one or more duplicate data objects associated via one or more absolute references, wherein a duplicate data object comprises a data object that is substantially a duplicate of a data object of a demand server.
- 129. The system of claim 113, wherein at least one server is configured to respond to requests using the first protocol or the second protocol depending on whether a received request is a prefetch request or a demand request.
- 130. The system of claim 113, wherein at least one server is configured to respond to requests using the first protocol or the second protocol on a connection by connection basis.
- 131. The system of claim 113, further comprising a monitor coupled to the network, wherein the monitor is configured to determine an estimate of server congestion during use.
- 132. The system of claim 113, wherein the second protocol comprises a TCP-NICE protocol.
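Claims 122 through 124 and 129 describe a front-end that distinguishes prefetch requests from demand requests and routes or answers them differently. A minimal routing sketch follows; the `is_prefetch` marker, the dictionary request format, and the `handle` methods are assumed conventions, not elements of the claims.

```python
def dispatch(request, demand_server, prefetch_server=None):
    """Front-end routing sketch for claims 122-124 and 129: demand requests go
    to the demand server over the normal protocol; prefetch requests are either
    served by a prefetch server or answered with a redirection object."""
    if request.get("is_prefetch"):
        if prefetch_server is not None:
            # Serve the request over the low-priority (prefetch) path.
            return prefetch_server.handle(request)
        # No prefetch capacity available: hand back a redirection object
        # instead of the requested data (cf. claim 124).
        return {"redirect": request["url"]}
    # Demand requests always go to the demand server (cf. claim 123).
    return demand_server.handle(request)
```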
- 133. A method, comprising:
providing a transmission path for transmission of data packets between two or more computer systems, wherein the transmission path comprises at least one router buffer;
determining an estimate of congestion along the transmission path at a time when at least one router buffer is not full; and
reducing a size of a congestion window by at least a multiplicative factor if significant congestion is determined to exist according to the estimate of congestion.
- 134. The method of claim 133, wherein determining the estimate of congestion comprises determining an estimated uncongested round trip time for transmission of a data packet; determining an estimated congested round trip time for transmission of a data packet; determining an actual round trip time for transmission of a particular data packet; and comparing the estimated uncongested round trip time, the estimated congested round trip time, and the actual round trip time to determine an estimate of congestion along the transmission path.
- 135. The method of claim 134, wherein determining the estimated uncongested round trip time comprises determining a minimum round trip time experienced by a data packet transmitted along the transmission path.
- 136. The method of claim 134, wherein determining the estimated uncongested round trip time comprises determining a minimum round trip time experienced by a data packet transmitted along the transmission path within a predetermined time period.
- 137. The method of claim 134, wherein determining the estimated uncongested round trip time comprises determining a decaying average minimum round trip time experienced by a data packet transmitted along the transmission path.
- 138. The method of claim 134, wherein determining the estimated uncongested round trip time comprises determining a significant minimum round trip time, wherein the significant minimum round trip time comprises a minimum round trip time experienced by a data packet transmitted along the transmission path excluding one or more round trip times.
- 139. The method of claim 138, wherein one or more excluded round trip times comprise statistically insignificant round trip times.
- 140. The method of claim 138, wherein one or more excluded round trip times comprise round trip times beyond a selected percentile of round trip times.
- 141. The method of claim 134, wherein determining the estimated congested round trip time comprises determining a maximum round trip time experienced by a data packet transmitted along the transmission path.
- 142. The method of claim 134, wherein determining the estimated congested round trip time comprises determining a maximum round trip time experienced by a data packet transmitted along the transmission path within a predetermined time period.
- 143. The method of claim 134, wherein determining the estimated congested round trip time comprises determining a decaying average maximum round trip time experienced by a data packet transmitted along the transmission path.
- 144. The method of claim 134, wherein determining the estimated congested round trip time comprises determining a significant maximum round trip time, wherein the significant maximum round trip time comprises a maximum round trip time experienced by a data packet transmitted along the transmission path excluding one or more round trip times.
- 145. The method of claim 144, wherein one or more excluded round trip times comprise statistically insignificant round trip times.
- 146. The method of claim 144, wherein one or more excluded round trip times comprise round trip times beyond a selected percentile of round trip times.
- 147. The method of claim 134, wherein determining the actual round trip time comprises determining a time that the particular data packet was sent and determining a time that an acknowledgement of receipt of the particular data packet was received.
- 148. The method of claim 133, wherein determining the estimate of congestion along the transmission path at a time when at least one router buffer is not full comprises determining an estimate of a queue size of at least one router buffer.
- 149. The method of claim 133, wherein determining the estimate of congestion along the transmission path at a time when at least one router buffer is not full comprises determining an estimate of a queue size of at least one router buffer and determining if the queue size exceeds a specified fraction of a capacity of at least one router buffer.
- 150. The method of claim 133, wherein determining the estimate of congestion along the transmission path at a time when at least one router buffer is not full comprises determining an estimate of a capacity of at least one router buffer based at least in part on an uncongested round trip time and a congested round trip time; determining an estimate of a queue size of at least one router buffer based on a particular round trip time; and determining if the queue size exceeds a specified fraction of the capacity of at least one router buffer.
- 151. The method of claim 133, wherein the congestion window comprises a congestion window associated with one or more prefetch data packets.
- 152. The method of claim 133, wherein reducing the size of the congestion window by at least a multiplicative factor comprises halving the size of the congestion window.
- 153. The method of claim 133, wherein reducing the size of the congestion window by at least a multiplicative factor comprises reducing the size of the congestion window to less than one data packet.
- 154. The method of claim 133, wherein at least one router buffer comprises a transmission rate limiting router buffer.
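Claims 148 through 150 describe estimating the bottleneck router buffer's occupancy and capacity from round trip times. One rough way to express that test, treating round trip time differences as a time-based proxy for queue size and buffer capacity (an assumption consistent with, but not dictated by, the claims), is sketched below.

```python
def queue_exceeds_fraction(measured_rtt, uncongested_rtt, congested_rtt,
                           fraction=0.5):
    """Rough queue-occupancy test in the spirit of claims 148-150.

    The gap between the congested and uncongested round trip times stands in
    for the bottleneck buffer capacity (in units of time), and the excess of
    the measured round trip time over the uncongested one stands in for the
    current queue.  The 0.5 fraction is an assumed default."""
    capacity = congested_rtt - uncongested_rtt
    if capacity <= 0:
        return False
    queue = measured_rtt - uncongested_rtt
    return queue > fraction * capacity
```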
- 155. A method, comprising:
determining an estimate of congestion along a transmission path of one or more data packets; and reducing the size of a congestion window to a non-integer value if significant congestion exists based on the estimate of congestion.
- 156. The method of claim 155, wherein the size of the congestion window corresponds to an amount of bandwidth available for transmission of one or more data packets.
- 157. The method of claim 155, wherein the size of the congestion window corresponds to a rate of transmission of one or more data packets.
- 158. The method of claim 155, wherein the non-integer value is less than 1.
- 159. A system comprising:
a CPU in communication with a network; a memory coupled to the CPU, wherein the memory comprises program instructions executable by the CPU to:
determine an estimate of congestion along a transmission path of one or more data packets through the network; and reduce the size of a congestion window to a non-integer value if significant congestion exists based on the estimate of congestion, wherein the size of the congestion window corresponds to the amount of bandwidth available for transmission of one or more data packets.
- 160. The system of claim 159, wherein the size of the congestion window corresponds to an amount of bandwidth available for transmission of one or more data packets.
- 161. The system of claim 159, wherein the size of the congestion window corresponds to a rate of transmission of one or more data packets.
- 162. The system of claim 159, wherein the non-integer value is less than 1.
- 163. A carrier medium comprising program instructions, wherein the program instructions are computer-executable to implement a method comprising:
determining an estimate of congestion along a transmission path of one or more data packets; and reducing the size of a congestion window to a non-integer value if significant congestion exists based on the estimate of congestion, wherein the size of the congestion window corresponds to the amount of bandwidth available for transmission of one or more data packets.
- 164. The carrier medium of claim 163, wherein the size of the congestion window corresponds to an amount of bandwidth available for transmission of one or more data packets.
- 165. The carrier medium of claim 163, wherein the size of the congestion window corresponds to a rate of transmission of one or more data packets.
- 166. The carrier medium of claim 163, wherein the non-integer value is less than 1.
- 167. A method, comprising:
sending a request for one or more data packets;
receiving one or more requested data packets and one or more data packet prefetch hints, wherein a data packet prefetch hint comprises a suggestion to prefetch one or more data packets;
determining if one or more data packet prefetch hints refer to one or more data packets available in a local memory; and
determining one or more data packets to prefetch.
- 168. The method of claim 167, further comprising receiving input requesting one or more data packets before sending the request for one or more data packets.
- 169. The method of claim 167, wherein determining one or more data packets to prefetch comprises determining one or more data packets that do not exist in the local memory that are referred to by one or more data packet prefetch hints.
- 170. The method of claim 167, further comprising sending a request for one or more prefetch data packets.
- 171. The method of claim 167, further comprising sending a request for one or more prefetch data packets; and receiving one or more prefetch data packets.
- 172. The method of claim 167, further comprising sending a request for one or more prefetch data packets; receiving one or more prefetch data packets; and storing one or more prefetch data packets in the local memory.
- 173. The method of claim 167, further comprising sending a request for one or more prefetch data packets; receiving one or more pointers to one or more requested data packets; and requesting one or more data packets referred to by one or more pointers.
- 174. The method of claim 167, further comprising, receiving input requesting one or more data packets; and determining if one or more requested data packets exist in the local memory, before sending the request for one or more data packets.
- 175. The method of claim 167, further comprising, receiving one or more data packets; and sending an acknowledgement of receipt of one or more data packets.
- 176. The method of claim 167, further comprising, receiving one or more data packets; and displaying one or more received data packets.
- 177. The method of claim 167, further comprising, receiving one or more data packets; and storing one or more received data packets in the local memory.
- 178. The method of claim 167, further comprising, receiving one or more prefetch data packets; receiving a request to access one or more data packets while receiving one or more prefetch data packets; and ceasing to receive one or more prefetch data packets in response to the request to access one or more data packets.
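Claims 167 through 172 and 178 describe a client that filters prefetch hints against its local memory, fetches the missing objects, and yields to demand activity. The following sketch illustrates that flow; the dictionary cache, `fetch_fn`, and `demand_pending` callbacks are assumed interfaces, not elements of the claims.

```python
def plan_prefetches(hints, local_cache):
    """Keep only the hinted objects not already in local memory, in hint
    order (cf. claims 167 and 169).  `local_cache` is assumed to be a dict
    mapping object identifiers to cached data."""
    return [obj_id for obj_id in hints if obj_id not in local_cache]

def prefetch_all(hints, local_cache, fetch_fn, demand_pending):
    """Fetch the planned objects one at a time, stopping as soon as a demand
    request arrives (cf. claim 178).  `fetch_fn(obj_id)` retrieves one object
    and `demand_pending()` reports whether the user has asked for something;
    both are assumed callbacks."""
    for obj_id in plan_prefetches(hints, local_cache):
        if demand_pending():
            break                                    # yield to the demand request
        local_cache[obj_id] = fetch_fn(obj_id)       # store the prefetched object locally
```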
- 179. A system comprising:
a CPU in communication with a network; a memory coupled to the CPU, wherein the memory comprises program instructions executable to:
send a request for one or more data packets via the network;
receive one or more requested data packets and one or more data packet prefetch hints, wherein a data packet prefetch hint comprises a suggestion to prefetch one or more data packets;
determine if one or more data packet prefetch hints refer to one or more data packets available in a local memory; and
determine one or more data packets to prefetch.
- 180. The system of claim 179, wherein the program instructions are further executable to receive input requesting one or more data packets before sending the request for one or more data packets.
- 181. The system of claim 179, wherein determining one or more data packets to prefetch comprises determining one or more data packets that do not exist in the local memory that are referred to by one or more data packet prefetch hints.
- 182. The system of claim 179, wherein the program instructions are further executable to send a request for one or more prefetch data packets.
- 183. The system of claim 179, wherein the program instructions are further executable to send a request for one or more prefetch data packets, and to receive one or more prefetch data packets.
- 184. The system of claim 179, wherein the program instructions are further executable to send a request for one or more prefetch data packets, receive one or more prefetch data packets, and store one or more prefetch data packets in the local memory.
- 185. The system of claim 179, wherein the program instructions are further executable to send a request for one or more prefetch data packets, to receive one or more pointers to one or more requested data packets, and to request one or more data packets referred to by one or more pointers.
- 186. The system of claim 179, wherein the program instructions are further executable to receive input requesting one or more data packets, and to determine if one or more requested data packets exist in a local memory before sending the request for one or more data packets.
- 187. The system of claim 179, wherein the program instructions are further executable to receive one or more data packets, and to send an acknowledgement of receipt of one or more data packets.
- 188. The system of claim 179, wherein the program instructions are further executable to receive one or more data packets, and to display one or more received data packets.
- 189. The system of claim 179, wherein the program instructions are further executable to receive one or more data packets, and to store one or more received data packets in the local memory.
- 190. The system of claim 179, wherein the program instructions are further executable to receive one or more prefetch data packets, to receive a request to access one or more data packets while receiving one or more prefetch data packets, and to cease to receive one or more prefetch data packets in response to the request to access one or more data packets.
- 191. A carrier medium comprising program instructions, wherein the program instructions are computer-executable to implement a method comprising:
sending a request for one or more data packets;
receiving one or more requested data packets and one or more data packet prefetch hints, wherein a data packet prefetch hint comprises a suggestion to prefetch one or more data packets;
determining if one or more data packet prefetch hints refer to one or more data packets available in a local memory; and
determining one or more data packets to prefetch.
- 192. The carrier medium of claim 191, wherein the method further comprises receiving input requesting one or more data packets before sending the request for one or more data packets.
- 193. The carrier medium of claim 191, wherein determining one or more data packets to prefetch comprises determining one or more data packets that do not exist in the local memory that are referred to by one or more data packet prefetch hints.
- 194. The carrier medium of claim 191, wherein the method further comprises sending a request for one or more prefetch data packets.
- 195. The carrier medium of claim 191, wherein the method further comprises sending a request for one or more prefetch data packets, and receiving one or more prefetch data packets.
- 196. The carrier medium of claim 191, wherein the method further comprises sending a request for one or more prefetch data packets, receiving one or more prefetch data packets, and storing one or more prefetch data packets in the local memory.
- 197. The carrier medium of claim 191, wherein the method further comprises sending a request for one or more prefetch data packets, receiving one or more pointers to one or more requested data packets, and requesting one or more data packets referred to by one or more pointers.
- 198. The carrier medium of claim 191, wherein the method further comprises, receiving input requesting one or more data packets, and determining if one or more requested data packets exist in a local memory before sending the request for one or more data packets.
- 199. The carrier medium of claim 191, wherein the method further comprises receiving one or more data packets; and sending an acknowledgement of receipt of one or more data packets.
- 200. The carrier medium of claim 191, wherein the method further comprises receiving one or more data packets; and displaying one or more received data packets.
- 201. The carrier medium of claim 191, wherein the method further comprises receiving one or more data packets; and storing one or more received data packets in the local memory.
- 202. The carrier medium of claim 191, wherein the method further comprises receiving one or more prefetch data packets, receiving a request to access one or more data packets while receiving one or more prefetch data packets, and ceasing to receive one or more prefetch data packets in response to the request to access one or more data packets.
- 203. A method, comprising:
receiving an indication of server congestion;
receiving a reference list; and
determining a hint list based at least in part on the reference list, wherein the hint list comprises one or more data objects recommended for prefetching, and
determining a hint list size based at least in part on the indication of server congestion.
- 204. The method of claim 203, wherein determining the hint list based at least in part on the reference list comprises determining one or more data objects with a probability of a demand request greater than a threshold probability based at least in part on the reference list.
- 205. The method of claim 203, wherein the reference list comprises information regarding files previously requested by a client from which the reference list was received.
- 206. The method of claim 203, wherein determining the hint list based at least in part on the reference list comprises determining a probability of receiving a request for one or more data objects, and selecting one or more data objects having a relatively high probability of being requested for inclusion in the hint list.
- 207. The method of claim 203, further comprising sending the hint list to a client.
- 208. The method of claim 207, wherein the client is the client that sent the reference list.
- 209. The method of claim 207, wherein the client is a client other than the client that sent the reference list.
- 210. The method of claim 203, further comprising sending one or more portions of the hint list to a client in one or more separate transmissions.
- 211. The method of claim 203, further comprising sending the hint list to a client that sent the reference list in an order that causes an inline object to be prefetched before a data object that refers to the inline object.
- 212. The method of claim 203, wherein receiving the indication of server congestion comprises receiving a recommended prefetch rate.
- 213. The method of claim 203, wherein receiving the indication of server congestion comprises receiving an estimate of network congestion based on round trip time of one or more requests to a server.
- 214. The method of claim 203, wherein receiving the indication of server congestion comprises receiving a recommended hint list size.
- 215. The method of claim 203, wherein the hint list size comprises a number of data objects recommended for prefetching.
- 216. The method of claim 203, wherein the hint list size is further based at least in part on the size of one or more data objects identified on the hint list.
- 217. The method of claim 203, further comprising sending the hint list size to a client, wherein during a particular interval the client prefetches at most the number of objects allowed in the hint list size.
- 218. The method of claim 217, wherein the hint list size sent to a first client during an interval is different from the hint list size sent to a second client during the particular interval.
- 219. The method of claim 217, wherein the hint list size sent to at least one client during the particular interval is zero.
- 220. The method of claim 217, wherein the hint list size sent to a first client during the particular interval is zero, and wherein the hint list size sent to a second client during the particular interval is nonzero.
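Claims 203 and 212 through 220 (together with claim 118) describe a hint server that sizes hint lists from an indication of server congestion, possibly giving different clients different, even zero, hint list sizes during an interval. The sketch below shows one illustrative policy: shrink the total budget when the server looks congested, grow it otherwise, and split it across clients; the step sizes, bounds, and even split are assumptions.

```python
def next_hint_budget(total_budget, server_congested,
                     grow=10, min_budget=0, max_budget=1000):
    """Per-interval hint-list budget update in the spirit of claims 203,
    212-220, and 118: reduce the total hint list size across all clients when
    the server is congested, increase it when it is not.  Step sizes and
    bounds are illustrative assumptions."""
    if server_congested:
        return max(min_budget, total_budget // 2)
    return min(max_budget, total_budget + grow)

def split_budget(total_budget, num_clients):
    """Divide the interval's budget among clients; with a small budget some
    clients may receive a hint list size of zero (cf. claims 218-220)."""
    base, extra = divmod(total_budget, num_clients)
    return [base + (1 if i < extra else 0) for i in range(num_clients)]
```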
- 221. A system comprising:
a CPU in communication with a network; a memory coupled to the CPU, wherein the memory comprises program instructions executable to:
receive an indication of server congestion;
receive a reference list;
determine a hint list based at least in part on the reference list, wherein the hint list comprises one or more data objects recommended for prefetching, and
determine a hint list size based at least in part on the indication of server congestion.
- 222. The system of claim 221, wherein determining the hint list based at least in part on the reference list comprises determining one or more data objects with a probability of a demand request greater than a threshold probability based at least in part on the reference list.
- 223. The system of claim 221, wherein the reference list comprises information regarding files previously requested by a client from which the reference list was received.
- 224. The system of claim 221, wherein determining the hint list based at least in part on the reference list comprises determining a probability of receiving a request for one or more data objects, and selecting one or more data objects having a relatively high probability of being requested for inclusion in the hint list.
- 225. The system of claim 221, wherein the program instructions are further executable to send the hint list to a client in communication with the network.
- 226. The system of claim 225, wherein the client to which the hint list is sent comprises a client that sent the reference list.
- 227. The system of claim 225, wherein the client to which the hint list is sent comprises a client other than a client that sent the reference list.
- 228. The system of claim 221, wherein the program instructions are further executable to send at least a first portion of the hint list to a client in communication with the network, and send at least a second portion of the hint list to the client if the first portion did not include the entire hint list.
- 229. The system of claim 221, wherein the program instructions are further executable to send the hint list size to a client in communication with the network, and wherein during a particular interval the client prefetches at most the number of objects allowed by the hint list size.
- 230. The system of claim 229, wherein the program instructions are further executable to send a hint list size to at least two clients in communication with the network, and wherein the hint list size sent to a first client during a particular interval is different from the hint list size sent to a second client during the particular interval.
- 231. The system of claim 230, wherein the hint list size sent to at least one client during the particular interval is zero.
- 232. The system of claim 230, wherein the hint list size sent to a first client during the particular interval is zero, and wherein the hint list size sent to a second client during the particular interval is nonzero.
- 233. The system of claim 221, wherein the program instructions are further executable to send the hint list to a client that sent the reference list in an order that causes the client to request an inline object before a data object that refers to the inline object.
- 234. The system of claim 221, wherein receiving the indication of server congestion comprises receiving a recommended prefetch rate.
- 235. The system of claim 221, wherein receiving the indication of server congestion comprises receiving an estimate of server congestion based on round trip time of one or more demand requests.
- 236. The system of claim 221, wherein receiving the indication of server congestion comprises receiving a recommended hint list size.
- 237. The system of claim 221, wherein the hint list size comprises a number of data objects recommended for prefetching.
- 238. The system of claim 221, wherein the size of the hint list is further based at least in part on the size of one or more data objects identified on the hint list.
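Claims 222 and 224 above recite selecting, for the hint list, data objects whose probability of being requested exceeds a threshold, based on the reference list. The sketch below is a minimal illustration assuming a simple frequency-based probability estimate over the reference list; any probability model could be substituted, and the threshold value is arbitrary.

```python
# Hypothetical sketch of hint list determination: estimate each object's
# request probability from a reference list of previously requested files,
# then keep objects whose probability exceeds a threshold.

from collections import Counter


def determine_hint_list(reference_list: list[str],
                        threshold: float = 0.2) -> list[str]:
    """Return objects whose estimated request probability exceeds threshold,
    most likely objects first."""
    counts = Counter(reference_list)
    total = sum(counts.values())
    probabilities = {obj: n / total for obj, n in counts.items()}
    return [obj for obj, p in sorted(probabilities.items(),
                                     key=lambda item: item[1],
                                     reverse=True)
            if p > threshold]


if __name__ == "__main__":
    refs = ["index.html", "style.css", "index.html", "logo.png", "index.html"]
    print(determine_hint_list(refs, threshold=0.25))
```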
- 239. A carrier medium comprising program instructions, wherein the program instructions are computer-executable to implement a method comprising:
receiving an indication of network congestion; receiving a reference list; and determining a hint list based at least in part on the reference list, wherein the hint list comprises one or more data objects recommended for prefetching, and determining a hint list size based at least in part on the indication of network congestion.
- 240. The carrier medium of claim 239, wherein determining the hint list based at least in part on the reference list comprises determining one or more data objects with a probability of a demand request greater than a threshold probability based at least in part on the reference list.
- 241. The carrier medium of claim 239, wherein the reference list comprises information regarding files previously requested by a client from which the reference list was received.
- 242. The carrier medium of claim 239, wherein determining the hint list based at least in part on the reference list comprises determining a probability of receiving a request for one or more data objects, and selecting one or more data objects having a relatively high probability of being requested for inclusion in the hint list.
- 243. The carrier medium of claim 239, wherein the method further comprises sending the hint list to a client.
- 244. The carrier medium of claim 243, wherein the client to which the hint list is sent comprises a client that sent the reference list.
- 245. The carrier medium of claim 243, wherein the client to which the hint list is sent comprises a client other than a client that sent the reference list.
- 246. The carrier medium of claim 239, wherein the method further comprises sending at least a first portion of the hint list to a client in communication with the network, and sending at least a second portion of the hint list to the client if the first portion did not include the entire hint list.
- 247. The carrier medium of claim 239, wherein the method further comprises sending the hint list size to a client in communication with the network, wherein during a particular interval the client prefetches at most the number of objects allowed by the hint list size.
- 248. The carrier medium of claim 247, wherein the method further comprises sending a hint list size to at least two clients in communication with the network, wherein the hint list size sent to a first client during a particular interval is different from the hint list size sent to a second client during the particular interval.
- 249. The carrier medium of claim 247, wherein the hint list size sent to at least one client during the particular interval is zero.
- 250. The carrier medium of claim 247, wherein the hint list size sent to a first client during the particular interval is zero, and wherein the hint list size sent to a second client during the particular interval is nonzero.
- 251. The carrier medium of claim 239, wherein the method further comprises sending the hint list to a client that sent the reference list in an order that causes an inline object to be prefetched before a data object that refers to the inline object.
- 252. The carrier medium of claim 239, wherein receiving the indication of network congestion comprises receiving a recommended prefetch rate.
- 253. The carrier medium of claim 239, wherein receiving the indication of network congestion comprises receiving an estimate of network congestion based on round trip time of one or more data packets.
- 254. The carrier medium of claim 239, wherein receiving the indication of network congestion comprises receiving a recommended hint list size.
- 255. The carrier medium of claim 239, wherein the size of the hint list comprises a number of data objects recommended for prefetching.
- 256. The carrier medium of claim 239, wherein the size of the hint list is further based at least in part on the size of one or more data objects identified on the hint list.
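Claims 211, 233, and 251 above recite sending the hint list in an order that causes an inline object to be prefetched before the data object that refers to it. A minimal sketch follows, assuming the hint server already knows which inline objects each document references; the inline_refs mapping and object names are hypothetical inputs.

```python
# Hypothetical sketch of hint list ordering: emit inline objects (images,
# stylesheets, etc.) ahead of the documents that refer to them, so a client
# walking the list prefetches the inline objects first.

def order_hint_list(hint_list: list[str],
                    inline_refs: dict[str, list[str]]) -> list[str]:
    """Reorder hint_list so each inline object precedes any referring object."""
    ordered: list[str] = []
    seen: set[str] = set()
    for obj in hint_list:
        # Place the inline objects this document refers to first.
        for inline in inline_refs.get(obj, []):
            if inline in hint_list and inline not in seen:
                ordered.append(inline)
                seen.add(inline)
        if obj not in seen:
            ordered.append(obj)
            seen.add(obj)
    return ordered


if __name__ == "__main__":
    hints = ["page.html", "photo.jpg", "style.css"]
    refs = {"page.html": ["photo.jpg", "style.css"]}
    print(order_hint_list(hints, refs))  # inline objects precede page.html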
- 257. A method, comprising:
sending one or more requests for one or more data objects; receiving at least one data packet associated with one or more requested data objects; determining an estimate of server congestion based at least in part on a round trip time of at least one received data packet; and determining a prefetch rate appropriate for the estimated server congestion.
- 258. The method of claim 257, wherein sending one or more requests for one or more data objects comprises pinging a server.
- 259. The method of claim 257, wherein sending one or more requests for one or more data objects comprises sending two or more requests for one or more data objects, wherein the requests are sent within a specified period of time.
- 260. The method of claim 259, wherein the requests are distributed randomly within the specified period of time.
- 261. The method of claim 257, wherein determining the estimate of server congestion comprises providing an estimate of an uncongested service time for a request; and determining the estimate of server congestion based on the estimated uncongested service time and service time of the request.
- 262. The method of claim 261, wherein providing the estimate of the uncongested service time for a request comprises determining a minimum round trip time for a request.
- 263. The method of claim 261, wherein providing the estimate of the uncongested service time for a request comprises determining a minimum service time for a request over a defined period of time.
- 264. The method of claim 261, wherein providing the estimate of the uncongested service time for a request comprises determining a decaying running average of minimum service times for requests.
- 265. The method of claim 261, wherein providing the estimate of the uncongested service time for a request comprises determining a minimum service time for a request within a determined percentile of service times for requests sent.
- 266. The method of claim 257, wherein determining the estimate of server congestion comprises providing an estimate of an uncongested service time for a request; providing an estimate of a congested service time for a request; and determining the estimate of server congestion based on the estimated uncongested service time, congested service time, and service time of the request.
- 267. The method of claim 266, wherein providing the estimate of the congested service time for a request comprises determining a maximum service time for a request.
- 268. The method of claim 266, wherein providing the estimate of the congested service time for a request comprises determining a maximum service time for a request over a defined period of time.
- 269. The method of claim 266, wherein providing the estimate of the congested service time for a request comprises determining a decaying running average of maximum service times for requests.
- 270. The method of claim 266, wherein providing the estimate of the congested service time for a request comprises determining a maximum service time for a request within a determined percentile of service times for requests sent.
- 271. The method of claim 257, wherein determining the prefetch rate appropriate for the estimated server congestion comprises determining whether more than a threshold number of requests received experienced significant server delays; and decreasing the prefetch rate if more than the threshold number of requests received experienced significant server delays.
- 272. The method of claim 257, wherein determining the prefetch rate appropriate for the estimated server congestion comprises determining whether fewer than a threshold number of requests received experienced significant server delays; and increasing the prefetch rate if fewer than the threshold number of requests received experienced significant server delays.
- 273. The method of claim 257, wherein determining the prefetch rate appropriate for the estimated server congestion comprises determining whether a previous change in the prefetch rate has had sufficient time to affect server congestion.
- 274. The method of claim 257, further comprising sending a signal comprising the determined prefetch rate.
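Claims 261-270 above recite estimating server congestion from measured service times using estimates of an uncongested and, optionally, a congested service time. The sketch below tracks both estimates with decaying running averages of observed minima and maxima and normalizes the current service time between them; the decay factor, the normalization into [0.0, 1.0], and the class and method names are illustrative assumptions rather than anything the claims require.

```python
# Hypothetical sketch of server-congestion estimation from measured service
# times. The uncongested estimate follows recent minimum service times and
# the congested estimate follows recent maxima, each with a decaying
# running average.

class CongestionEstimator:
    def __init__(self, decay: float = 0.9):
        self.decay = decay        # weight given to the running estimates
        self.uncongested = None   # estimated service time under no load
        self.congested = None     # estimated service time under heavy load

    def observe(self, service_time: float) -> float:
        """Fold one measured service time into the estimates and return the
        current congestion estimate in [0.0, 1.0]."""
        if self.uncongested is None:
            self.uncongested = self.congested = service_time
        else:
            # Decaying running averages of minima and maxima.
            self.uncongested = (self.decay * self.uncongested
                                + (1 - self.decay) * min(self.uncongested, service_time))
            self.congested = (self.decay * self.congested
                              + (1 - self.decay) * max(self.congested, service_time))
        spread = max(self.congested - self.uncongested, 1e-9)
        congestion = (service_time - self.uncongested) / spread
        return min(max(congestion, 0.0), 1.0)


if __name__ == "__main__":
    estimator = CongestionEstimator()
    for t in [0.05, 0.05, 0.05, 0.30, 0.07]:
        print(round(estimator.observe(t), 2))
```

The clamped ratio evaluates to zero when a service time matches the uncongested estimate and to one when it matches or exceeds the congested estimate.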
- 275. A system comprising:
a CPU in communication with a network; a memory coupled to the CPU, wherein the memory comprises program instructions executable to: send one or more requests for one or more data objects; receive at least one data packet associated with one or more requested data objects; determine an estimate of server congestion based at least in part on a round trip time of at least one received data packet; and determine a prefetch rate appropriate for the estimated server congestion.
- 276. The system of claim 275, further comprising a server coupled to the network, wherein sending one or more requests for one or more data objects comprises pinging the server.
- 277. The system of claim 275, wherein sending one or more requests for one or more data objects comprises sending two or more requests for one or more data objects, and wherein the program instructions are executable to send the requests within a specified period of time.
- 278. The system of claim 277, wherein the program instructions are further executable to distribute the requests randomly within the specified period of time.
- 279. The system of claim 275, wherein determining the estimate of server congestion comprises providing an estimate of an uncongested service time for a request; and determining the estimate of server congestion based on the estimated uncongested service time and service time of the request.
- 280. The system of claim 279, wherein providing the estimate of the uncongested service time for a request comprises determining a minimum round trip time for a request.
- 281. The system of claim 279, wherein providing the estimate of the uncongested service time for a request comprises determining a minimum service time for a request over a defined period of time.
- 282. The system of claim 279, wherein providing the estimate of the uncongested service time for a request comprises determining a decaying running average of minimum service times for requests.
- 283. The system of claim 279, wherein providing the estimate of the uncongested service time for a request comprises determining a minimum service time for a request within a determined percentile of service times for requests sent.
- 284. The system of claim 275, wherein determining the estimate of server congestion comprises providing an estimate of an uncongested service time for a request; providing an estimate of a congested service time for a request; and determining the estimate of server congestion based on the estimated uncongested service time, congested service time, and service time of the request.
- 285. The system of claim 284, wherein providing the estimate of the congested service time for a request comprises determining a maximum service time for a request.
- 286. The system of claim 284, wherein providing the estimate of the congested service time for a request comprises determining a maximum service time for a request over a defined period of time.
- 287. The system of claim 284, wherein providing the estimate of the congested service time for a request comprises determining a decaying running average of maximum service times for requests.
- 288. The system of claim 284, wherein providing the estimate of the congested service time for a request comprises determining a maximum service time for a request within a determined percentile of service times for requests sent.
- 289. The system of claim 275, wherein determining the prefetch rate appropriate for the estimated server congestion comprises determining whether more than a threshold number of requests received experienced significant server delays; and decreasing the prefetch rate if more than the threshold number of requests received experienced significant server delays.
- 290. The system of claim 275, wherein determining the prefetch rate appropriate for the estimated server congestion comprises determining whether fewer than a threshold number of requests received experienced significant server delays; and increasing the prefetch rate if fewer than the threshold number of requests received experienced significant server delays.
- 291. The system of claim 275, wherein determining the prefetch rate appropriate for the estimated server congestion comprises determining whether a previous change in the prefetch rate has had sufficient time to affect server congestion.
- 292. The system of claim 275, wherein the program instructions are further executable to send a signal comprising the determined prefetch rate.
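Claims 271-273 above, mirrored in claims 289-291, recite decreasing the prefetch rate when more than a threshold number of requests experienced significant server delays, increasing it when fewer did, and checking whether a previous change has had sufficient time to affect server congestion. The following is a minimal sketch assuming a multiplicative decrease, an additive increase, and a fixed settling time; these constants and the class name are illustrative only.

```python
# Hypothetical sketch of prefetch-rate adaptation based on how many recent
# requests experienced significant server delays.

class PrefetchRateController:
    def __init__(self, rate: float = 10.0, delay_threshold: int = 3,
                 settle_seconds: float = 5.0):
        self.rate = rate                      # prefetch requests per second
        self.delay_threshold = delay_threshold
        self.settle_seconds = settle_seconds  # time for a change to take effect
        self.last_change = 0.0

    def adjust(self, delayed_requests: int, now: float) -> float:
        """Adjust the prefetch rate given the count of significantly delayed
        requests observed since the last adjustment."""
        # Skip adjustment until the previous change has had time to act.
        if now - self.last_change < self.settle_seconds:
            return self.rate
        if delayed_requests > self.delay_threshold:
            self.rate = self.rate / 2.0   # back off multiplicatively
            self.last_change = now
        elif delayed_requests < self.delay_threshold:
            self.rate += 1.0              # probe gently for spare capacity
            self.last_change = now
        return self.rate


if __name__ == "__main__":
    ctl = PrefetchRateController()
    print(ctl.adjust(delayed_requests=5, now=100.0))  # decrease
    print(ctl.adjust(delayed_requests=0, now=101.0))  # too soon; unchanged
    print(ctl.adjust(delayed_requests=0, now=106.0))  # increase
```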
- 293. A carrier medium comprising program instructions, wherein the program instructions are computer-executable to implement a method comprising:
sending one or more requests for one or more data objects; receiving at least one data packet associated with one or more requested data objects; determining an estimate of server congestion based at least in part on a round trip time of at least one received data packet; and determining a prefetch rate appropriate for the estimated server congestion.
- 294. The carrier medium of claim 293, wherein sending one or more requests for one or more data objects comprises pinging a server.
- 295. The carrier medium of claim 293, wherein sending one or more requests for one or more data objects comprises sending two or more requests for one or more data objects, and wherein the program instructions are further executable to send the requests within a specified period of time.
- 296. The carrier medium of claim 295, wherein the program instructions are further executable to distribute the requests randomly within the specified period of time.
- 297. The carrier medium of claim 293, wherein determining an estimate of server congestion comprises providing an estimate of an uncongested service time for a request; and determining the estimate of server congestion based on the estimated uncongested service time and service time of the request.
- 298. The carrier medium of claim 297, wherein providing the estimate of the uncongested service time for a request comprises determining a minimum round trip time for a request.
- 299. The carrier medium of claim 297, wherein providing the estimate of the uncongested service time for a request comprises determining a minimum service time for a request over a defined period of time.
- 300. The carrier medium of claim 297, wherein providing the estimate of the uncongested service time for a request comprises determining a decaying running average of minimum service times for requests.
- 301. The carrier medium of claim 297, wherein providing the estimate of the uncongested service time for a request comprises determining a minimum service time for a request within a determined percentile of service times for requests sent.
- 302. The carrier medium of claim 293, wherein determining the estimate of server congestion comprises providing an estimate of an uncongested service time for a request; providing an estimate of a congested service time for a request; and determining the estimate of server congestion based on the estimated uncongested service time, congested service time, and service time of the request.
- 303. The carrier medium of claim 302, wherein providing the estimate of the congested service time for a request comprises determining a maximum service time for a request.
- 304. The carrier medium of claim 302, wherein providing the estimate of the congested service time for a request comprises determining a maximum service time for a request over a defined period of time.
- 305. The carrier medium of claim 302, wherein providing the estimate of the congested service time for a request comprises determining a decaying running average of maximum service times for requests.
- 306. The carrier medium of claim 302, wherein providing the estimate of the congested service time for a request comprises determining a maximum service time for a request within a determined percentile of service times for requests sent.
- 307. The carrier medium of claim 293, wherein determining the prefetch rate appropriate for the estimated server congestion comprises determining whether more than a threshold number of requests received experienced significant server delays; and decreasing the prefetch rate if more than the threshold number of requests received experienced significant server delays.
- 308. The carrier medium of claim 293, wherein determining the prefetch rate appropriate for the estimated server congestion comprises determining whether fewer than a threshold number of requests received experienced significant server delays; and increasing the prefetch rate if fewer than the threshold number of requests received experienced significant server delays.
- 309. The carrier medium of claim 293, wherein determining the prefetch rate appropriate for the estimated server congestion comprises determining whether a previous change in the prefetch rate has had sufficient time to affect server congestion.
- 310. The carrier medium of claim 293, wherein the program instructions are further executable to send a signal comprising the determined prefetch rate.
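Claims 259-260 above, together with their system and carrier-medium counterparts, recite sending two or more measurement requests distributed randomly within a specified period of time and measuring their round trip times. A minimal sketch follows; probe() is a hypothetical stand-in for an actual request to the server and merely simulates a service delay, and the request count and period are arbitrary.

```python
# Hypothetical sketch: issue measurement requests at random offsets within a
# specified period and record the round trip time of each.

import random
import time


def probe(server: str) -> None:
    """Placeholder for a real request; simulates the server's service time."""
    time.sleep(random.uniform(0.01, 0.05))


def measure_round_trips(server: str, n_requests: int = 5,
                        period_seconds: float = 2.0) -> list[float]:
    """Send n_requests at random offsets within period_seconds and return
    the measured round trip times."""
    offsets = sorted(random.uniform(0.0, period_seconds) for _ in range(n_requests))
    round_trips = []
    start = time.monotonic()
    for offset in offsets:
        # Sleep until the randomly chosen send time for this probe.
        time.sleep(max(0.0, offset - (time.monotonic() - start)))
        sent = time.monotonic()
        probe(server)
        round_trips.append(time.monotonic() - sent)
    return round_trips


if __name__ == "__main__":
    print(measure_round_trips("example-server", n_requests=3, period_seconds=0.3))
```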
- 311. A method, comprising:
receiving a request for one or more data objects; determining whether the request comprises a demand request or a prefetch request; and returning a redirection data object corresponding to one or more requested data objects if the request comprises a prefetch request.
- 312. The method of claim 311, further comprising routing the request to a demand server if the request comprises a demand request.
- 313. The method of claim 311, wherein the redirection data object causes a request to be sent to a prefetch server.
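Claims 311-313 above recite distinguishing demand requests from prefetch requests, routing demand requests to a demand server, and answering prefetch requests with a redirection data object that sends the client to a prefetch server. A minimal sketch under stated assumptions: the request type is signaled by a hypothetical "X-Prefetch" header, and the demand and prefetch server addresses are placeholders; the claims do not specify how the two request types are distinguished.

```python
# Hypothetical sketch of prefetch redirection: classify each request as a
# demand request or a prefetch request and, for prefetch requests, return a
# redirection object pointing at a separate prefetch server.

from dataclasses import dataclass

PREFETCH_SERVER = "http://prefetch.example.com"   # illustrative placeholder
DEMAND_SERVER = "http://demand.example.com"       # illustrative placeholder


@dataclass
class Request:
    path: str
    headers: dict


def is_prefetch(request: Request) -> bool:
    # Assumed classification rule: a client-supplied header marks prefetches.
    return request.headers.get("X-Prefetch", "").lower() == "true"


def handle(request: Request) -> dict:
    if is_prefetch(request):
        # Return a redirection object that sends the client to the prefetch server.
        return {"status": 302, "location": f"{PREFETCH_SERVER}{request.path}"}
    # Route demand requests to the demand server.
    return {"status": 200, "proxied_to": f"{DEMAND_SERVER}{request.path}"}


if __name__ == "__main__":
    print(handle(Request("/video.mp4", {"X-Prefetch": "true"})))
    print(handle(Request("/index.html", {})))
```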
- 314. A system, comprising:
a CPU in communication with a network; a memory coupled to the CPU, wherein the memory comprises program instructions executable to:
receive a request for one or more data objects; determine whether the request comprises a demand request or a prefetch request; and return a redirection data object corresponding to one or more requested data objects if the request comprises a prefetch request.
- 315. The system of claim 314, wherein the program instructions are further executable to route the request to a demand server if the request comprises a demand request.
- 316. The system of claim 314, wherein the redirection data object causes a request to be sent to a prefetch server.
- 317. A carrier medium comprising program instructions, wherein the program instructions are computer-executable to implement a method comprising:
receiving a request for one or more data objects; determining whether the request comprises a demand request or a prefetch request; and returning a redirection data object corresponding to one or more requested data objects if the request comprises a prefetch request.
- 318. The carrier medium of claim 317, wherein the program instructions are further computer-executable to implement routing the request to a demand server if the request comprises a demand request.
- 319. The carrier medium of claim 317, wherein the redirection data object causes a request to be sent to a prefetch server.
PRIORITY CLAIM
[0001] This application claims the benefit of U.S. Provisional Patent Application Serial No. 60/398,488, entitled “METHOD AND SYSTEM FOR BACKGROUND REPLICATION OF DATA OBJECTS,” to Michael D. Dahlin, Arunkumar Venkataramani, and Ravindranath Kokku, filed Jul. 25, 2002.
Provisional Applications (1)
| Number | Date | Country |
| --- | --- | --- |
| 60398488 | Jul 2002 | US |