Claims
- 1. A method of programmatically distributing workload across resources of a server, comprising steps of:
determining a number of available thread pools; obtaining execution times from historical statistics of a workload on the server; and programmatically distributing the obtained execution times over the number of available thread pools.
- 2. The method according to claim 1, wherein the programmatically distributing step further comprises the steps of:
sorting the execution times; and allocating the sorted execution times over the number of available thread pools.
- 3. The method according to claim 2, wherein the sorted execution times are allocated evenly over the number of available thread pools.
- 4. The method according to claim 2, further comprising the step of determining a count of the sorted execution times, and wherein the allocating step further comprises the steps of:
dividing the count of sorted execution times by the number of available thread pools to find a value, “N”; and assigning upper bounds on execution times for each of the available thread pools, according to the sorted execution times when accessed using integer multiples of “N” as an index.
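The allocation in claim 4 can be sketched in a few lines. This is an illustrative interpretation, not the patented implementation: `compute_pool_bounds`, its argument names, and the exact index arithmetic are all assumptions; it presumes at least as many recorded times as pools.

```python
def compute_pool_bounds(sorted_times, pool_count):
    """Assign an upper bound on execution time to each thread pool.

    Divides the count of sorted execution times by the pool count to
    find N, then takes the sorted time at each integer multiple of N
    (offset to a zero-based index) as that pool's upper bound.
    """
    n = len(sorted_times) // pool_count  # claim 4's value "N"
    bounds = []
    for i in range(1, pool_count + 1):
        idx = min(i * n - 1, len(sorted_times) - 1)
        bounds.append(sorted_times[idx])
    bounds[-1] = sorted_times[-1]  # last pool covers the longest requests
    return bounds
```

With eight sorted times and four pools, N is 2 and every second time becomes a bound, so each pool is responsible for an equal share of the observed workload spectrum.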
- 5. The method according to claim 1, further comprising the steps of:
receiving at the server, at run-time, inbound requests; and assigning the inbound requests to the available thread pools according to the programmatically-distributed execution times.
- 6. The method according to claim 5, further comprising the steps of:
tracking execution time of the inbound requests as they execute at the server; and revising the execution times of the workload on the server to reflect the tracked execution times.
- 7. The method according to claim 6, further comprising the step of periodically recomputing the programmatic distribution to reflect the revised execution times.
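The tracking and revising steps of claims 6 and 7, combined with the moving averages of claim 8, suggest an exponentially weighted update per classification key. The sketch below is one plausible reading; the function name, the `history` dict, and the smoothing factor `alpha` are all hypothetical.

```python
def update_moving_average(history, key, observed_time, alpha=0.2):
    """Revise the stored execution time for a classification key using
    an exponentially weighted moving average of tracked run-times."""
    prev = history.get(key)
    if prev is None:
        history[key] = float(observed_time)  # first observation seeds the average
    else:
        history[key] = (1 - alpha) * prev + alpha * observed_time
    return history[key]
```

Periodically recomputing the distribution (claim 7) then amounts to re-sorting these revised averages and rebuilding the pool bounds.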
- 8. The method according to claim 2, wherein the sorted execution times are moving average execution times.
- 9. The method according to claim 1, wherein the execution times are maintained per request type.
- 10. The method according to claim 1, wherein the execution times are maintained per request type and parameter value.
- 11. The method according to claim 1, wherein the execution times are maintained per method name.
- 12. The method according to claim 1, wherein the execution times are maintained per method name and parameter values.
- 13. The method according to claim 1, wherein the execution times are maintained per method name and parameter names and values.
- 14. The method according to claim 1, wherein the execution times are maintained per Uniform Resource Identifier (“URI”) name and parameter values.
- 15. The method according to claim 1, wherein the execution times are maintained per processing destination.
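Claims 9 through 15 vary the granularity at which execution times are maintained: by request type, method or URI name, parameter names and values, or processing destination. A minimal sketch of forming such a classification key, with every name and the separator format being an illustrative assumption:

```python
def classification_key(name, params=None):
    """Build a classification key from a method or URI name (claims 11
    and 14) plus optional parameter names and values (claims 12-13).

    Sorting the parameters makes the key order-independent, so the same
    request always maps to the same stored execution time.
    """
    if not params:
        return name
    parts = sorted("{}={}".format(k, v) for k, v in params.items())
    return name + "?" + "&".join(parts)
```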
- 16. The method according to claim 4, further comprising the steps of:
receiving an inbound request at the server; determining a classification key of the received request; locating an average execution time for the received request, using the determined classification key; and locating a particular available thread pool where the received request will be executed by iteratively comparing the located average execution time to each of the assigned upper bounds until the compared-to assigned upper bound is greater than or equal to the located average execution time.
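The pool-selection loop of claim 16 can be sketched as a linear scan over the assigned upper bounds. Again a hedged illustration, not the claimed implementation; the fallback to the last pool for times above every bound is an added assumption.

```python
def select_pool(avg_time, bounds):
    """Locate the thread pool for a request by iteratively comparing its
    average execution time to each pool's upper bound, stopping at the
    first bound greater than or equal to that time (claim 16)."""
    for pool_index, bound in enumerate(bounds):
        if bound >= avg_time:
            return pool_index
    return len(bounds) - 1  # longer than any bound: use the slowest pool
```

For example, with bounds of [2, 4, 6, 8], a request averaging 3 time units lands in the second pool, keeping short requests from queueing behind long ones.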
- 17. The method according to claim 1, wherein the thread pools are logical thread pools.
- 18. A system for programmatically distributing inbound requests across thread pools in a multithreaded server, comprising:
means for determining a number of available thread pools; means for obtaining execution times from historical statistics of a workload on the server; means for programmatically distributing the obtained execution times over the number of available thread pools; means for receiving at the server, at run-time, inbound requests; and means for assigning the inbound requests to the available thread pools according to the programmatically-distributed execution times.
- 19. The system according to claim 18, wherein the means for programmatically distributing further comprises:
means for sorting the execution times; and means for allocating the sorted execution times over the number of available thread pools.
- 20. The system according to claim 19, further comprising means for determining a count of the sorted execution times, and wherein the means for allocating further comprises:
means for dividing the count of sorted execution times by the number of available thread pools to find a value, “N”; and means for assigning upper bounds on execution times for each of the available thread pools, according to the sorted execution times when accessed using integer multiples of “N” as an index.
- 21. The system according to claim 18, further comprising:
means for tracking execution time of the inbound requests as they execute at the server; and means for revising the execution times of the workload on the server to reflect the tracked execution times.
- 22. The system according to claim 21, further comprising means for periodically recomputing the programmatic distribution to reflect the revised execution times.
- 23. The system according to claim 19, wherein the sorted execution times are moving average execution times.
- 24. The system according to claim 18, wherein the execution times are maintained per request type.
- 25. The system according to claim 18, wherein the execution times are maintained per method name.
- 26. The system according to claim 18, wherein the execution times are maintained per Uniform Resource Identifier (“URI”) name.
- 27. The system according to claim 18, wherein the execution times are maintained per processing destination.
- 28. The system according to claim 20, further comprising:
means for receiving an inbound request at the server; means for determining a classification key of the received request; means for locating an average execution time for the received request, using the determined classification key; and means for locating a particular available thread pool where the received request will be executed by iteratively comparing the located average execution time to each of the assigned upper bounds until the compared-to assigned upper bound is greater than or equal to the located average execution time.
- 29. A computer program product for programmatically distributing workload across resources of a server, the computer program product embodied on one or more computer-readable media readable by a computing system in a computing environment and comprising:
computer-readable program code means for determining a number of available thread pools; computer-readable program code means for obtaining execution times from historical statistics of a workload on the server; and computer-readable program code means for programmatically distributing the obtained execution times over the number of available thread pools.
- 30. The computer program product according to claim 29, wherein the computer-readable program code means for programmatically distributing further comprises:
computer-readable program code means for sorting the execution times; and computer-readable program code means for allocating the sorted execution times evenly over the number of available thread pools.
- 31. The computer program product according to claim 30, further comprising computer-readable program code means for determining a count of the sorted execution times, and wherein the computer-readable program code means for allocating further comprises:
computer-readable program code means for dividing the count of sorted execution times by the number of available thread pools to find a value, “N”; and computer-readable program code means for assigning upper bounds on execution times for each of the available thread pools, according to the sorted execution times when accessed using integer multiples of “N” as an index.
- 32. The computer program product according to claim 29, further comprising:
computer-readable program code means for receiving at the server, at run-time, inbound requests; and computer-readable program code means for assigning the inbound requests to the available thread pools according to the programmatically-distributed execution times.
- 33. The computer program product according to claim 32, further comprising:
computer-readable program code means for tracking execution time of the inbound requests as they execute at the server; and computer-readable program code means for revising the execution times of the workload on the server to reflect the tracked execution times.
- 34. The computer program product according to claim 33, further comprising computer-readable program code means for periodically recomputing the programmatic distribution to reflect the revised execution times.
- 35. The computer program product according to claim 29, wherein the execution times are maintained per request type, parameter names, and parameter values.
- 36. The computer program product according to claim 29, wherein the execution times are maintained per method name and parameter values.
- 37. The computer program product according to claim 29, wherein the execution times are maintained per Uniform Resource Identifier (“URI”) name and parameter values.
- 38. The computer program product according to claim 29, wherein the execution times are maintained per processing destination.
- 39. The computer program product according to claim 31, further comprising:
computer-readable program code means for receiving an inbound request at the server; computer-readable program code means for determining a classification key of the received request; computer-readable program code means for locating an average execution time for the received request, using the determined classification key; and computer-readable program code means for locating a particular available thread pool where the received request will be executed by iteratively comparing the located average execution time to each of the assigned upper bounds until the compared-to assigned upper bound is greater than or equal to the located average execution time.
- 40. A method of doing business by programmatically distributing workload across resources of a server, comprising steps of:
programmatically monitoring operational characteristics of a workload at a server; programmatically distributing the workload across resources of the server, further comprising the steps of:
determining a number of available thread pools; obtaining execution times from historical statistics of the workload; programmatically distributing the obtained execution times over the number of available thread pools; receiving at the server, at run-time, inbound requests; and assigning the inbound requests to the available thread pools according to the programmatically-distributed execution times; and charging a fee for carrying out the programmatically monitoring and programmatically distributing steps.
RELATED INVENTION
[0001] The present invention is related to commonly-assigned U.S. Pat. No. ______ (Ser. No. ______, filed concurrently herewith), which is entitled “Dynamic Thread Pool Tuning Techniques”, and which is hereby incorporated herein by reference.