This application is a national stage entry under 35 U.S.C. § 371 of International Patent Application No. PCT/US2018/042233, filed on Jul. 16, 2018, and designating the United States, the entire contents of which are incorporated herein by reference.
The background description provided herein is for the purpose of generally presenting the context of the disclosure. Work of the presently named inventors, to the extent it is described in this background section, as well as aspects of the description that may not otherwise qualify as prior art at the time of filing, are neither expressly nor impliedly admitted as prior art against the present disclosure.
Cache memory is used to provide fast access to data in a computer system. When requested data is in the cache memory, time-consuming requests to slower memory may be avoided. When requested data is not in the cache, a ‘miss’ is returned and the computer system must retrieve the requested data from the slower memory system. Two factors affect cache hit and miss rates. The first is predicting what data to keep stored in the cache as cache memory limits are reached; multiple algorithms exist for determining which data to keep and which to discard. The second is the size of the cache: when more memory is allocated, more data can be stored in the cache and the miss rate is lowered. Too little cache memory increases the miss rate and may negatively affect system performance. However, system constraints may prevent simply provisioning an amount of cache memory that would guarantee maximum hit rates. There is therefore a need to correctly allocate cache memory.
A multi-tenant system is a system in which multiple programs share an execution environment, such as a cloud processing facility. A system cache may be shared among the multiple programs. Prior art systems simply monitor cache miss rates and increase the memory of a cache that has reached a threshold of misses. In contrast, a machine learning system described herein observes the memory access operations of each program and dynamically reallocates cache memory by predicting cache performance before cache miss rates increase to an unacceptable level. Rather than simply monitoring hit and miss rates, the prediction system monitors memory access requests in order to recognize access request patterns that are predictive of cache performance. Cache allocated to one tenant may be reallocated to another tenant when conditions allow. When no tenant has available cache, the total cache memory may be increased. This has the two-fold effect of ensuring that each tenant (application) has sufficient cache to operate within its service level agreement (SLA) and that the system owner does not contract for more cache memory than is required to service all tenants.
The figures depict a preferred embodiment for purposes of illustration only. One skilled in the art may readily recognize from the following discussion that alternative embodiments of the structures and methods illustrated herein may be employed without departing from the principles described herein.
Cache memory may be used to improve performance in a computing environment by storing frequently or recently used data in memory that is faster and/or more accessible than a computing system's main storage facility. For example, a cache memory may be a solid state memory while the main storage may use rotating media. In conventional server systems, cache memory may simply be a fixed size and data in the cache may be retained on a “least frequently used” basis. That is, data are categorized by how often they are accessed; those that are accessed less frequently are deleted, while more frequently accessed data may be kept in the cache. Should the recently deleted data be requested, the cache memory request will fail and that data will need to be retrieved from the main storage. There are numerous technologies associated with predicting what data to keep in a cache memory. Those technologies attempt to manage system performance by keeping the most valuable data in a fixed-size cache. Cache memory size adjustments may be made manually, for example, on a daily or weekly basis.
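For illustration only, a minimal sketch of a fixed-size cache with “least frequently used” eviction of the kind described above is shown below; the class, its interface, and the capacity handling are hypothetical and are not taken from the disclosure.

```python
# Illustrative sketch only: a fixed-size cache that evicts the least frequently
# used entry when full. Names and interface are hypothetical.
from collections import Counter

class LFUCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.store = {}             # key -> cached data
        self.frequency = Counter()  # key -> access count

    def get(self, key):
        if key not in self.store:
            return None             # cache miss: caller must go to main storage
        self.frequency[key] += 1
        return self.store[key]      # cache hit

    def put(self, key, value):
        if key not in self.store and len(self.store) >= self.capacity:
            # Evict the least frequently used entry to make room.
            victim, _ = min(self.frequency.items(), key=lambda kv: kv[1])
            del self.store[victim]
            del self.frequency[victim]
        self.store[key] = value
        self.frequency[key] += 1
```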
In large systems, such as online processing systems or web server systems, the ability to keep relevant data in cache may affect system performance to the extent that required system service levels can only be achieved by ensuring a high percentage of cache hits. However, as in any system, there are constraints on cache memory. In the simplest terms, it is generally very expensive to provision large amounts of cache memory. The system architecture of a high-performance system must balance the need for ample cache memory against the high cost of such memory.
Cloud computing is an example of an architecture in which cache memory is readily available in variable sizes. Enterprises that use a cloud computing architecture may host multiple systems in the cloud environment in a so-called multi-tenant computing environment. Of course, in implementation, cloud computing environments are simply server architectures configured and managed so that third parties can host applications and services without the need to build out their own computing facilities. Among the features of cloud computing is a pay-as-you-go model in which operations may increase and reduce computing power as needed. For example, companies that support online shopping may see increases of ten times or even hundreds of times in computing resource requirements from a normal day to Black Friday or Cyber Monday. Similar usage peaks may occur for news providers, gaming servers, airlines, etc., for events ranging from a world crisis to weather-related airport closures. Service level agreements may be in place that require certain performance levels under all conditions, further compounding the problems faced by system architects in designing and deploying high-scale computing systems.
Cache memory size has a direct influence on system performance. However, the cost variation from a small test environment cache to a premium tier cache supporting tens of thousands of connections may range from several cents per hour to several dollars per hour or more. For systems requiring hundreds of thousands of connections with high guaranteed service levels, simply provisioning a cloud environment for the maximum expected load may be an economic challenge. When an enterprise hosts multiple applications in a cloud environment, the enterprise may take advantage of economies of scale by sharing cache memory among applications, or tenants. With such multi-tenant systems comes the problem of provisioning each tenant with enough cache memory to achieve the desired service levels without over-building the system.
The processing services 106 may support web services, databases, etc. and may include load balancing, redundancy, backups, and other system support services that optimize performance of the servers and systems supporting the operation of the system owner systems. The actual architecture of the system may include a single server or multiple servers, whether collocated or geographically separated. As described in more detail below, a dynamic cache management system 108 may predict cache memory performance and manage dynamic allocation of memory among the tenants of one system owner, or even among tenants of various system owners, depending on configuration.
A network 128 may connect the cloud service to a number of client applications 110, 112, 126. In this illustration, system owner 1 is supporting two active clients, one client 110 for tenant 1 and one client 112 for tenant 2, and system owner 2 is supporting one client 126. In practice, each tenant's cache 116, 118, 120, 122 may be supporting tens or hundreds of thousands of clients, or even more in some large scale operations such as search engines, transaction processing, gaming, etc.
In the illustration of
In contrast,
The dynamic cache memory manager 108 may include a profiler 220, a pattern classifier 222, a predictor 224, and an optimizer 226. The optimizer 226 may include a request module 228 that interacts with the cloud system 102 or more specifically the memory system 206 to request changes to the overall size of a system owner's cache memory 114.
The profiler 220 may capture request data for each tenant 202 being managed and may collect data over various intervals on a fixed time basis, such as one second, or at a variable interval based, for example, on the number of samples collected. Other sampling rates or sampling patterns may be used. The pattern classifier 222 may receive the sample data from the profiler 220 and apply any of several algorithms to determine the nature of the sample. In an embodiment, a Kolmogorov-Smirnov (KS) test may categorize the sample into one of several distributions. Such distributions may include a uniform distribution, a Gaussian distribution, an exponential distribution, and a Zipf (or zeta) distribution. The distribution of samples, and the change from one distribution to another, may be predictive of cache performance.
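As a minimal sketch of how such a classification step might be implemented, a window of numeric samples could be tested against each candidate distribution with a one-sample KS test and assigned to the closest fit. The parameter estimates, the fixed Zipf exponent, and the form of the sample window below are assumptions for illustration only.

```python
# Illustrative sketch only: classify a window of numeric request samples into the
# best-fitting candidate distribution using a one-sample Kolmogorov-Smirnov test.
# Parameter estimates and the fixed Zipf exponent are assumptions for this sketch.
import numpy as np
from scipy import stats

def classify_sample(sample):
    sample = np.asarray(sample, dtype=float)
    candidates = {
        "uniform": stats.uniform(loc=sample.min(), scale=sample.max() - sample.min()),
        "gaussian": stats.norm(loc=sample.mean(), scale=sample.std(ddof=1)),
        "exponential": stats.expon(loc=0.0, scale=sample.mean()),
        "zipf": stats.zipf(a=2.0),  # zeta distribution with an assumed exponent
    }
    # A smaller KS statistic means the empirical sample is closer to the candidate.
    ks_statistics = {name: stats.kstest(sample, dist.cdf).statistic
                     for name, dist in candidates.items()}
    return min(ks_statistics, key=ks_statistics.get)
```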
The distribution data, and in some cases, additional information that may include sample size, sample duration, distribution statistics, etc., may be passed to the predictor 224. The predictor 224 may be a fully connected neural network (FCN), in one embodiment. Turning briefly to
Returning to
The output of the predictor 224, for example, a predicted service level, may be used by the optimizer 226 to adjust the cache memory size for a particular tenant 202, as illustrated in
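Although the predictor itself is described with reference to the figures, a minimal sketch of a fully connected network of the kind described above, mapping a distribution class and summary statistics to a predicted service level, is given below. The layer sizes, the feature layout, and the interpretation of the output as an expected hit rate are assumptions introduced for illustration, not details taken from the disclosure.

```python
# Illustrative sketch only: a small fully connected network mapping distribution
# features to a predicted service level (e.g., an expected cache hit rate in [0, 1]).
# Layer sizes and feature layout are assumptions, not details from the disclosure.
import torch
import torch.nn as nn

DISTRIBUTIONS = ["uniform", "gaussian", "exponential", "zipf"]

class ServiceLevelPredictor(nn.Module):
    def __init__(self, num_stats=4, hidden=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(len(DISTRIBUTIONS) + num_stats, hidden),
            nn.ReLU(),
            nn.Linear(hidden, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 1),
            nn.Sigmoid(),  # bound the predicted service level to [0, 1]
        )

    def forward(self, distribution_name, sample_stats):
        one_hot = torch.zeros(len(DISTRIBUTIONS))
        one_hot[DISTRIBUTIONS.index(distribution_name)] = 1.0
        features = torch.cat([one_hot, torch.as_tensor(sample_stats, dtype=torch.float32)])
        return self.net(features)

# Example use: predictor = ServiceLevelPredictor()
#              predicted = predictor("zipf", [0.2, 1.5, 3.0, 0.7])
```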
In the illustrated embodiment, each tenant is managed independently, so that all size changes are made without regard to the memory needs of any other client or the current overall cache memory size. However, in one case, the system owner may contract for a certain fixed amount of total cache memory space 114. When, in due course, one tenant's needs are low and its cache size may be reduced, another tenant with increasing cache memory needs may occupy that space. In many cases, however, because processing needs tend to grow rather than shrink, an increase in any tenant's cache memory may require increasing the size of the overall cache memory 114 by making such a request to the cloud system memory manager 208.
At block 304, the request for data may also be directed to the dynamic cache management system 108, and as discussed above, the request may be combined with other requests from the tenant and be classified using, for example, the pattern classifier 222 of
At block 310, an optimizer 226 may determine whether the distribution of request patterns has changed. If not, no memory configuration change is made at block 312 and execution returns to block 302. If so, execution may continue at block 314, where a determination is made whether the predicted performance is above or below the current performance. If the predicted performance is above the SLA requirement, at block 316, the cache size for that tenant may be reduced.
If the predicted performance is below the SLA requirement, or another related performance level, execution of the method 300 may continue at block 318, and the optimizer may determine how much additional cache memory 116 that tenant may require so that performance remains at or above the SLA. After determining the cache size required, the optimizer 226 may determine, at block 320, whether there is sufficient free cache memory space in the currently allocated system owner memory 114 to accommodate the increased memory size. If so, at block 324, memory may be allocated to the tenant.
If not, that is, if there is not sufficient memory to accommodate the tenant's need, then at block 322 the request module 228 may send a signal to a memory manager 208, or the like, to increase the total capacity of the system owner's cache 114 and then allocate the needed memory to the tenant's space 116. Execution may continue at block 302 from either block 322 or block 324, and the process may be repeated. In an embodiment, each tenant may have a dynamic cache management system 108, while in other embodiments the system 108 may be shared by multiple tenants.
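A hedged sketch of this decision flow (blocks 310 through 324) follows. The helper objects, the fixed resize step, and the sizing heuristic are assumptions introduced only to make the flow concrete; they are not details taken from the disclosure.

```python
# Illustrative sketch only of the optimizer decision flow of blocks 310-324.
# The tenant/owner_cache/memory_manager objects and the fixed resize step are
# assumptions introduced for this sketch.
def optimize_tenant_cache(tenant, owner_cache, memory_manager,
                          previous_distribution, current_distribution,
                          predicted_performance, sla_target, resize_step_mb=256):
    if current_distribution == previous_distribution:
        return  # Block 312: request pattern unchanged, leave the cache size alone.

    if predicted_performance >= sla_target:
        # Block 316: predicted performance exceeds the SLA, so reclaim some cache.
        reclaimed = min(resize_step_mb, tenant.cache_size_mb)
        tenant.cache_size_mb -= reclaimed
        owner_cache.allocated_mb -= reclaimed
        return

    # Blocks 318-320: predicted performance falls short; grow this tenant's cache.
    needed_mb = resize_step_mb  # sizing heuristic is an assumption for this sketch
    free_mb = owner_cache.total_mb - owner_cache.allocated_mb
    if free_mb < needed_mb:
        # Block 322: not enough free space; ask the cloud memory manager 208 to
        # increase the total capacity of the system owner's cache 114.
        memory_manager.increase_total_cache(owner_cache, needed_mb - free_mb)
    # Block 324: allocate the additional memory to the tenant's cache 116.
    owner_cache.allocated_mb += needed_mb
    tenant.cache_size_mb += needed_mb
```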
The technical effect of the dynamic cache management system 108 is real-time changes to cache memory size to achieve improved system performance by predicting future performance using a trained neural network. An optimized cache memory size benefits system operators by ensuring that performance levels are met without over-building the cache memory infrastructure. System users/tenants benefit from systems that improve guaranteed service level availability, especially during high volume periods. System users/tenants also benefit by receiving this higher service level at a minimum cost due to on-demand system reconfiguration.
The figures depict preferred embodiments for purposes of illustration only. One skilled in the art will readily recognize from the foregoing discussion that alternative embodiments of the structures and methods illustrated herein may be employed without departing from the principles described herein.
Upon reading this disclosure, those of skill in the art will appreciate still additional alternative structural and functional designs for the systems and methods described herein through the disclosed principles herein. Thus, while particular embodiments and applications have been illustrated and described, it is to be understood that the disclosed embodiments are not limited to the precise construction and components disclosed herein. Various modifications, changes and variations, which will be apparent to those skilled in the art, may be made in the arrangement, operation and details of the systems and methods disclosed herein without departing from the spirit and scope defined in any appended claims.
Filing Document | Filing Date | Country | Kind |
---|---|---|---|
PCT/US2018/042233 | 7/16/2018 | WO |
Publishing Document | Publishing Date | Country | Kind |
---|---|---|---|
WO2020/018062 | 1/23/2020 | WO | A |
Number | Name | Date | Kind |
---|---|---|---|
7188125 | Karr | Mar 2007 | B1 |
7648060 | DeBusk | Jan 2010 | B2 |
20120041914 | Tirunagari | Feb 2012 | A1 |
20120284132 | Kim | Nov 2012 | A1 |
20120310760 | Phillips | Dec 2012 | A1 |
20130013499 | Kalgi | Jan 2013 | A1 |
20130138891 | Chockler | May 2013 | A1 |
20150039462 | Shastry | Feb 2015 | A1 |
20160364419 | Stanton | Dec 2016 | A1 |
20170017576 | Cammarota | Jan 2017 | A1 |
Number | Date | Country |
---|---|---|
3109771 | Dec 2016 | EP |
Entry |
---|
International Search Report and Written Opinion of International Application No. PCT/US2018/042233, dated Nov. 19, 2018, 14 pages. |
Number | Date | Country | |
---|---|---|---|
20210326255 A1 | Oct 2021 | US |