Emerging network trends in data centers and cloud systems place increasing performance demands on a system. These demands can increase the use of resources in the system. Because the resources have finite capacity, access to and sharing of the resources need to be managed.
To provide a more complete understanding of the present disclosure and features and advantages thereof, reference is made to the following description, taken in conjunction with the accompanying figures, wherein like reference numerals represent like parts, in which:
The FIGURES of the drawings are not necessarily drawn to scale, as their dimensions can be varied considerably without departing from the scope of the present disclosure.
The following detailed description sets forth examples of apparatuses, methods, and systems relating to a system for enabling resource allocation in accordance with an embodiment of the present disclosure. Features such as structure(s), function(s), and/or characteristic(s), for example, are described with reference to one embodiment as a matter of convenience; various embodiments may be implemented with any suitable one or more of the described features.
In the following description, various aspects of the illustrative implementations will be described using terms commonly employed by those skilled in the art to convey the substance of their work to others skilled in the art. However, it will be apparent to those skilled in the art that the embodiments disclosed herein may be practiced with only some of the described aspects. For purposes of explanation, specific numbers, materials and configurations are set forth in order to provide a thorough understanding of the illustrative implementations. However, it will be apparent to one skilled in the art that the embodiments disclosed herein may be practiced without the specific details. In other instances, well-known features are omitted or simplified in order not to obscure the illustrative implementations.
In the following detailed description, reference is made to the accompanying drawings that form a part hereof wherein like numerals designate like parts throughout, and in which is shown, by way of illustration, embodiments that may be practiced. It is to be understood that other embodiments may be utilized and structural or logical changes may be made without departing from the scope of the present disclosure. Therefore, the following detailed description is not to be taken in a limiting sense. For the purposes of the present disclosure, the phrase “A and/or B” means (A), (B), or (A and B). For the purposes of the present disclosure, the phrase “A, B, and/or C” means (A), (B), (C), (A and B), (A and C), (B and C), or (A, B, and C).
Electronic device 102a can include a dynamic resources engine 106, memory 108, computer processing unit (CPU) 110, a plurality of resources 112a and 112b, a plurality of components 114a and 114b, and one or more applications 116a and 116b. Dynamic resources engine 106 can include a resource allocation engine 122 and a resource partitioning engine 124. Memory 108 can include a resource allocation table 120. CPU 110 can include one or more CPU resources 126a and 126b. In an example, CPU resource 126a may be a cache and CPU resource 126b may be memory bandwidth. Each CPU resource 126a and 126b can be divided into a plurality of partitions. For example, CPU resource 126a can be divided into a plurality of CPU partitions 130a-130d. CPU resource 126b can be divided into a plurality of CPU partitions 130e-130i. Each of resources 112a and 112b can also be divided into a plurality of resource partitions. For example, resource 112a can be divided into resource partitions 132a-132d and resource 112b can be divided into resource partitions 132e-132g.
Each of resources 112a and 112b can be a cache, memory, storage, power, host platform resource, accelerator resource, FPGA resource, PCH resource, or some other resource that may be used by a component (e.g., component 114a) or an application (e.g., application 116a). Each of CPU resources 126a and 126b can be a cache, memory bandwidth, processing thread, CPU core, or some other CPU resource that may be used by a component (e.g., component 114a) or application (e.g., application 116a).
Each component 114a and 114b may be a critical component such as a Host Linux OS, networking stack/IO, virtual switch, hypervisor, management agent, SDN agent, authentication, authorization, and accounting (AAA) component, etc. The term “critical component” includes components (both real and virtual) that are critical or necessary to the execution of an application or process. Each application 116a and 116b may be a process, function, virtual network function (VNF), etc.
Electronic device 102b can include one or more applications 116c and 116d. Each of electronic devices 102b and 102c can include similar elements (e.g., dynamic resources engine 106, memory 108, CPU 110, plurality of resources 112a and 112b, plurality of components 114a and 114b, one or more applications 116a and 116b, etc.) as electronic device 102a. In an example, one or more electronic devices 102e-102g and cloud services 118 may be in communication with network 104.
In an example, using resource partitioning engine 124, dynamic resources engine 106 can be configured to divide CPU resources (e.g., CPU resources 126a and 126b) and other resources (e.g., resources 112a and 112b) into partitions (e.g., CPU partitions 130a-130d, resource partitions 132a-132d, etc.). Using resource allocation engine 122, dynamic resources engine 106 can assign a particular partition to a component (e.g., component 114a) or an application (e.g., application 116a). Each partition can include a reserved portion and a burst portion. The reserved portion of the resource is a guaranteed region of the resource that is specifically allocated for the component or application. The burst portion of the resource is also a guaranteed amount of the resource specifically allocated for the component or application, but if the component or application is not using the allocated burst portion, another component or application can use the burst portion. Dynamic resources engine 106 can enforce the use of the reserved portion and the burst portion.
This allows resources to be allocated for components and applications and for a portion of the allocated resource to be available for other components and applications. More specifically, the reserved portion of the resource helps to ensure that a component or application will always get a guaranteed amount of the resource. The burst portion can help ensure the component or application will have guaranteed additional availability of the resource if needed and when the burst portion is not used by the component or application, the unused burst portion can be used by other components or applications.
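By way of illustration only, the reserved/burst split described above can be sketched in Python. The class name ResourcePartition and its fields are hypothetical and are not part of the disclosure; the percentages mirror the 10%/5% example used later in the description.

```python
from dataclasses import dataclass

@dataclass
class ResourcePartition:
    """One partition of a resource (e.g., cache ways or memory bandwidth).

    Amounts are fractions of the total resource (0.0 to 1.0).
    """
    owner: str       # component or application the partition is assigned to
    reserved: float  # guaranteed region, usable only by the owner
    burst: float     # also guaranteed to the owner, but lendable when idle
    owner_burst_in_use: float = 0.0  # burst currently consumed by the owner

    def guaranteed_total(self) -> float:
        """Amount the owner can always claim (reserved plus burst)."""
        return self.reserved + self.burst

    def lendable(self) -> float:
        """Burst capacity other components or applications may borrow now."""
        return self.burst - self.owner_burst_in_use

# Component C1 holds a 10% reserved portion and a 5% burst portion.
p = ResourcePartition(owner="C1", reserved=0.10, burst=0.05)
print(round(p.guaranteed_total(), 2))  # 0.15
p.owner_burst_in_use = 0.02
print(round(p.lendable(), 2))          # 0.03
```

The key property the sketch captures is that the burst portion counts toward the owner's guarantee yet only the currently idle part of it is lendable to other consumers.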
In an example, an unused burst portion can be allocated on a first-come, first-served basis, a round-robin basis, a hierarchical basis where one component or application takes priority over another component or application, or some other means of allocating the unused burst portion that can help ensure a single component or application is prevented from starving or swamping resources used by other components or applications.
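One of the allocation bases mentioned above, weighted (hierarchical) sharing, can be sketched as follows. The function name and the 2:1 weighting are illustrative assumptions; equal weights reduce the policy to an equal split.

```python
def share_unused_burst(unused: float, weights: dict) -> dict:
    """Split an unused burst amount among eligible consumers by weight.

    Weight-proportional sharing: a consumer with twice the weight receives
    twice the share, so no single consumer can swamp the spare capacity.
    """
    total = sum(weights.values())
    if total == 0:
        return {name: 0.0 for name in weights}
    return {name: unused * w / total for name, w in weights.items()}

# C1's unused 3% burst of a resource, shared by A1 and A2 at 2:1 priority.
shares = share_unused_burst(0.03, {"A1": 2, "A2": 1})
print({k: round(v, 3) for k, v in shares.items()})  # {'A1': 0.02, 'A2': 0.01}
```

A first-come, first-served or round-robin basis would replace the weight map with an arrival queue or a rotating cursor, but the starvation-prevention goal is the same: the policy bounds how much of the spare capacity any one consumer can take.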
It is to be understood that other embodiments may be utilized and structural changes may be made without departing from the scope of the present disclosure. Substantial flexibility is provided by system 100 in that any suitable arrangements and configuration may be provided without departing from the teachings of the present disclosure. Elements of
For purposes of illustrating certain example techniques of system 100, the following foundational information may be viewed as a basis from which the present disclosure may be properly explained. End users have more media and communications choices than ever before. A number of prominent technological trends are currently afoot (e.g., more computing devices, more online video services, more Internet traffic), and these trends are changing the media delivery landscape. Data centers serve a large fraction of the Internet content today, including web objects (text, graphics, Uniform Resource Locators (URLs) and scripts), downloadable objects (media files, software, documents), applications (e-commerce, portals), live streaming media, on demand streaming media, and social networks. In addition, devices and systems, such as data centers, are expected to increase performance and function. However, the increase in performance and/or function can cause bottlenecks within the resources of the data center and electronic devices in the data center.
Previous solutions include hardcoding cache and memory bandwidth resources among applications and virtual network functions (VNFs). In an example, upon deployment of a VNF, cache and memory bandwidth may be hardcoded for the VNF. This can present problems because sometimes the hardcoded cache and memory bandwidth is not enough. Other times the hardcoded cache and memory bandwidth is too much and may deprive other applications from using the unused hardcoded cache and memory bandwidth. What is needed is a system, method, apparatus, etc. that allows for allocation of resources on the system and can help prevent a single component or application from starving or swamping resources used by other components or applications.
A device to help with the allocation of resources of a system, as outlined in
Turning to the infrastructure of
In system 100, network traffic, which is inclusive of packets, frames, signals, data, etc., can be sent and received according to any suitable communication messaging protocols. Suitable communication messaging protocols can include a multi-layered scheme such as Open Systems Interconnection (OSI) model, or any derivations or variants thereof (e.g., Transmission Control Protocol/Internet Protocol (TCP/IP), user datagram protocol/IP (UDP/IP)). Messages through the network could be made in accordance with various network protocols, (e.g., Ethernet, Infiniband, OmniPath, etc.). Additionally, radio signal communications over a cellular network may also be provided in system 100. Suitable interfaces and infrastructure may be provided to enable communication with the cellular network.
The term “packet” as used herein, refers to a unit of data that can be routed between a source node and a destination node on a packet switched network. A packet includes a source network address and a destination network address. These network addresses can be Internet Protocol (IP) addresses in a TCP/IP messaging protocol. The term “data” as used herein, refers to any type of binary, numeric, voice, video, textual, or script data, or any type of source or object code, or any other suitable information in any appropriate format that may be communicated from one point to another in electronic devices and/or networks. Additionally, messages, requests, responses, and queries are forms of network traffic, and therefore, may comprise packets, frames, signals, data, etc.
In an example implementation, electronic devices 102a-102d, are meant to encompass network elements, network appliances, servers, routers, switches, gateways, bridges, load balancers, processors, modules, or any other suitable device, component, element, or object operable to exchange information in a network environment. Electronic devices 102a-102d may include any suitable hardware, software, components, modules, or objects that facilitate the operations thereof, as well as suitable interfaces for receiving, transmitting, and/or otherwise communicating data or information in a network environment. This may be inclusive of appropriate algorithms and communication protocols that allow for the effective exchange of data or information. Each of electronic devices 102a-102d may be virtual or include virtual elements.
In regards to the internal structure associated with system 100, each of electronic devices 102a-102d can include memory elements for storing information to be used in the operations outlined herein. Each of electronic devices 102a-102d may keep information in any suitable memory element (e.g., random access memory (RAM), read-only memory (ROM), erasable programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), application specific integrated circuit (ASIC), etc.), software, hardware, firmware, or in any other suitable component, device, element, or object where appropriate and based on particular needs. Any of the memory items discussed herein should be construed as being encompassed within the broad term ‘memory element.’ Moreover, the information being used, tracked, sent, or received in system 100 could be provided in any database, register, queue, table, cache, control list, or other storage structure, all of which can be referenced at any suitable timeframe. Any such storage options may also be included within the broad term ‘memory element’ as used herein.
In certain example implementations, the functions outlined herein may be implemented by logic encoded in one or more tangible media (e.g., embedded logic provided in an ASIC, digital signal processor (DSP) instructions, software (potentially inclusive of object code and source code) to be executed by a processor, or other similar machine, etc.), which may be inclusive of non-transitory computer-readable media. In some of these instances, memory elements can store data used for the operations described herein. This includes the memory elements being able to store software, logic, code, or processor instructions that are executed to carry out the activities described herein.
In an example implementation, elements of system 100, such as electronic devices 102a-102d may include software modules (e.g., dynamic resources engine 106, resource allocation engine 122, resource partitioning engine 124, etc.) to achieve, or to foster, operations as outlined herein. These modules may be suitably combined in any appropriate manner, which may be based on particular configuration and/or provisioning needs. In example embodiments, such operations may be carried out by hardware, implemented externally to these elements, or included in some other network device to achieve the intended functionality. Furthermore, the modules can be implemented as software, hardware, firmware, or any suitable combination thereof. These elements may also include software (or reciprocating software) that can coordinate with other network elements in order to achieve the operations, as outlined herein.
Additionally, each of electronic devices 102a-102d may include a processor that can execute software or an algorithm to perform activities as discussed herein. A processor can execute any type of instructions associated with the data to achieve the operations detailed herein. In one example, the processors could transform an element or an article (e.g., data) from one state or thing to another state or thing. In another example, the activities outlined herein may be implemented with fixed logic or programmable logic (e.g., software/computer instructions executed by a processor) and the elements identified herein could be some type of a programmable processor, programmable digital logic (e.g., a field programmable gate array (FPGA), an erasable programmable read-only memory (EPROM), an electrically erasable programmable read-only memory (EEPROM)) or an ASIC that includes digital logic, software, code, electronic instructions, or any suitable combination thereof. Any of the potential processing elements, modules, and machines described herein should be construed as being encompassed within the broad term ‘processor.’
Turning to
For example, as illustrated in
Turning to
As illustrated in
Turning to
Turning to
Each reserved column indicates how a particular partition of a resource is allocated to a component or application. For example, with reference to resource R1 (e.g., CPU resource 126a), C1 (e.g., component 114a) may be allocated a 0.1 (e.g., 10 percent (%)) reserve of resource R1 while A1 (e.g., application 116a) may be allocated a 0.20 (e.g., 20%) reserve of resource R1. Also, C1 may be allocated a 0.05 (e.g., 5%) burst of resource R1 while A1 may be allocated a 0.04 (e.g., 4%) burst of resource R1.
In an example, each burst column (e.g., burst C1 column 146, burst A1 column 150, and burst A2 column 154) can include data that indicates how the burst portion of the partition of the resource is to be shared. More specifically, with respect to resource R2, burst C1 column 146 indicates that the burst portion can be shared by C2 (e.g., component 114b), A1 (e.g., application 116a), and A2 (e.g., application 116b). C2, A1, and A2 may share the burst portion of resource R2 allocated to C1 on a first-come, first-served basis, a round-robin basis, an equally shared basis, or some other method that allows C2, A1, and A2 to share the burst portion of resource R2 allocated to C1. In another specific example, with respect to resource R1, burst A1 column 150 indicates that the burst portion can be shared by C1 and A2 but C1 and A2 can be weighted such that one has priority over the other. For example, as illustrated in
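A minimal sketch of a resource allocation table of this kind, assuming a nested-dictionary layout: the specific reserve/burst values and sharing weights below are illustrative and are not taken from the disclosure's figures.

```python
# Resource allocation table: per resource, per owner, the reserved and
# burst fractions plus the weighted consumers of any unused burst.
allocation_table = {
    "R1": {  # e.g., a CPU cache
        "C1": {"reserved": 0.10, "burst": 0.05,
               "burst_shared_by": {"A1": 2, "A2": 1}},
        "A1": {"reserved": 0.20, "burst": 0.04,
               "burst_shared_by": {"C1": 2, "A2": 1}},  # C1 weighted over A2
    },
    "R2": {  # e.g., memory bandwidth
        "C1": {"reserved": 0.15, "burst": 0.05,
               "burst_shared_by": {"C2": 1, "A1": 1, "A2": 1}},
    },
}

def lookup(table, resource, owner):
    """Return the (reserved, burst) allocation for an owner on a resource."""
    entry = table[resource][owner]
    return entry["reserved"], entry["burst"]

print(lookup(allocation_table, "R1", "C1"))  # (0.1, 0.05)
```

Each inner "burst_shared_by" map plays the role of a burst column: it lists which other components or applications may use the owner's idle burst portion and with what relative weight.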
Turning to
Turning to
Turning to
If the predetermined condition did occur, then the system determines if the reserved portion and/or the burst portion were underutilized or overutilized, as in 806. If the reserved portion and/or the burst portion were underutilized or overutilized, then the underutilized or overutilized reserved portion and/or burst portion of the resource are reallocated, as in 808. If the reserved portion and/or the burst portion were not underutilized or overutilized, then the process ends. In an example, the predetermined condition can be reset and the process can start again at 802 where the reserved portion and the burst portion are monitored.
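The reallocation decision in the flow above can be sketched as follows. The utilization thresholds, step size, and function name are illustrative assumptions, not values taken from the disclosure.

```python
def reallocate_if_needed(usage, reserved, burst, low=0.5, high=0.95, step=0.01):
    """Sketch of the under/over-utilization check and reallocation step.

    `usage` is the owner's observed consumption as a fraction of the whole
    resource. Underutilized partitions shrink; overutilized ones grow;
    otherwise the partition is left unchanged and the process ends.
    """
    ceiling = reserved + burst
    utilization = usage / ceiling if ceiling else 0.0
    if utilization < low:           # underutilized: release some capacity
        return max(reserved - step, 0.0), burst
    if utilization > high:          # overutilized: grant more capacity
        return reserved + step, burst
    return reserved, burst          # within bounds: leave the partition alone

# Owner uses only 5% of the resource against a 15% allocation: shrink it.
new_reserved, new_burst = reallocate_if_needed(usage=0.05, reserved=0.10, burst=0.05)
print(round(new_reserved, 2), new_burst)  # 0.09 0.05
```

In a running system this check would fire when the predetermined condition is satisfied (e.g., a timer expiring or a usage threshold being crossed), then the condition would be reset and monitoring would resume, matching the loop back to 802.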
Turning to
Turning to
It is also important to note that the operations in the preceding flow diagrams (i.e.,
Although the present disclosure has been described in detail with reference to particular arrangements and configurations, these example configurations and arrangements may be changed significantly without departing from the scope of the present disclosure. Moreover, certain components may be combined, separated, eliminated, or added based on particular needs and implementations. Additionally, although system 100 has been illustrated with reference to particular elements and operations that facilitate the communication process, these elements and operations may be replaced by any suitable architecture, protocols, and/or processes that achieve the intended functionality of system 100.
Numerous other changes, substitutions, variations, alterations, and modifications may be ascertained to one skilled in the art and it is intended that the present disclosure encompass all such changes, substitutions, variations, alterations, and modifications as falling within the scope of the appended claims. In order to assist the United States Patent and Trademark Office (USPTO) and, additionally, any readers of any patent issued on this application in interpreting the claims appended hereto, Applicant wishes to note that the Applicant: (a) does not intend any of the appended claims to invoke paragraph six (6) of 35 U.S.C. section 112 as it exists on the date of the filing hereof unless the words “means for” or “step for” are specifically used in the particular claims; and (b) does not intend, by any statement in the specification, to limit this disclosure in any way that is not otherwise reflected in the appended claims.
Example C1 is at least one machine readable storage medium having one or more instructions that when executed by at least one processor, cause the at least one processor to partition a resource into a plurality of partitions and allocate a guaranteed amount of each of the plurality of partitions for a specific component or application, where a portion of the guaranteed amount not being used by the specific component or application is allocated as a burst portion for use by any other component.
In Example C2, the subject matter of Example C1 can optionally include where the use of the allocated burst portion is shared by the other components and/or applications in a relatively equal manner.
In Example C3, the subject matter of any one of Examples C1-C2 can optionally include where use of the allocated burst portion is shared using a weighted system, where the other components and/or applications are weighted based on priority.
In Example C4, the subject matter of any one of Examples C1-C3 can optionally include where the one or more instructions further cause the at least one processor to reallocate the guaranteed amount of at least a portion of the plurality of partitions.
In Example C5, the subject matter of any one of Examples C1-C4 can optionally include where at least two of the plurality of partitions are not equal in size.
In Example C6, the subject matter of any one of Examples C1-C5 can optionally include where the one or more instructions further cause the at least one processor to prevent at least one of the other components and/or applications from using the allocated burst portion.
In Example C7, the subject matter of any one of Examples C1-C6 can optionally include where the specific component or application is a critical component or critical application.
In Example A1, an electronic device can include memory, a dynamic resources engine, and at least one processor. The at least one processor is configured to cause the dynamic resources engine to partition a resource into a plurality of partitions, allocate a reserved portion and a corresponding burst portion in each of the plurality of partitions, where each of the allocated reserved portions and corresponding burst portions are reserved for a specific component or application, where any part of the allocated burst portion not being used by the specific component or application can be used by other components and/or applications, create a resource allocation table, where the resource allocation table includes a list of the allocated reserved portion and corresponding burst portion for each of the plurality of partitions, and store the resource allocation table in the memory.
In Example A2, the subject matter of Example A1 can optionally include where use of the allocated burst portion not being used by the specific component or application is shared by the other components and/or applications in a relatively equal manner.
In Example A3, the subject matter of any one of Examples A1-A2 can optionally include where use of the allocated burst portion not being used by the specific component or application is shared using a weighted system, where the other components and/or applications are weighted based on priority.
In Example A4, the subject matter of any one of Examples A1-A3 can optionally include where the at least one processor is further configured to cause the dynamic resources engine to reallocate at least one of the reserved burst portions.
In Example A5, the subject matter of any one of Examples A1-A4 can optionally include where at least two of the plurality of partitions are not equal in size.
Example M1 is a method including partitioning a resource into a plurality of partitions and allocating a guaranteed amount of each of the plurality of partitions for a specific component or application, wherein a portion of the guaranteed amount not being used by the specific component or application is allocated as a burst portion for use by any other component.
In Example M2, the subject matter of Example M1 can optionally include where use of the allocated burst portion not being used by the specific component or application is shared by the other components and/or applications in a relatively equal manner.
In Example M3, the subject matter of any one of the Examples M1-M2 can optionally include where use of the allocated burst portion not being used by the specific component or application is shared using a weighted system, where the other components and/or applications are weighted based on priority.
In Example M4, the subject matter of any one of the Examples M1-M3 can optionally include reallocating the guaranteed amount of at least a portion of the plurality of partitions.
In Example M5, the subject matter of any one of the Examples M1-M4 can optionally include where at least two of the plurality of partitions are not equal in size.
In Example M6, the subject matter of any one of Examples M1-M5 can optionally include preventing at least one of the other components and/or applications from using the allocated burst portion.
Example S1 is a system for resource allocation. The system can include memory, one or more processors, and a dynamic resources engine. The dynamic resources engine is configured to partition a resource into a plurality of partitions and allocate a guaranteed amount of each of the plurality of partitions for a specific component or application, wherein a portion of the guaranteed amount not being used by the specific component or application is allocated as a burst portion for use by any other component.
In Example S2, the subject matter of Example S1 can optionally include where use of the allocated burst portion is shared by the other components and/or applications in a relatively equal manner.
In Example S3, the subject matter of any one of the Examples S1-S2 can optionally include where use of the allocated burst portion is shared using a weighted system, where the other components and/or applications are weighted based on priority.
In Example S4, the subject matter of any one of the Examples S1-S3 can optionally include where the dynamic resources engine is further configured to reallocate the guaranteed amount of at least a portion of the plurality of partitions.
In Example S5, the subject matter of any one of the Examples S1-S4 can optionally include where at least two of the plurality of partitions are not equal in size.
In Example S6, the subject matter of any one of the Examples S1-S5 can optionally include where the dynamic resources engine is further configured to create a resource allocation table, where the resource allocation table includes a list of the guaranteed amount and corresponding burst portion for each of the plurality of partitions, and store the resource allocation table in the memory.
In Example S7, the subject matter of any one of the Examples S1-S6 can optionally include where the dynamic resources engine is further configured to prevent at least one of the other components and/or applications from using the allocated burst portion.
Example AA1 is an apparatus including means for partitioning a resource into a plurality of partitions and allocating a reserved portion and a corresponding burst portion in each of the plurality of partitions. Each of the allocated reserved portions and corresponding burst portions are reserved for a specific component or application, where any part of the allocated burst portion not being used by the specific component or application can be used by other components and/or applications.
In Example AA2, the subject matter of Example AA1 can optionally include where use of the allocated burst portion not being used by the specific component or application is shared by the other components and/or applications in a relatively equal manner.
In Example AA3, the subject matter of any one of Examples AA1-AA2 can optionally include where use of the allocated burst portion not being used by the specific component or application is shared using a weighted system, where the other components and/or applications are weighted based on priority.
In Example AA4, the subject matter of any one of Examples AA1-AA3 can optionally include means for reallocating at least one of the reserved burst portions.
In Example AA5, the subject matter of any one of Examples AA1-AA4 can optionally include at least two of the plurality of partitions are not equal in size.
In Example AA6, the subject matter of any one of Examples AA1-AA5 can optionally include means for preventing at least one of the other components and/or applications from using the allocated burst portion not being used by the specific component or application.
In Example AA7, the subject matter of any one of Examples AA1-AA6 can optionally include where the specific component or application is a critical component or critical application.
Example X1 is a machine-readable storage medium including machine-readable instructions to implement a method or realize an apparatus as in any one of the Examples A1-A5, AA1-AA7, or M1-M6. Example Y1 is an apparatus comprising means for performing any of the Example methods M1-M6. In Example Y2, the subject matter of Example Y1 can optionally include the means for performing the method comprising a processor and a memory. In Example Y3, the subject matter of Example Y2 can optionally include the memory comprising machine-readable instructions.