The present disclosure relates to information handling systems and, more specifically, to information handling system software applications, including microservice-based applications.
As the value and use of information continues to increase, individuals and businesses seek additional ways to process and store information. One option available to users is information handling systems. An information handling system generally processes, compiles, stores, and/or communicates information or data for business, personal, or other purposes thereby allowing users to take advantage of the value of the information. Because technology and information handling needs and requirements vary between different users or applications, information handling systems may also vary regarding what information is handled, how the information is handled, how much information is processed, stored, or communicated, and how quickly and efficiently the information may be processed, stored, or communicated. The variations in information handling systems allow for information handling systems to be general or configured for a specific user or specific use such as financial transaction processing, airline reservations, enterprise data storage, or global communications. In addition, information handling systems may include a variety of hardware and software components that may be configured to process, store, and communicate information and may include one or more computer systems, data storage systems, and networking systems.
Information handling systems may be implemented as distributed systems in which two or more information handling components coordinate to execute a function, service, or application. Edge computing is an increasingly pervasive type of distributed computing in which raw data collected by network-enabled sensors and the like is sent to one or more nearby servers, generally referred to as edge servers, which process the raw data and forward the resulting information to cloud-based compute and storage resources for analysis, forecasting, and other purposes, often aided by machine learning algorithms and other types of artificial intelligence.
Edge resources, including edge servers, may face capacity and performance constraints. In the context of microservice-based applications, such constraints may limit the number of active microservices that an edge server can support and may result in delay when a user invokes a new function, because the server must instantiate the microservices associated with that function. In addition, because it is generally difficult to predict accurately when a user might request a particular function, microservices are typically launched in a purely reactive fashion, which may decrease overall performance and degrade the user experience.
In accordance with disclosed teachings, a microservice scale up method, system, and computer-readable medium generates, accesses, or otherwise obtains information indicating dependencies between a function associated with an external API call and microservices spanned by the external API call. Managed resources are monitored and, upon detecting the function being invoked, a scaling service is launched to access the dependency information, identify the applicable microservices, and perform a scale up operation instantiating some or all of the microservices. The dependency information may be obtained by recording and analyzing traces for instances of the external API call to determine a dependency tree that indicates branches spanned by the external API call and a sequence of microservices corresponding to each branch. The microservices may be scaled up in parallel or in a modified parallel manner wherein one subgroup of the microservices is launched before another subgroup of the microservices. The API may be a RESTful API and each microservice may correspond to an internal API call.
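By way of illustration and not limitation, the overall flow just summarized might resemble the following Python sketch, in which the dependency mapping, the function name, the service names, and the scale_up routine are hypothetical placeholders rather than elements of any particular implementation:

```python
# Illustrative sketch only: DEPENDENCY_INFO, the service names, and
# scale_up() are hypothetical placeholders.
from concurrent.futures import ThreadPoolExecutor

# Dependency information mapping each user-facing function (external
# API call) to the sequence of microservices it spans.
DEPENDENCY_INFO = {
    "checkout": ["cart-svc", "payment-svc", "inventory-svc", "email-svc"],
}


def scale_up(service: str) -> None:
    """Stand-in for an orchestrator call that instantiates the service."""
    print(f"scaling up {service}")


def on_function_invoked(function: str) -> None:
    """Scaling service: on detecting an invocation, look up the spanned
    microservices and scale all of them up in parallel."""
    services = DEPENDENCY_INFO.get(function, [])
    if not services:
        return
    with ThreadPoolExecutor(max_workers=len(services)) as pool:
        list(pool.map(scale_up, services))  # wait for all scale ups


on_function_invoked("checkout")
```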
Technical advantages of the present disclosure may be readily apparent to one skilled in the art from the figures, description and claims included herein. The objects and advantages of the embodiments will be realized and achieved at least by the elements, features, and combinations particularly pointed out in the claims.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory and are not restrictive of the claims set forth in this disclosure.
A more complete understanding of the present embodiments and advantages thereof may be acquired by referring to the following description taken in conjunction with the accompanying drawings, in which like reference numbers indicate like features, and wherein:
Exemplary embodiments and their advantages are best understood by reference to the accompanying drawings and the description that follows.
For the purposes of this disclosure, an information handling system may include any instrumentality or aggregate of instrumentalities operable to compute, classify, process, transmit, receive, retrieve, originate, switch, store, display, manifest, detect, record, reproduce, handle, or utilize any form of information, intelligence, or data for business, scientific, control, entertainment, or other purposes. For example, an information handling system may be a personal computer, a personal digital assistant (PDA), a consumer electronic device, a network storage device, or any other suitable device and may vary in size, shape, performance, functionality, and price. The information handling system may include memory, one or more processing resources such as a central processing unit (“CPU”), microcontroller, or hardware or software control logic. Additional components of the information handling system may include one or more storage devices, one or more communications ports for communicating with external devices as well as various input/output (“I/O”) devices, such as a keyboard, a mouse, and a video display. The information handling system may also include one or more buses operable to transmit communication between the various hardware components.
Additionally, an information handling system may include firmware for controlling and/or communicating with, for example, hard drives, network circuitry, memory devices, I/O devices, and other peripheral devices. For example, the hypervisor and/or other components may comprise firmware. As used in this disclosure, firmware includes software embedded in an information handling system component used to perform predefined tasks. Firmware is commonly stored in non-volatile memory, or memory that does not lose stored data upon the loss of power. In certain embodiments, firmware associated with an information handling system component is stored in non-volatile memory that is accessible to one or more information handling system components. In the same or alternative embodiments, firmware associated with an information handling system component is stored in non-volatile memory that is dedicated to and comprises part of that component.
For the purposes of this disclosure, computer-readable media may include any instrumentality or aggregation of instrumentalities that may retain data and/or instructions for a period of time. Computer-readable media may include, without limitation, storage media such as a direct access storage device (e.g., a hard disk drive or floppy disk), a sequential access storage device (e.g., a tape disk drive), compact disk, CD-ROM, DVD, random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), and/or flash memory; as well as communications media such as wires, optical fibers, microwaves, radio waves, and other electromagnetic and/or optical carriers; and/or any combination of the foregoing.
For the purposes of this disclosure, information handling resources may broadly refer to any component, system, device, or apparatus of an information handling system, including without limitation processors, service processors, basic input/output systems (BIOSs), buses, memories, I/O devices and/or interfaces, storage resources, network interfaces, motherboards, and/or any other components and/or elements of an information handling system.
In the following description, details are set forth by way of example to facilitate discussion of the disclosed subject matter. It should be apparent to a person of ordinary skill in the field, however, that the disclosed embodiments are exemplary and not exhaustive of all possible embodiments.
Throughout this disclosure, a hyphenated form of a reference numeral refers to a specific instance of an element and the un-hyphenated form of the reference numeral refers to the element generically. Thus, for example, “device 12-1” refers to an instance of a device class, which may be referred to collectively as “devices 12” and any one of which may be referred to generically as “a device 12”.
As used herein, when two or more elements are referred to as “coupled” to one another, such term indicates that such two or more elements are in electronic communication or mechanical communication, including thermal and fluidic communication, as applicable, whether connected indirectly or directly, with or without intervening elements.
Referring now to the drawings, the illustrated method 100 includes a learning or acquisition phase, in which dependency information indicating the microservices spanned by an external API call is obtained, and an operational phase, in which the dependency information is used to scale up the applicable microservices upon detecting an invocation of the corresponding function. Each phase is described in further detail below.
In at least one embodiment, efficiency is achieved by scaling up at least one instance of each of the applicable microservices in parallel to reduce the overall scale up delay associated with a conventional configuration, in which microservices are activated sequentially, one at a time, as the internal API call corresponding to each span of the function is made. Other embodiments may achieve a potentially lesser, but still significant, degree of efficiency by scaling up subgroups of the microservices in parallel. For example, if a user function spans a sequence of four microservices, the scale up operation may, as an alternative to scaling up all four microservices in parallel, scale up a first subgroup, e.g., the first two microservices, in parallel and then, while the first and second microservices are executing, scale up a second subgroup, e.g., the third and fourth microservices, in parallel. In this example, performing the scale up in two parallel operations rather than one may result in little or no additional scale up delay if the time required to execute the first two microservices exceeds the time required to scale up the third and fourth microservices. The management resource may, in at least some embodiments, be configured to define one or more microservice subgroups and to perform a parallel scale up operation for each subgroup.
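By way of illustration and not limitation, the following Python sketch models the two subgroup scale up just described; the service names and the scale up and execution durations are assumptions chosen solely to illustrate the timing argument:

```python
# Illustrative sketch only: the service names and durations below are
# assumptions, not measurements from any disclosed embodiment.
import threading
import time

SCALE_UP_TIME = {"svc-1": 0.2, "svc-2": 0.2, "svc-3": 0.3, "svc-4": 0.3}
FIRST_SUBGROUP_EXEC_TIME = 0.5  # time svc-1 and svc-2 take to execute


def scale_up(service: str) -> None:
    """Stand-in for an orchestrator call that instantiates a microservice."""
    time.sleep(SCALE_UP_TIME[service])


def scale_up_parallel(subgroup: list[str]) -> None:
    """Scale up every microservice in the subgroup in parallel."""
    threads = [threading.Thread(target=scale_up, args=(s,)) for s in subgroup]
    for t in threads:
        t.start()
    for t in threads:
        t.join()


# Scale up the first subgroup and wait for it to be ready.
scale_up_parallel(["svc-1", "svc-2"])

# While the first subgroup executes, scale up the second subgroup in the
# background; it adds no delay here because 0.3 s (its scale up time) is
# less than 0.5 s (the first subgroup's execution time).
background = threading.Thread(target=scale_up_parallel,
                              args=(["svc-3", "svc-4"],))
background.start()
time.sleep(FIRST_SUBGROUP_EXEC_TIME)  # stand-in for executing svc-1/svc-2
background.join()  # returns immediately; svc-3 and svc-4 are already up
```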
Referring now to the drawings, an exemplary microservice-based application is illustrated together with the external and internal API calls associated with a user function.
In at least some embodiments, the external and internal APIs associated with the illustrated external and internal API calls are RESTful APIs.
At least some embodiments that employ RESTful APIs may leverage RESTful API tracing tools including, as an illustrative and non-limiting example, VMware Tanzu Observability software, to develop a database 210 of API tracing data.
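By way of illustration and not limitation, the sketch below suggests how recorded traces might be reduced to dependency tree branches and per-branch microservice sequences; the trace format shown is an assumption made for illustration and does not reflect the data model of Tanzu Observability or any other tracing product:

```python
# Illustrative sketch only: each recorded trace is modeled as the ordered
# sequence of microservices spanned by one instance of the external API
# call; this format is an assumption, not a real tracing tool's schema.
from collections import Counter

traces = [
    ("cart-svc", "payment-svc", "email-svc"),
    ("cart-svc", "payment-svc", "email-svc"),
    ("cart-svc", "inventory-svc"),
]

# Each distinct sequence is one branch of the dependency tree, with the
# microservices listed in the order they are spanned.
branch_counts = Counter(traces)
for branch, count in branch_counts.items():
    print(f"branch {' -> '.join(branch)} observed {count} of {len(traces)} times")
```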
The dependency tree information may include information indicative of one or more branches 252 that a user function might follow as well as the sequence of microservices 204 executed within each branch. In some embodiments, branch information may include probability information indicating the likelihood that any particular branch is followed. In these embodiments, the branch probability information may be used to define one or more microservice subgroups wherein, as discussed previously, parallel scale up operations are performed for each of two or more microservice subgroups. As an example, if the dependency tree indicates that one branch is followed substantially more often than another, the microservices in the sequence corresponding to the more likely branch may be assigned to a first subgroup that is scaled up immediately, while microservices unique to the less likely branch may be assigned to a second subgroup that is scaled up in a subsequent parallel operation.
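Continuing the illustration above, and again by way of example only, branch probability information might be used to partition the microservices into scale up subgroups as follows; the probabilities and the threshold are assumptions chosen for illustration:

```python
# Illustrative sketch only: probabilities and the threshold are assumed.
branch_probability = {
    ("cart-svc", "payment-svc", "email-svc"): 0.9,
    ("cart-svc", "inventory-svc"): 0.1,
}

THRESHOLD = 0.5  # assumed cutoff separating likely from unlikely branches

first_subgroup: set[str] = set()   # scaled up immediately, in parallel
second_subgroup: set[str] = set()  # scaled up in a later parallel operation
for branch, probability in branch_probability.items():
    if probability >= THRESHOLD:
        first_subgroup.update(branch)
    else:
        second_subgroup.update(branch)

# A microservice shared with a likely branch is already in the first
# subgroup, so only the remainder is deferred.
second_subgroup -= first_subgroup

print("scale up now:  ", sorted(first_subgroup))   # cart, email, payment
print("scale up later:", sorted(second_subgroup))  # inventory
```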
This disclosure encompasses all changes, substitutions, variations, alterations, and modifications to the example embodiments herein that a person having ordinary skill in the art would comprehend. Similarly, where appropriate, the appended claims encompass all changes, substitutions, variations, alterations, and modifications to the example embodiments herein that a person having ordinary skill in the art would comprehend. Moreover, reference in the appended claims to an apparatus or system or a component of an apparatus or system being adapted to, arranged to, capable of, configured to, enabled to, operable to, or operative to perform a particular function encompasses that apparatus, system, or component, whether or not it or that particular function is activated, turned on, or unlocked, as long as that apparatus, system, or component is so adapted, arranged, capable, configured, enabled, operable, or operative.
All examples and conditional language recited herein are intended for pedagogical objects to aid the reader in understanding the disclosure and the concepts contributed by the inventor to furthering the art, and are to be construed as being without limitation to such specifically recited examples and conditions. Although embodiments of the present disclosure have been described in detail, it should be understood that various changes, substitutions, and alterations could be made hereto without departing from the spirit and scope of the disclosure.