APPARATUS, METHOD AND STORAGE MEDIUM FOR INTEGRATED MANAGEMENT OF VIRTUALIZATION OF COMPUTER RESOURCES

Information

  • Patent Application
  • Publication Number
    20250117243
  • Date Filed
    October 09, 2024
  • Date Published
    April 10, 2025
Abstract
Disclosed herein are an apparatus, method, and storage medium for integrated management of virtualization of computer resources. The apparatus receives a user request from a user, classifies the received user request depending on virtualization models for one or more nodes, provides the classified user request to at least one interface of the virtualization models, and executes an integration manager of the virtualization models.
Description
CROSS REFERENCE TO RELATED APPLICATIONS

This application claims the benefit of Korean Patent Applications No. 10-2023-0133976, filed Oct. 10, 2023, No. 10-2024-0021432, filed Feb. 15, 2024, and No. 10-2024-0104565, filed Aug. 6, 2024, which are hereby incorporated by reference in their entireties into this application.


BACKGROUND OF THE INVENTION
1. Technical Field

The present disclosure relates generally to distributed cloud technology, and more particularly to technology for integrated management of computer resource virtualization.


2. Description of the Related Art

In current distributed edge cloud and cloud-computing environments, containerized platforms have become the main platforms for managing the lifecycle of applications. Also, when a container environment is managed and operated as a cluster, multiple nodes, rather than a single node, are operated and managed. In this case, containers are designed to be operated and managed in an integrated manner by utilizing a common container interface.


However, cloud-computing environments have provided virtual machine environments for a long time. Containers are a technology for sharing the kernel of a host, and may be used simultaneously on bare metal or on virtual machines having a kernel. Therefore, virtual machines and containers may be combined in various ways.


In the industry, existing applications that are currently under development are gradually transitioning from virtual machines to containers, or applications adopting containers are already being provided. However, there are still many applications executed on virtual machines or bare metal, and this causes the challenge of managing both infrastructures.


It is impossible to replace all virtual machine infrastructures with container infrastructures due to applications designed for a user-defined kernel, specific kernel parameter requirements, or a structure that is too complex to change to containers.


Currently, the platform most suitable for cloud computing is a platform where virtual machines and containers can reside together. Therefore, a structure and method capable of managing existing hypervisor-based virtual machines and containers together is required.


Meanwhile, U.S. Pat. No. 10,884,816, titled “Managing system resources in containers and virtual machines in a coexisting environment”, discloses a resource management method, system, and computer program for creating a dummy virtual machine (VM) in a virtual machine (VM) hypervisor for resource management, creating a dummy container in a container engine for resource management, and adding a hook to each VM.


SUMMARY OF THE INVENTION

An object of the present disclosure is to provide an integrated management method and structure for integrated management of containers and virtual machines and single-node and multi-node scale-up in a distributed cloud.


Another object of the present disclosure is to improve security and stability by isolating various applications or services from each other.


A further object of the present disclosure is to conserve resources by sharing the same underlying hardware.


Yet another object of the present disclosure is to provide a consistent method of deploying and managing applications, thereby simplifying management.


Still another object of the present disclosure is to facilitate adoption of legacy virtual machines or containers.


Still another object of the present disclosure is to provide a high-performance architecture for efficient collaboration between clusters.


Still another object of the present disclosure is to improve efficiency of containers for high-performance containers and data linkage between containers.


Still another object of the present disclosure is to configure a high-speed network for collaborative services between clusters.


Still another object of the present disclosure is to provide optimal management technology for clusters for integrated management of virtual machines and containers over interconnected networks.


In order to accomplish the above objects, an apparatus for integrated management of virtualization of computer resources according to an embodiment of the present disclosure includes a memory configured to store data and one or more computing nodes configured to use computing resources including the memory and a processor that processes the data, wherein the processor is configured to receive a user request from a user, classify the received user request depending on virtualization models for the one or more nodes, provide the classified user request to at least one interface of the virtualization models, and execute an integration manager of the virtualization models.


Here, the one or more computing nodes include a first computing node providing a first virtualization model, wherein the first virtualization model provides a container running on a kernel of an operating system (OS) of the first computing node and a virtual machine running with a hypervisor on the kernel, and wherein the integration manager performs integrated management of the virtualization models.


Here, the one or more computing nodes include a second computing node and a third computing node, the second and third computing nodes providing a second virtualization model, wherein the second computing node provides a container running on a kernel of an operating system (OS) of the second computing node, wherein the third computing node provides a virtual machine running with a hypervisor on a kernel of the third computing node, and wherein the integration manager performs integrated management of the virtualization models.


Here, the one or more computing nodes include a fourth computing node providing a third virtualization model, wherein the third virtualization model provides a container within a virtual machine, the virtual machine running with a hypervisor on a kernel of an operating system (OS) of the fourth computing node, and wherein the integration manager performs integrated management of the virtualization models.


Also, in order to accomplish the above objects, a method for integrated management of virtualization of computer resources according to an embodiment of the present disclosure includes receiving a user request from a user, classifying the received user request depending on virtualization models for one or more nodes in the computer resources, providing the classified user request to at least one interface of the virtualization models, and executing an integration manager of the virtualization models.


Here, the one or more computing nodes include a first computing node providing a first virtualization model, wherein the first virtualization model provides a container running on a kernel of an operating system (OS) of the first computing node and a virtual machine running with a hypervisor on the kernel, and wherein the integration manager performs integrated management of the virtualization models.


Here, the one or more computing nodes include a second computing node and a third computing node, the second and third computing nodes providing a second virtualization model, wherein the second computing node provides a container running on a kernel of an operating system (OS) of the second computing node, wherein the third computing node provides a virtual machine running with a hypervisor on a kernel of the third computing node, and wherein the integration manager performs integrated management of the virtualization models.


Here, the one or more computing nodes include a fourth computing node providing a third virtualization model, wherein the third virtualization model provides a container within a virtual machine, the virtual machine running with a hypervisor on a kernel of an operating system (OS) of the fourth computing node, and wherein the integration manager performs integrated management of the virtualization models.


Also, in order to accomplish the above objects, a non-transitory storage medium according to an embodiment of the present disclosure stores a computer-executable program for integrated management of virtualization of computer resources. The computer-executable program executes instructions including receiving a user request from a user, classifying the received user request depending on a type of the virtualization models, providing the classified user request to at least one interface of the virtualization models, and executing an integration manager of the virtualization models.





BRIEF DESCRIPTION OF THE DRAWINGS

The above and other objects, features, and advantages of the present disclosure will be more clearly understood from the following detailed description taken in conjunction with the accompanying drawings, in which:



FIG. 1 is a view illustrating a container-based virtualization model according to an embodiment of the present disclosure;



FIG. 2 is a view illustrating a host-based virtualization model according to an embodiment of the present disclosure;



FIG. 3 is a view illustrating a hybrid virtualization model according to an embodiment of the present disclosure;



FIG. 4 is a block diagram illustrating an integrated management structure for a virtual machine and a container of a single node in an apparatus for integrated management of virtualization of computer resources according to an embodiment of the present disclosure;



FIG. 5 is a view illustrating an example of the structure of an in-memory-based container storage system according to the present disclosure;



FIG. 6 is a view illustrating an example of the detailed structure of the in-memory-based container storage system illustrated in FIG. 5;



FIG. 7 is a view illustrating an example of the detailed structure of the in-memory container storage engine illustrated in FIG. 5;



FIG. 8 is a view illustrating an example of a method of creating a container in-memory storage according to the present disclosure;



FIG. 9 is a view illustrating an integrated management structure for a virtual machine and a container based on a single cluster in a distributed cloud system according to an embodiment of the present disclosure;



FIG. 10 is a view illustrating the structure of a VM controller for supporting an in-memory disk according to an embodiment of the present disclosure;



FIG. 11 is a view illustrating the structure of a container controller for supporting an in-memory disk according to an embodiment of the present disclosure;



FIG. 12 is a view illustrating an example of a container file system implemented in an in-memory-based container storage system according to the present disclosure;



FIG. 13 is a view illustrating an example of an image sharing environment of in-memory container storage according to the present disclosure;



FIG. 14 is a view illustrating an example of a configuration for a user sharing environment according to the present disclosure;



FIG. 15 is a view illustrating an example of the detailed structure of the in-memory container storage management module illustrated in FIG. 5;



FIG. 16 is a flowchart illustrating an example of a detailed process for data sharing management in an in-memory container storage management module according to the present disclosure;



FIG. 17 is a view illustrating an integrated management structure for a virtual machine and a container based on multiple clusters in a distributed cloud system according to an embodiment of the present disclosure;



FIG. 18 is a flowchart illustrating a method for integrated management of virtualization of computer resources according to an embodiment of the present disclosure; and



FIG. 19 is a view illustrating a computer system according to an embodiment of the present disclosure.





DESCRIPTION OF THE PREFERRED EMBODIMENTS

The present disclosure will be described in detail below with reference to the accompanying drawings. Repeated descriptions and descriptions of known functions and configurations which have been deemed to unnecessarily obscure the gist of the present disclosure will be omitted below. The embodiments of the present disclosure are intended to fully describe the present disclosure to a person having ordinary knowledge in the art to which the present disclosure pertains. Accordingly, the shapes, sizes, etc. of components in the drawings may be exaggerated in order to make the description clearer.


Throughout this specification, the terms “comprises” and/or “comprising” and “includes” and/or “including” specify the presence of stated elements but do not preclude the presence or addition of one or more other elements unless otherwise specified.


Hereinafter, a preferred embodiment of the present disclosure will be described in detail with reference to the accompanying drawings.



FIG. 1 is a view illustrating a container-based virtualization model according to an embodiment of the present disclosure.


Referring to FIG. 1, the container-based virtualization model 10 may be used to execute an application in a virtual machine of a physical machine. The container-based virtualization model 10 may be useful for applications that require a specific operating system (OS) or hardware environment independent of the OS of the physical system.


The container-based virtualization model 10 may be a common approach for using containers and virtual machines together. In the container-based virtualization model 10, a container may be executed in a separate virtual machine or across multiple virtual machines. The container-based virtualization model 10 may provide a high level of isolation and security for each container.



FIG. 2 is a view illustrating a host-based virtualization model according to an embodiment of the present disclosure.


Referring to FIG. 2, the host-based virtualization model 20 allows a container to be executed directly on a host operating system (OS) kernel and allows a virtual machine to be executed through a hypervisor on the same kernel. The host-based virtualization model may be useful where improved resource efficiency and flexibility in operations management are required.


The virtual machine uses the hardware of a host machine exclusively, which may reduce resource efficiency. However, the container provides a runtime environment that shares the hardware of the host system and includes only a portion of the host OS, which may improve resource efficiency. Therefore, when the virtual machine and the container are used together, resource efficiency and operation flexibility may be improved.


The host-based virtualization model 20 may be a common model capable of executing the container directly on the host OS. The host-based virtualization model 20 may provide significant performance improvement to containerized applications. Also, the host-based virtualization model 20 may use the virtual machine when an application requires isolation and security.



FIG. 3 is a view illustrating a hybrid virtualization model according to an embodiment of the present disclosure.


Referring to FIG. 3, the hybrid virtualization model (30, 31) may execute applications in various physical machines using a combination of containers and virtual machines. Therefore, the hybrid virtualization model (30, 31) may be useful for applications that require an infrastructure mixture that includes infrastructures without virtualization or clustering of physical systems in order to operate a distributed cloud system (e.g., a distributed cloud and edge computing).


The hybrid virtualization model (30, 31) is a more flexible model that allows a mixture of infrastructures regardless of whether a container and a virtual machine are present. In the hybrid virtualization model (30, 31), some applications may be executed through containers, whereas other applications may be executed through virtual machines or non-virtualized physical machines. The hybrid virtualization model (30, 31) may be useful for a Cloud Service Provider (CSP) that adopts legacy applications without virtualization.


An integration model for using a container and a virtual machine together may vary depending on specific requirements of the CSP.


A CSP requiring a mixture of containerized infrastructures and non-containerized infrastructures may consider the hybrid virtualization model (30, 31). A CSP requiring a high level of isolation and security may consider the container-based virtualization model 10. Also, a CSP requiring a high level of performance may consider the host-based virtualization model 20.


An apparatus for integrated management of virtualization of computer resources, corresponding to a distributed cloud system, according to an embodiment of the present disclosure may perform integrated management of a virtual machine and a container based on a single node.


The main container-side components to be managed in the integration model may include a container file system, a container engine, and a container image. The container file system is a system for storing and managing container images.


The container engine is a system for executing and managing a container using a container image.


The container image is a software package configured to execute an application for a container.


The virtual machine may be executed and managed through a hypervisor. Components managed by the hypervisor may include a virtual machine, a virtual-machine image, a host resource interface to be allocated to the virtual machine, and the like.



FIG. 4 is a block diagram illustrating an integrated management structure for a virtual machine and a container of a single node in an apparatus for integrated management of virtualization of computer resources according to an embodiment of the present disclosure.


Referring to FIG. 4, it can be seen that an integrated management structure for a virtual machine and a container based on a single node in an apparatus for integrated management of virtualization of computer resources according to an embodiment of the present disclosure is illustrated.


The apparatus for integrated management of virtualization of computer resources according to an embodiment of the present disclosure may include a virtual machine (VM)-container integration controller 110, a virtual machine management handler 120, a container management handler 130, a VM-container executor 140, and a storage management unit 150.


Here, the VM-container integration controller 110 may perform integrated management of a single node.


Here, the VM-container integration controller 110 may receive a request (command) from a user (a CSC's interface).


Here, the computing node that uses a computing resource may receive the user request through the VM-container integration controller 110.


Here, the VM-container integration controller 110 may classify the request (command) of the user depending on whether it corresponds to VM management or container management and may transmit the request to any one of the virtual machine management handler 120 and the container management handler 130.


Here, the VM-container integration controller 110 may classify the received user request depending on the type of the virtualization machine of the computing resource.
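
By way of a non-limiting illustration, the classification performed by the VM-container integration controller 110 can be sketched in Python as follows; the class and attribute names (UserRequest, kind, and the handler objects) are hypothetical and do not appear in the disclosure.

    from dataclasses import dataclass

    @dataclass
    class UserRequest:
        kind: str      # e.g., "VirtualMachineInstance" or "Container"
        payload: dict  # request body (specification of the workload)

    class IntegrationController:
        """Classifies a user request and forwards it to the matching handler."""

        def __init__(self, vm_handler, container_handler):
            self.vm_handler = vm_handler                 # cf. handler 120
            self.container_handler = container_handler   # cf. handler 130

        def dispatch(self, request: UserRequest):
            # Classify by virtualization model and forward to the matching handler.
            if request.kind == "VirtualMachineInstance":
                return self.vm_handler.handle(request)
            return self.container_handler.handle(request)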


Here, the VM-container integration controller 110 may include an API service for handling and storing the request of the user (the CSC's interface).


Here, the VM-container integration controller 110 may operate at a higher level, focusing on a software agent in the API, as opposed to the handlers, which interact directly with a VM.


Here, the VM-container integration controller 110 may continuously compare the desired state of a virtual machine instance (VMI), as defined in the VMI, with the actual state of the hypervisor in order to reconcile the two states.


Here, when there is a discrepancy between the desired state and the actual state, the VM-container integration controller 110 may take the measures necessary to reconcile them.


Here, the VM-container integration controller 110 may perform a task, such as creating, starting, stopping, updating, or deleting a VM based on a predefined VMI configuration.
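
The reconciliation behavior described above (comparing the desired VMI state with the actual hypervisor state, then creating, starting, stopping, updating, or deleting as needed) can be illustrated with the following hedged Python sketch; the hypervisor object and its state/start/stop methods are assumed interfaces, not an API defined by the disclosure.

    import time

    def reconcile(vmi, hypervisor, period=5):
        """Drive the hypervisor's actual state toward the VMI's desired state."""
        while True:
            desired = vmi["desired_state"]          # e.g., "Running" or "Stopped"
            actual = hypervisor.state(vmi["name"])  # state reported by the hypervisor
            if desired != actual:                   # discrepancy: take a corrective measure
                if desired == "Running":
                    hypervisor.start(vmi["name"])
                elif desired == "Stopped":
                    hypervisor.stop(vmi["name"])
            time.sleep(period)                      # re-check interval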


Here, the VM-container integration controller 110 may perform the VM task in collaboration with additional components for integrated management of a virtual machine and a container, such as the handlers 120 and 130 and the executor 140, rather than operating independently of them.


Here, in order to start a new VM, the VM-container integration controller 110 may send a guideline for managing the actual VM in the hypervisor to the handlers 120 and 130 while coordinating with a launcher.


Here, the VM-container integration controller 110 may facilitate the connection between a virtual machine instance (VMI) and a Kubernetes pod through pod connection processing. Accordingly, the pod may integrate virtualized workloads smoothly by interacting with a specific VM.


Here, the VM-container integration controller 110 may manage a container-based virtualization machine and a hybrid virtualization machine in an integrated manner.


The container-based virtualization machine may virtualize and provide a containerized application installed in a first virtual machine that virtualizes a computing resource.


The hybrid virtualization machine may virtualize and provide a containerized application installed in a second virtual machine that virtualizes a computing resource and/or may provide an application containerized directly on the computing resource.


Here, the VM-container integration controller 110 may receive a user request, provide the user request to a first interface of the container-based virtualization machine and a second interface of the hybrid virtualization machine, and perform integrated management of virtualization of the computing node.


Here, the VM-container integration controller 110 may classify the received user request depending on the type of the virtualization machine and provide the user request, which is classified depending on the type of the virtualization machine, to the first interface or the second interface.


Here, the container-based virtualization machine and the hybrid virtualization machine may be installed in an OS kernel in the computing node.


Here, the VM-container integration controller 110 may provide an image management function for the container-based virtualization machine or the hybrid virtualization machine depending on the image management function installed in the OS kernel in the computing node.


Here, the VM-container integration controller 110 may manage the container-based virtualization machine or the hybrid virtualization machine based on a library on an OS in the computing node or through a software daemon in the computing node.


The virtual machine and container management handlers 120 and 130 may receive a request of a CSC from the VM-container integration controller 110 and transmit the same to the virtual machine and container executor 140.


Here, the virtual machine and container management handlers 120 and 130 may receive the initial specifications of a virtual machine and a container and send a signal to the corresponding execution program to start either the virtual machine or the container.


Here, the virtual machine and container management handlers 120 and 130 may manage the lifecycle of the virtual machine and container and communication thereof with the host OS, such as network traffic forwarding.


Here, the virtual machine and container management handlers 120 and 130 may represent the virtual machine as a Virtual Machine Instance (VMI).


Here, the virtual machine and container management handlers 120 and 130 may manage Virtual Machine Instances (VMIs) in the integrated management for virtual machines and containers.


Also, the virtual machine and container management handlers 120 and 130 may perform VMI lifecycle management.


Here, when a VMI is created through an API, the virtual machine and container management handlers 120 and 130 may receive a VMI specification and execute the VM by sending a signal to an execution program, which is another component.


Here, the virtual machine and container management handlers 120 and 130 may interact with the underlying hypervisor using a library, configure the VM based on the detailed information of the VMI, and start the VM.


Here, the virtual machine and container management handlers 120 and 130 may continuously monitor the state of the VM that is being executed.


Here, the virtual machine and container management handlers 120 and 130 may receive a signal (e.g., a crash) from the VM and update the VMI state in the API based thereon.


Here, the virtual machine and container management handlers 120 and 130 may safely stop or terminate the VM when there is an instruction.


Also, the virtual machine and container management handlers 120 and 130 may bridge the controller and a VM container.


Here, the virtual machine and container management handlers 120 and 130 may serve as a communication bridge between a cluster and a guest VM.


Accordingly, the virtual machine and container management handlers 120 and 130 may use functions such as real-time migration, console access, network traffic transfer between the VM and the container, and the like.


Here, when the API is changed by the desired state of the VMI (e.g., a change in resource allocation), the virtual machine and container management handlers 120 and 130 may perform an agent update by which the change is converted into an adjustment of the VM itself and by which it is checked whether the adjustment matches the desired configuration.


Also, the virtual machine and container management handlers 120 and 130 implement their own heartbeat mechanisms, thereby detecting unresponsive nodes in the cluster.


Accordingly, the virtual machine and container management handlers 120 and 130 may identify and solve problems more quickly.
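
Such a heartbeat mechanism might look like the following minimal sketch; the interval, failure threshold, and function names are illustrative assumptions.

    import time

    HEARTBEAT_INTERVAL = 5  # seconds between expected heartbeats (assumed value)
    FAILURE_THRESHOLD = 3   # missed intervals before a node is deemed unresponsive

    last_seen: dict[str, float] = {}  # node name -> time of last heartbeat

    def record_heartbeat(node: str) -> None:
        last_seen[node] = time.monotonic()

    def unresponsive_nodes() -> list[str]:
        """Return the nodes whose heartbeats have been missing too long."""
        deadline = HEARTBEAT_INTERVAL * FAILURE_THRESHOLD
        now = time.monotonic()
        return [n for n, t in last_seen.items() if now - t > deadline]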


Also, the virtual machine and container management handlers 120 and 130 may design a security function as a single authorized component in the integrated management for virtual machines and containers.


Here, the virtual machine and container management handlers 120 and 130 may handle sensitive tasks such as VM creation and configuration that require root access.


Generally, the virtual machine and container management handlers 120 and 130, which are essential components for the integrated management for virtual machines and containers, manage the lifecycle of the VM, establish communication between the controller 110 and a guest, and perform other critical tasks, thereby ensuring smooth operation of virtualized workloads.


Here, the virtual machine and container management handlers 120 and 130 may handle VM lifecycle management and communication.


The handlers 120 and 130 and the controller 110 may manage the VM lifecycle and interact with an extensive ecosystem for integrated management of virtual machines and containers.


An abstraction level allows the controller 110 to handle an upper-level representation of a VM (VMI) in the API and allows the handlers 120 and 130 to handle lower-level interaction with the actual VM in the hypervisor.


The controller 110 manages the entire lifecycle of a VMI depending on the desired state, whereas the handlers 120 and 130 may perform a specific task, such as starting, stopping, and monitoring a physical VM.


In summary, both the controller 110 and the handlers 120 and 130 are critical components for integrated management for virtual machines and containers, and may take complementary roles in virtual machine management.


The controller 110 may reflect the state desired by the VMI and coordinate with other components, whereas the handlers 120 and 130 may manipulate the actual VM by performing a specific task on the hypervisor.


The VM-container executor 140 may use software installed in the OS kernel.


The VM-container executor 140 may include a virtual machine launcher and a container launcher.


The virtual machine launcher and the container launcher may support various types of interface methods to be executed to start a virtual machine and a container.


The virtual machine launcher and the container launcher may be used as libraries of the host operating system or as software agents.


In order to provide control groups (cgroups) and namespaces, the VM-container executor 140 may send a guideline to a controller agent through an API when a virtual machine instance (VMI) is created. According to the guideline, the controller agent may create an instance specifically for the corresponding VMI. Within this instance, the underlying container may run an execution program.


The VM-container executor 140 may provide the cgroups and namespaces required for a VM process as the primary role of a launch manager. These are the primary kernel mechanisms for isolating and controlling resources (CPU, memory, etc.) and network visibility for individual VMs.


The VM-container executor 140 may play a more intensive role in setting the initial environment of a VM.


The VM-container executor 140 may set the initial VM environment (cgroup, namespace, and configuration).
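
Setting up the cgroup part of such an initial environment can be sketched against the Linux cgroup v2 filesystem as follows; the directory naming scheme and limit values are assumptions, and namespace creation is omitted for brevity.

    import os

    CGROUP_ROOT = "/sys/fs/cgroup"  # default cgroup v2 mount point

    def prepare_vm_cgroup(vmi_name: str, cpu_max: str, mem_max: str) -> str:
        """Create a cgroup for a VM process and apply CPU/memory limits."""
        path = os.path.join(CGROUP_ROOT, f"vmi-{vmi_name}")  # hypothetical naming
        os.makedirs(path, exist_ok=True)
        with open(os.path.join(path, "cpu.max"), "w") as f:
            f.write(cpu_max)   # e.g., "200000 100000" allows 2 CPUs' worth of time
        with open(os.path.join(path, "memory.max"), "w") as f:
            f.write(mem_max)   # e.g., "4G"
        return path

    def enter_cgroup(path: str) -> None:
        """Move the calling (VM) process into the cgroup."""
        with open(os.path.join(path, "cgroup.procs"), "w") as f:
            f.write(str(os.getpid()))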


The VM-container executor 140 operates within a VM instance itself, whereas the handlers 120 and 130 may interact directly with the hypervisor and the controller 110 may supervise VMI management at a higher level in the API.


The VM-container executor 140 interacts with libvirt, whereby the execution program may manage VM creation and configuration in the underlying hypervisor (e.g., a Kernel-based Virtual Machine (KVM)) through the library. The VM-container executor 140 may define the resources and configuration of a VM using a VMI specification.
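
As a hedged illustration of such library-based interaction (assuming the libvirt-python bindings and a local KVM hypervisor, which the disclosure's mentions of libvirtd and KVM suggest but do not mandate), a VM could be defined and started as follows; this is a sketch, not the specific implementation of the executor 140.

    import libvirt  # libvirt-python bindings

    def launch_vm(domain_xml: str) -> None:
        """Define a VM from domain XML and start it on the local KVM hypervisor."""
        conn = libvirt.open("qemu:///system")  # connect to the system hypervisor
        try:
            dom = conn.defineXML(domain_xml)   # register the VM definition
            dom.create()                       # start the defined VM
            print(dom.name(), "active:", bool(dom.isActive()))
        finally:
            conn.close()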


The storage management unit 150 may use in-memory-based storage and an in-memory-based container structure in order to use data and storage between a container and a VM.


Among the usage models of a virtual machine and a container, models configured in a single node may include a host-based virtualization model and a container-based virtualization model.


In the case of a hybrid virtualization model, a virtual machine is considered a node and may be connected to another virtual machine having a container.


The storage management unit 150 includes an image manager for storing and managing data for image management for both a container and a virtual machine, and may improve the performance of the container and virtual machine by utilizing various types of high-performance storage (memory, Non-Volatile Memory express (NVMe), a Solid-State Drive (SSD), federation storage, etc.).


The storage management unit 150 may use an in-memory-based container storage system as a repository for configuring a virtual machine image or a container file system (an additional function for image management).



FIG. 5 is a view illustrating an example of the structure of an in-memory-based container storage system according to the present disclosure.


Referring to FIG. 5, the in-memory container storage system according to the present disclosure may include in-memory container storage 510, an in-memory container storage engine 520, main memory, disk storage, and remote storage.


Hereinafter, the structure and operation flow of the in-memory container storage system according to the present disclosure will be described in more detail with reference to FIG. 6.


First, a container may create in-memory container storage 610, which is storage on the main memory having nonvolatile characteristics, and configure a storage volume of the container on the in-memory container storage 610.


The container may create and operate the container storage volume, which is the volume of the file system in which the container is executed (a Docker example is /var/lib/docker), on the in-memory container storage 610. Accordingly, a container access command created in the container may be transferred to the in-memory container storage 610.
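
As a rough analogy only, a RAM-backed volume can be placed at a container storage path with the standard Linux tmpfs mechanism, as sketched below; note that tmpfs is volatile, whereas the in-memory container storage 610 described herein has nonvolatile characteristics.

    import subprocess

    def mount_memory_volume(size: str, mountpoint: str) -> None:
        """Mount a RAM-backed (tmpfs) volume at a container storage path."""
        subprocess.run(
            ["mount", "-t", "tmpfs", "-o", f"size={size}", "tmpfs", mountpoint],
            check=True,
        )

    # e.g., back the container runtime's storage directory with memory:
    # mount_memory_volume("8G", "/var/lib/docker")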


An in-memory container storage engine 620 may create in-memory container storage 610 having a single shape by unifying main memory, disk storage, and remote storage. Also, the in-memory container storage engine 620 processes a disk access command by utilizing the main memory, the disk storage, and the remote storage in an integrated manner.


Here, the in-memory container storage 610 may operate without modification by providing an interface of a standard block storage format through the in-memory container storage engine 620.


Hereinafter, the structure of the in-memory container storage engine according to the present disclosure will be described in more detail with reference to FIG. 7.


Referring to FIG. 7, the in-memory container storage engine 700 may include a storage interface module 710, a storage access distribution module 720, and a storage control module 730.


The storage interface module 710 may provide an interface of a standard block storage format and receive a disk access command created in a container. The received command may be transferred to the storage access distribution module 720.


The storage access distribution module 720 may determine whether to use main memory storage, disk storage, or remote storage in order to run a service, depending on the characteristics of the disk access command. Subsequently, the access command may be transferred to a main memory control module, a disk storage control module, and a remote storage control module included in the storage control module 730.
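
The distribution decision might be expressed as a simple dispatcher; the command attributes (latency_sensitive, is_backup) and the backing-store objects below are hypothetical names used purely for illustration.

    def route_command(cmd, main_memory, disk, remote):
        """Route a disk access command to a backing store by its characteristics."""
        if cmd.latency_sensitive:   # hot data: serve from main memory
            return main_memory.process(cmd)
        if cmd.is_backup:           # backup/synchronization traffic: remote storage
            return remote.process(cmd)
        return disk.process(cmd)    # default: local disk storage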


The storage control module 730 may include the main memory control module, the disk storage control module, the remote storage control module, a main memory disk generation module, a disk backup/restore module, and a real-time synchronization module.


The main memory control module may process a disk access command using the main memory, thereby providing high-speed access.


For example, when the main memory control module receives disk access commands, the disk access commands transferred in units of blocks may be processed to perform actual read/write operations on the main memory, which is accessible by address, through the main memory disk generation module. Accordingly, data of a virtual disk may be created and stored in the main memory.


The disk storage control module may process a virtual disk access command using the disk storage.



FIG. 8 is a view illustrating an example of a container in-memory storage creation method according to the present disclosure.


Referring to FIG. 8, a method of creating container in-memory storage 800 of a single hybrid type through integration of main memory storage 810 and disk storage 820 is illustrated.


The container in-memory storage 800 provides a standard block storage format and may be created by mapping the area of the main memory storage 810 to the front part of the storage and mapping the area of the disk storage 820 to the rear part thereof.


For example, the areas corresponding to block IDs 1 to N of the main memory storage 810 may be mapped to the areas corresponding to block IDs 1 to N of the container in-memory storage 800. Also, the areas corresponding to block IDs 1 to M of the disk storage 820 may be mapped to the areas corresponding to block IDs N+1 to N+M of the container in-memory storage 800. Here, a storage boundary for separating the area of the main memory storage 810 from the area of the disk storage 820 may be set between the block having the ID of N and the block having the ID of N+1 in the container in-memory storage 800.
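
The boundary mapping in this example can be written out directly; the sketch below assumes 1-based block IDs and the storage boundary N described above.

    def resolve_block(block_id: int, n: int) -> tuple[str, int]:
        """Map a hybrid-storage block ID to its backing store and local block ID.

        Blocks 1..N are served by main memory storage (front part);
        blocks N+1..N+M are served by disk storage (rear part).
        """
        if 1 <= block_id <= n:
            return ("main_memory", block_id)
        return ("disk", block_id - n)  # rebased to 1..M on the disk storage

    # e.g., with N = 1024: resolve_block(1024, 1024) -> ("main_memory", 1024)
    #                      resolve_block(1025, 1024) -> ("disk", 1)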



FIG. 9 is a view illustrating an integrated management structure for a virtual machine and a container based on a single cluster in a distributed cloud system according to an embodiment of the present disclosure.


Referring to FIG. 9, it can be seen that integrated management of a virtual machine and a container in a single cluster is illustrated.


In FIG. 9, a controller node 90 may serve to configure, manage, monitor, and control a cluster. The controller node 90 may share data between nodes and distribute tasks using a data repository.


The data repository may provide distributed repositories for cluster configuration sharing, a service search, and scheduler adjustment.


In a single cluster model, the controller node 90 may include a VM-container integration controller, and each node 100 corresponding to an apparatus for integrated management of virtualization of computer resources may include a VM-container control agent connected to the VM-container integration controller of the controller node 90.


A request of a CSC may be transferred to the VM-container integration controller of each node 100 through the API server of the controller node 90.


The VM-container integration controller of each node 100 may execute the request of the CSC and send the request of the CSC to virtual machine and container management handlers through the VM-container integration control agent. Each node 100 may have the same structure as the single-node model described in FIG. 4.


The VM-container integration controller of the controller node 90 may manage the entire lifecycle of virtual machines and containers in the cluster in the API server. In addition to the single-node model, the VM-container integration controller may provide scheduling and policy reflection functions.


The VM-container integration control agent is provided as a software program, and a virtual machine management handler or a container management handler may be deployed depending on whether the current system is a container-based system or a virtual-machine-based system.


For example, when the current system is a container-based system, a virtual machine controller may be deployed, whereas when it is a virtual-machine-based system, a container controller for managing each model may be deployed.


The controller is provided in the form of software, and a VM handler or a container handler may be deployed depending on whether the current system is a container-based system or a VM management system. For example, in the case of OpenStack, in which virtual machine management is a core structure, the container handler configures a container-based virtualization model that includes a container structure in a virtual machine, whereby the virtual machine and the container may be used together. In the case of a container-based structure such as Kubernetes (k8s), a handler (Custom Resource Definitions (CRD) operator) for a virtual machine is deployed, whereby it is managed as a hybrid model.



FIG. 10 is a view illustrating the structure of a VM controller for supporting an in-memory disk according to an embodiment of the present disclosure.


Referring to FIG. 10, it can be seen that the structure of a virtual machine controller that supports in-memory is illustrated.


The VM controller is deployed on a container-based management platform, and may interface with a VM-container integration manager and support in-memory disk management through an in-memory disk manager.


The VM controller may be connected to a virtual machine through libvirtd on a hypervisor (KVM).


Fundamentally, hardware information of an in-memory-based virtual machine may be collected through a hardware profile collector, which collects information from a kernel-level hardware profile.


Hypervisor information and operation information may be collected through a virtual machine information collector (VM Info Collector) having a libvirtd interface. The collected information may be used by a performance generator to generate real-time data on the utilization of each resource of the virtual machine and the utilization of the platform.


Also, in order to provide the real-time data to a management system, the controller may store the data in shared memory in real time, transmit the data to a master through a management interface, and store the data in a repository (etcd).


The in-memory disk manager may serve to manage an in-memory-based virtual machine and perform an image loading function for memory operation when a system boots or when an in-memory virtual machine is created. Also, the controller may perform processing of control information of the platform associated with virtual machine control and management through a command executor (Cmd Executer) in order to execute each command transmitted from the API server.



FIG. 11 is a view illustrating the structure of a container controller for supporting an in-memory disk according to an embodiment of the present disclosure.


Referring to FIG. 11, it can be seen that the structure of a container controller for supporting in-memory is illustrated. The controller is deployed on a container-based management platform, and may interface with a VM-container integration manager and provide a container file system on in-memory storage by means of an in-memory disk.


Fundamentally, hardware information of a container may be collected through a hardware profile collector, which collects information from a kernel-level hardware profile.


Container-related information, such as an image, a volume, and domain information related to a container, may be collected through a container information collector (Container Info Collector).


The collected information may be used by a performance generator to generate real-time data on the utilization of each resource of the container and the utilization of the platform.


Also, in order to provide the real-time data to a management system, the controller may store the data in shared memory in real time, transmit the data to a master through a management interface, and store the data in a repository (etcd).


An in-memory disk manager may serve to manage an in-memory-based container and perform an image loading function for memory operation when a system boots or when an in-memory container is created.


Also, the controller may control the platform associated with container control and management through a container conductor in order to execute the container infrastructure management API transmitted from the API server. The container conductor may provide a command for applying to the existing virtual-machine-based management system in the form of a template. Accordingly, the container may be run and controlled in a lower worker node.



FIG. 12 is a view illustrating an example of a container file system implemented in in-memory storage according to the present disclosure.


Referring to FIG. 12, the file system used by a container according to the present disclosure may be configured on in-memory container storage.


According to the present disclosure, the underlying file system of a container may be run in main memory in order to run the container in the main memory. For example, the container may provide the files required by a user individually by utilizing the unifying file system function included in the kernel of an existing Linux environment.


Here, the unifying file system function is the concept of mounting multiple file systems on a single mount point, and all directory entries may be unified and processed on a virtual file system (VFS) layer, rather than creating a new file system type. Accordingly, using the unifying file system function, the directory entries of the lower file systems may be merged with directory entries of the upper file system, whereby a logical combination of all of the mounted file systems may be created. Therefore, management of all of the mounted file systems shared in the system and searching for files may be locally performed, and file management for full sharing may be facilitated.
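
The layered arrangement described here corresponds to a union mount; as one possible realization (Linux OverlayFS, used purely for illustration with placeholder paths), a read-only image layer, a writable container layer, and a merged access area can be combined as follows.

    import subprocess

    def mount_unified_fs(image_layer: str, container_layer: str,
                         work_dir: str, merged: str) -> None:
        """Merge a read-only image layer and a writable container layer (OverlayFS)."""
        opts = f"lowerdir={image_layer},upperdir={container_layer},workdir={work_dir}"
        subprocess.run(
            ["mount", "-t", "overlay", "overlay", "-o", opts, merged],
            check=True,
        )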


In other words, the container file system according to the present disclosure may be configured in the form of layers as a unifying file system.


The respective layers categorized into a merged access area 930, a container layer 920, and an image layer 910 may operate by creating and mounting a specific directory in the in-memory container storage.


The container layer 920 is a writeable layer and is created on the top layer such that each container can have its own state. Here, after a container is created, all modification tasks may be performed in the container layer 920. Also, read/write operations in the container layer 920 may be performed at high speeds because the read/write operations are performed on memory. Also, for the efficiency of file management, the container layer 920 may include information about a difference between an actual image and a container image.


The image layer 910 is a read-only layer, and may be shared with other containers. Here, an image shared with other layers may be operated as multiple images in the container layer 920.


That is, the image layer 910 may improve the efficiency by sharing a container image with multiple different systems.


For example, as illustrated in FIG. 12, a container image of the image layer 910 needs to be pulled from a public repository (e.g., GitHub) when a container is deployed. Here, the image used in the container system may be stored locally or fetched in advance in order to ensure performance, whereby efficient operation may be achieved.


The present disclosure proposes a method of storing an already pulled image in shared storage in order to reuse the image. As described above, many images of the image layer 910 are present in the in-memory container storage; the container images of the entire system are backed up and stored in disk storage or remote storage, and these container images may be added to the image layer 910. Accordingly, the container images of the entire system may also be used in the container layer 920, and the images may also be provided continuously through the merged access area 930.


The merged access area 930 may include link information of the layers such that all file systems of the container layer 920 and the image layer 910 are accessible, and the link information may be shared with a user so as to enable file access.



FIG. 13 is a view illustrating an example of an image sharing environment of in-memory container storage according to the present disclosure.


Referring to FIG. 13, shared storage 1000 may be used to provide shared data in in-memory container storage according to the present disclosure.


For example, the shared storage 1000 may be network file storage (a storage area network (SAN), network-attached storage (NAS), etc.) or storage connected to a local disk.


Referring to FIG. 13, the image sharing environment according to the present disclosure may have a structure that provides a user with a container image stored in the shared storage 1000 in response to a request of the user.


For example, a sharing management function may be provided through the container file system layer management module of the in-memory container storage management module illustrated in FIGS. 5 and 7, and shared data 1010 may be provided to a user by individually configuring the area for file sharing and providing the same to the user.


Hereinafter, a process by which a node having in-memory container storage for providing shared data, as shown in FIG. 13, shares data will be described in detail with reference to FIG. 14.



FIG. 14 illustrates a user (tenant) access method that can improve security when data is shared according to the present disclosure: rather than sharing all data, the data to be shared is separated and provided according to the group that will use it.


First, in response to a request from the user (tenant), the directory of the user (/sharedData/tenant) may be created in the in-memory container storage 1110a of node A, and a directory (diff) may be created and mapped under the directory of the user (/sharedData/tenant) in the container layer 1111a (upper directory). Here, deduplicated data may be used as the data of the user for file system management. The created diff directory may correspond to the container layer and correspond to data stored by the user by accessing or editing/modifying a file. Also, a work directory may be created and mapped under the directory of the user. The work directory may correspond to the user data storage area of the container layer.


Also, a lower directory (lowerdir2=/sharedData/base/File1-Link, File2-Link, . . . , FileN-Link/) located at the lowest position in the image layer 1112a is a management point that stores links to all files in shared storage 1120a, and may be set to (/sharedData/base . . . ).


In the image layer 1112a, the lower directory (lowerdir2=/sharedData/base/File1-Link, File2-Link, . . . , FileN-Link/) may be exposed to the management system such that the user is able to select a necessary file, and another lower directory (lowerdir1=/sharedData/tenantA/base/File1-Link, File2-Link) created in the image layer 1112a may be associated with the upper directory, whereby only the link information for the file selected by the user may be deployed.


Through this process, the user may view only the files selected by the user through the lower system.


Accordingly, the user may receive the file through the user directory shared with the user, and the lower directories may always remain unchanged. In other words, the lower directories are used as read-only, which may efficiently prevent write conflicts when multiple users share data. When a change is made to a file in a lower directory, the change is written to the upper directory, whereby all of the shared files may be efficiently managed.



FIG. 15 is a view illustrating an example of the detailed structure of the in-memory container storage management module illustrated in FIG. 5.


Referring to FIG. 15, the in-memory container storage management module 1200 according to the present disclosure may include a container file system layer management module, an in-memory container storage generation management module, an in-memory container storage sharing management module, and an in-memory container storage engine management module.


The container file system layer management module may monitor the current state and running state of a container file system. Also, the container file system layer management module may manage the creation and state of the container system when in-memory container storage is used.


The in-memory container storage generation management module may create in-memory container storage when a container is configured in the form of in-memory in response to a request of a user. Here, when the in-memory container storage has been created, the container file system layer management module creates a container file system of the system.


The in-memory container storage sharing management module may create a shared file system between storage units to share an image layer in response to a request of a user and perform a task of synchronizing the shared file system. Here, link information in the image layer may be merged into a single system and synchronized.


The in-memory container storage engine management module may create and run an in-memory container storage driver of the system and monitor the state thereof.


Hereinafter, a process by which an in-memory container storage management module according to the present disclosure performs data sharing management will be described in detail with reference to FIG. 16.


First, a user (tenant) may access a system and request and select file sharing at steps S1302 and S1304.


Here, the user may be classified as a user to be provided with file sharing or a provider to provide file sharing.


Accordingly, whether the user is a provider to provide file sharing may be determined at step S1306.


When it is determined at step S1306 that the user is a user to be provided with file sharing, whether the user is the first user may be determined at step S1308.


When it is determined at step S1308 that the user is the first user, a user directory is created at step S1310, relevant directories are created at step S1312, and the entire system environment may be mounted at step S1314.


Subsequently, after moving to the lower directory of the user directory at step S1316, link information for the shared file requested by the user may be created by retrieving the same from a shared storage base at step S1318.


Also, when it is determined at step S1308 that the user is not the first user, link information for the shared file requested by the user may be created by retrieving the same from the shared storage base at step S1318 after directly moving to the lower directory of the user directory at step S1316.


Also, when it is determined at step S1306 that the user is a provider to provide file sharing, a file is uploaded by accessing the shared storage at step S1320, and a link to the shared file may be created at step S1324 after moving to the shared storage base at step S1322.
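
The flow above may be summarized in the following hypothetical Python sketch; the directory layout follows FIG. 14 (/sharedData/...), while the function name, the symlink-based link creation, and the omitted mount step (S1314) are illustrative assumptions.

    import os

    SHARED_BASE = "/sharedData/base"  # links to all files in shared storage (FIG. 14)

    def handle_share_request(user: str, is_provider: bool, files: list[str]) -> None:
        """Sketch of the sharing flow of FIG. 16 (steps S1306 to S1324)."""
        if is_provider:
            # Provider path (S1320 to S1324): publish links in the shared storage base.
            for f in files:
                link = os.path.basename(f) + "-Link"
                os.symlink(f, os.path.join(SHARED_BASE, link))
            return
        tenant_dir = os.path.join("/sharedData", user)
        lower = os.path.join(tenant_dir, "base")
        if not os.path.isdir(tenant_dir):  # first user (S1308): create directories
            for d in (lower, os.path.join(tenant_dir, "diff"),
                      os.path.join(tenant_dir, "work")):
                os.makedirs(d, exist_ok=True)
            # ... mount the tenant's unified file system here (S1314) ...
        for f in files:  # expose only the files selected by the user (S1316 to S1318)
            link = os.path.basename(f) + "-Link"
            os.symlink(os.path.join(SHARED_BASE, link), os.path.join(lower, link))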



FIG. 17 is a view illustrating an integrated management structure for a virtual machine and a container based on multiple clusters in a distributed cloud system according to an embodiment of the present disclosure.


Referring to FIG. 17, a global manager node 80 may provide a function to connect existing clusters for integrated management of a virtual machine and a container between the multiple clusters.


The existing cluster may include the controller node 90 and the multiple single-node integrated management apparatuses 100 described in FIG. 9.


To this end, each of the clusters may add a network connectivity function to its existing single-cluster integrated management function.


The global manager node 80 may include an upper-level extended API server, global data storage, and a global scheduler for the integrated management.


First, the controller node 90 of each of the multiple clusters may provide a high-speed network gateway function for connection over a network and an underlying routing agent function for recognition in the cluster. The gateway and the router are management functions on the cluster. Here, a network broker may be deployed in a global manager, and the gateway and the router may be deployed in each of the clusters through the global scheduler.


The high-speed network gateway is a network connection scheme for connecting and operating the multiple clusters at high speeds, and the connection may be established using tunneling between the two networks of the controller nodes 90.


Tunneling may ensure reliable data transmission by encapsulating a payload in a tunneling section and utilizing a specific protocol. Tunneling may be applied at layers L7, L3, and L2 of the seven-layer OSI model. The lower the tunneling layer, the more protocols used at upper layers may be used without change, and the faster the performance that may be provided. In this system, two clusters may be connected using L3 tunneling. Also, the protocols used for tunneling often have low processing speeds compared to other protocols. In order to overcome this, the system may establish the connection to the tunneling network by utilizing a user-level network driver (the Data Plane Development Kit (DPDK)) for kernel bypass. Also, the interface between a master node and a worker node may be connected to a tunneling interface through a bridge, and may be connected to a network configured with an existing overlay network.
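
As a simplified illustration of an L3 tunnel between two gateway nodes (without the DPDK kernel-bypass acceleration described above), standard iproute2 commands can establish an IP-in-IP tunnel; the device name gw0 and the addresses are placeholders.

    import subprocess

    def connect_cluster_gateways(local_ip: str, remote_ip: str,
                                 tunnel_addr: str) -> None:
        """Create an IP-in-IP (L3) tunnel to a peer cluster gateway."""
        subprocess.run(["ip", "tunnel", "add", "gw0", "mode", "ipip",
                        "local", local_ip, "remote", remote_ip], check=True)
        subprocess.run(["ip", "link", "set", "gw0", "up"], check=True)
        subprocess.run(["ip", "addr", "add", tunnel_addr, "dev", "gw0"], check=True)

    # e.g., connect_cluster_gateways("203.0.113.1", "203.0.113.2", "10.100.0.1/30")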


The network gateway may perform a multi-cluster tunneling connection function of layer L3.


A global (data) repository management function is a storage function that creates high-speed shared storage by utilizing a network-based storage system with a memory-based repository and shares data by connecting the high-speed shared storage to a local shared cache; storage in the master node may be used as the network-based shared storage.


The routing agent may be executed in all nodes, may configure paths using endpoint resources synchronized with other clusters, and may enable connection between all of the clusters. Here, iptables rules may be set. The routing agent may hold the routing table of the gateway engine in order to connect to and communicate with the gateway engine.
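A minimal sketch of the routing agent's route programming follows. The endpoint CIDRs, the gateway address, and the accept-all forwarding rule are assumptions rather than the disclosed rule set.

```python
import subprocess

GATEWAY_IP = "10.100.0.1"  # assumed tunnel-side address of the local gateway engine

def sync_routes(remote_endpoints):
    """Program a route and a forwarding rule for each endpoint CIDR
    synchronized from the other clusters (a sketch of the routing agent)."""
    for cidr in remote_endpoints:
        # send traffic for the remote cluster via the local gateway
        subprocess.run(["ip", "route", "replace", cidr, "via", GATEWAY_IP],
                       check=True)
        # allow forwarding toward the remote cluster
        subprocess.run(["iptables", "-A", "FORWARD", "-d", cidr, "-j", "ACCEPT"],
                       check=True)

# assumed endpoint CIDRs learned from the other clusters
sync_routes(["10.245.0.0/16", "10.246.0.0/16"])
```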


The global manager node 80 may adopt a global scheduler scheme (a global integration manager, a controller, and a node agent).


The global manager node 80 may obtain cluster information by accessing the local orchestrator in the master of each cluster for global management, or may transfer commands for creating a virtual machine and a container to the VM-container integration manager of a corresponding node.


The global manager node 80 may further include components related to a complex orchestrator.


The global manager node 80 may further include a global orchestration REST API through which a user interface or a command tool requests allocation of a virtual machine and a container.


The global manager node 80 may further include a global orchestration handler, which is a component for handling the global orchestration REST API.


The global manager node 80 may further include a request queue manager, which is a component for receiving a request to allocate a VM and a container from the global orchestration handler and storing and managing the data.


The global orchestration controller of the global manager node 80 may be a component that pulls orchestration request data from a request queue and creates and executes a global orchestration task thread.


The global manager node 80 may further include a global orchestration task thread that converts a scheduler task into a message format to be transferred to the global scheduler agent of a corresponding master node and stores the same in a task message queue.


The global manager node 80 may further include a cluster metadata repository for storing cluster-related metadata.


The global manager node 80 may further include a task message queue, which is a repository for storing orchestration task messages between a command executor and a cluster.


The global manager node 80 may further include a global orchestration agent, which is a component that receives an orchestration task message corresponding thereto from the task message queue of the master node of the cluster and calls a REST API.


The global manager node 80 may further include a cloud scheduler, which is a component that detects undeployed virtual machines and containers and selects the worker node on which to execute them.
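The interplay of the request queue manager, the global orchestration controller, the task thread, and the task message queue described above can be sketched as a small producer-consumer pipeline. This is a minimal single-process sketch; the field names, queue contents, and threading layout are assumptions, and a real deployment would use networked queues between the global manager and the master node of each cluster.

```python
import queue
import threading

request_queue: "queue.Queue[dict]" = queue.Queue()        # request queue manager
task_message_queue: "queue.Queue[dict]" = queue.Queue()   # task message queue

def global_orchestration_handler(request: dict) -> None:
    """Receive an allocation request from the REST API and store it."""
    request_queue.put(request)

def orchestration_task(request: dict) -> None:
    """Convert the scheduler task into a message format for the agent
    of the corresponding master node and store it in the message queue."""
    message = {"cluster": request["cluster"],
               "kind": request["kind"],          # "vm" or "container"
               "spec": request["spec"]}
    task_message_queue.put(message)

def global_orchestration_controller() -> None:
    """Pull orchestration request data and run a global orchestration
    task thread for each request."""
    while True:
        request = request_queue.get()
        threading.Thread(target=orchestration_task, args=(request,)).start()

# usage: the REST layer would call the handler with a request like this
threading.Thread(target=global_orchestration_controller, daemon=True).start()
global_orchestration_handler(
    {"cluster": "edge-1", "kind": "vm", "spec": {"cpu": 2, "memory": "4Gi"}})
```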


The distributed cloud system and the apparatus for integrated management of virtualization of computer resources according to an embodiment of the present disclosure may provide services by integrating a virtual machine and a container in a cloud-computing environment.


The distributed cloud system and the apparatus for integrated management of virtualization of computer resources according to an embodiment of the present disclosure may perform management such that a virtual machine and a container can be used in an integrated manner.


The distributed cloud system and the apparatus for integrated management of virtualization of computer resources according to an embodiment of the present disclosure may isolate various applications or services. An integrated environment for virtual machines and containers may improve security and stability by isolating various applications or services from each other.


The distributed cloud system and the apparatus for integrated management of virtualization of computer resources according to an embodiment of the present disclosure may conserve resources. The integrated environment for virtual machines and containers may conserve resources by sharing the same underlying hardware.


The distributed cloud system and the apparatus for integrated management of virtualization of computer resources according to an embodiment of the present disclosure may simplify management. The integrated environment for virtual machines and containers may simplify management by providing a consistent method of deploying and managing applications.


The distributed cloud system and the apparatus for integrated management of virtualization of computer resources according to an embodiment of the present disclosure may provide easy adoption of legacy environments. The integrated environment for virtual machines and containers may facilitate adoption of legacy virtual machines or containers.


The distributed cloud system and the apparatus for integrated management of virtualization of computer resources according to an embodiment of the present disclosure may provide a high-performance architecture for efficient collaboration between clusters.


The distributed cloud system and the apparatus for integrated management of virtualization of computer resources according to an embodiment of the present disclosure may configure a high-performance container using a memory-based storage device to improve container performance and a global cache for data linkage between containers.


The distributed cloud system and the apparatus for integrated management of virtualization of computer resources according to an embodiment of the present disclosure may provide a high-speed network connection between multiple clusters.


The distributed cloud system and the apparatus for integrated management of virtualization of computer resources according to an embodiment of the present disclosure may construct a tunneling-based high-speed network for collaborative services between clusters.


The distributed cloud system and the apparatus for integrated management of virtualization of computer resources according to an embodiment of the present disclosure may provide optimal management technology for clusters for integrated management of virtual machines and containers over interconnected networks.



FIG. 18 is a flowchart illustrating a method for integrated management of virtualization of computer resources according to an embodiment of the present disclosure.


Referring to FIG. 18, in the method for integrated management of virtualization of computer resources according to an embodiment of the present disclosure, first, a request of a CSC may be received at step S210.


That is, at step S210, a request (command) may be received from a user (the interface of the CSC).


Here, at step S210, the computing node that uses a computing resource may receive the request of the user.


In the method for integrated management of virtualization of computer resources according to an embodiment of the present disclosure, the request of the CSC may be classified at step S220.


That is, at step S220, the request (command) of the user may be transmitted to any one of a virtual machine management handler 120 and a container management handler 130 by classifying the request depending on whether it corresponds to VM management or container management.
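A minimal sketch of this classification step follows; the request field `kind` and the accepted values are assumptions, not the disclosed request format.

```python
def classify_request(request: dict) -> str:
    """Route a user request to the virtual machine management handler 120
    or the container management handler 130 (a sketch of step S220)."""
    kind = request.get("kind")
    if kind in ("VirtualMachine", "VirtualMachineInstance"):
        return "vm_handler"         # handler 120: VM management
    if kind in ("Pod", "Container"):
        return "container_handler"  # handler 130: container management
    raise ValueError(f"unknown request kind: {kind}")
```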


Here, at step S220, the received request of the user may be classified depending on the type of the virtualization machine of the computing resource.


Here, at step S220, the desired state of a VMI (defined in the VMI specification) may be continuously compared with the actual state of the hypervisor in order to reconcile the desired state and the actual state.


Here, at step S220, when there is a discrepancy between the desired state and the actual state, a necessary measure may be taken to resolve the discrepancy.


Here, at step S220, a task such as creating, starting, stopping, updating, or deleting a VM may be performed based on a predefined VMI configuration.
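Steps of this kind amount to a reconciliation loop. The sketch below assumes a hypothetical `hypervisor` interface with `state`, `start`, `stop`, and `delete` operations and a five-second resync period; none of these names come from the disclosure.

```python
import time

def reconcile(vmi_spec: dict, hypervisor) -> None:
    """Continuously compare the VMI's desired state with the hypervisor's
    actual state and take a measure when they diverge (a sketch of S220)."""
    while True:
        desired = vmi_spec["desired_state"]          # e.g., "running"
        actual = hypervisor.state(vmi_spec["name"])  # e.g., "stopped"
        if desired != actual:
            if desired == "running":
                hypervisor.start(vmi_spec["name"], vmi_spec["config"])
            elif desired == "stopped":
                hypervisor.stop(vmi_spec["name"])
            elif desired == "absent":
                hypervisor.delete(vmi_spec["name"])
        time.sleep(5)  # assumed resync period
```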


Here, at step S220, a VM task is not performed in isolation; it may be performed in collaboration with the components for integrated management of a virtual machine and a container, such as the handlers 120 and 130 and the executor 140.


Here, at step S220, in order to start a new VM, a guideline for managing the actual VM in the hypervisor may be sent to the handlers 120 and 130 while coordinating with a launcher.


Here, at step S220, the connection between a virtual machine instance (VMI) and a Kubernetes pod may be facilitated through pod connection processing. Accordingly, the pod may smoothly integrate virtualized workloads by interacting with a specific VM.


Here, at step S220, a VM-container integration controller 110 may receive the request of the CSC and transmit the same to the virtual machine and container executor 140.


Here, at step S220, the initial specifications of a virtual machine and a container may be received, and a signal to start the virtual machine or the container may be sent to the corresponding execution program.


Here, at step S220, the lifecycle of the virtual machine and container and communication thereof with the host OS, such as network traffic forwarding, may be managed.


Here, at step S220, Virtual Machine Instances (VMIs) may be managed in the integrated management of virtual machines and containers.


Here, at step S220, VMI lifecycle management may be performed.


That is, at step S220, a container-based virtualization machine and a hybrid virtualization machine may be managed in an integrated manner.


The container-based virtualization machine may virtualize and provide an application that is containerized by being installed in a first virtualization machine that virtualizes the computing resource.


The hybrid virtualization machine may virtualize and provide an application that is containerized by being installed in a second virtualization machine that virtualizes the computing resource and/or virtualize and provide an application containerized on the computing resource.


Here, at step S220, the request of the user may be received and provided to a first interface of the container-based virtualization machine and a second interface of the hybrid virtualization machine, and virtualization of the computing node may be managed in an integrated manner.


Here, at step S220, the received request of the user may be classified depending on the type of the virtualization machine, and the request of the user, which is classified depending on the type of the virtualization machine, may be provided to the first interface or the second interface.


Here, the container-based virtualization machine and the hybrid virtualization machine may be installed in an OS kernel in the computing node.


Here, at step S220, an image management function for the container-based virtualization machine or the hybrid virtualization machine may be provided depending on the image management function installed in the OS kernel in the computing node.


Here, at step S220, the container-based virtualization machine or the hybrid virtualization machine may be managed based on a library on an OS in the computing node or through a software daemon in the computing node.


Also, in the method for integrated management of virtualization of computer resources according to an embodiment of the present disclosure, the request of the CSC may be executed at step S230.


That is, at step S230, when a VMI is created through an API, a VMI specification may be received, and a VM may be executed by sending a signal to an execution program, which is another component.


Here, at step S230, interaction with the underlying hypervisor is performed using a library, and the VM may be configured and started based on detailed information about the VMI.


Here, at step S230, the state of the VM that is being executed may be continuously monitored.


Here, at step S230, a signal (e.g., a crash) may be received from the VM, and the VMI state may be updated in the API based thereon.


Here, at step S230, the VM may be safely stopped or terminated when there is an instruction.
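As an illustrative sketch of step S230 (configuring, starting, monitoring, and safely stopping a VM through a hypervisor library), the following uses the Python binding of libvirt against a local KVM hypervisor. Treating the library as libvirt is an assumption; the disclosure refers only to "a library" and an underlying hypervisor such as KVM. The domain XML is deliberately minimal, and a real VMI would add disks, network interfaces, and a boot device.

```python
import libvirt  # Python binding of the libvirt virtualization library

# Minimal domain XML; the name, memory, and vCPU values are assumptions.
DOMAIN_XML = """
<domain type='kvm'>
  <name>vmi-demo</name>
  <memory unit='MiB'>1024</memory>
  <vcpu>1</vcpu>
  <os><type arch='x86_64'>hvm</type></os>
</domain>
"""

conn = libvirt.open("qemu:///system")   # connect to the local KVM hypervisor
dom = conn.createXML(DOMAIN_XML, 0)     # configure and start the VM

state, *_ = dom.info()                  # state that can be monitored continuously
if state == libvirt.VIR_DOMAIN_RUNNING:
    print("VMI is running; the state can be reported back to the API")

dom.shutdown()                          # safely stop the VM on instruction
conn.close()
```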


Here, at step S230, the controller and the VM container may be bridged.


Here, at step S230, a communication bridge between the cluster and a guest VM may be established.


Here, at step S230, functions, such as real-time migration, console access, and network traffic transfer between the VM and the container, may be used.


Here, at step S230, when the desired state of the VMI is changed through the API (e.g., a change in resource allocation), an agent update may be performed in which the change is converted into an adjustment of the VM itself and in which whether the adjustment matches the desired configuration is checked.


Here, at step S230, an autonomous heartbeat mechanism is implemented, whereby unresponsive nodes in the cluster may be detected.
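A minimal sketch of such a heartbeat mechanism follows, with an assumed 15-second timeout and in-memory bookkeeping; a real agent would transmit the heartbeat over the network.

```python
import time

HEARTBEAT_TIMEOUT = 15.0  # assumed seconds without a beat before a node is unresponsive

last_beat: dict[str, float] = {}

def record_heartbeat(node: str) -> None:
    """Called whenever a node agent sends its periodic heartbeat."""
    last_beat[node] = time.monotonic()

def unresponsive_nodes() -> list[str]:
    """Detect cluster nodes whose heartbeat has not arrived in time."""
    now = time.monotonic()
    return [n for n, t in last_beat.items() if now - t > HEARTBEAT_TIMEOUT]

record_heartbeat("worker-1")
time.sleep(0.1)
print(unresponsive_nodes())  # [] while worker-1 keeps beating
```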


Here, at step S230, a security function may be designed as a single authorized component in the integrated management of virtual machines and containers.


Here, at step S230, sensitive tasks such as VM creation and configuration that require root access may be processed.


Here, at step S230, using the software installed in the OS kernel, the virtual machine and the container may be executed depending on the request of the CSC.


Here, at step S230, various types of interface methods to be executed to start the virtual machine and the container may be supported.


Here, at step S230, in order to provide cgroups and namespaces, a guideline may be sent to a controller agent through an API when the Virtual Machine Instance (VMI) is created. According to the guideline, an instance may be created specifically for the corresponding VMI at step S230, and in the instance, the underlying container may execute an execution program.


Here, at step S230, the control groups (cgroups) and namespaces required for a VM process may be provided as the primary role of a launch manager. These are the primary kernel mechanisms for isolating and controlling resources (CPU, memory, etc.) and network visibility for individual VMs.


Here, at step S230, the initial environment of the VM (cgroup, namespace, and configuration) may be set.
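The cgroup and namespace preparation described above can be sketched against the cgroup v2 unified hierarchy. The group name, resource limits, and the use of a new network namespace are assumptions; a launch manager would derive these values from the VMI specification.

```python
import os
from pathlib import Path

CGROUP_ROOT = Path("/sys/fs/cgroup")   # cgroup v2 unified hierarchy (assumed mounted)

def prepare_vm_cgroup(vmi_name: str, pid: int) -> None:
    """Create a control group for one VM process and cap its resources,
    as a launch manager would before starting the VM (requires root)."""
    cg = CGROUP_ROOT / f"vmi-{vmi_name}"
    cg.mkdir(exist_ok=True)
    (cg / "memory.max").write_text("2G")          # assumed memory limit for the VM
    (cg / "cpu.max").write_text("200000 100000")  # quota/period: two CPUs' worth
    (cg / "cgroup.procs").write_text(str(pid))    # move the VM process into the group

# A new network namespace isolates the VM's network visibility
# (os.unshare is available from Python 3.12).
if hasattr(os, "unshare"):
    os.unshare(os.CLONE_NEWNET)
```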


Here, step S230 may be performed in the VM instance itself. On the other hand, the handlers 120 and 130 may interact directly with the hypervisor, and the controller 110 may supervise VMI management at a higher level in the API.


Here, at step S230, based on interaction with a library (e.g., libvirt), the execution program may manage VM creation and configuration in the underlying hypervisor (e.g., KVM) through the library.


Here, at step S230, the resources and configuration of the VM may be defined using the VMI specification.


Here, at step S230, in-memory-based storage and an in-memory-based container structure may be used in order to share data and storage between the container and the VM.


Among the usage models of virtual machines and containers, models configured in a single node may include a host-based virtualization model and a container-based virtualization model.


In the case of a hybrid virtualization model, a virtual machine is considered a node and may be connected to another virtual machine having a container.


Here, at step S230, an image manager for storing and managing data for image management is present for both the container and the virtual machine, and the performance of the virtual machine and container may be improved using various types of high-performance storage (memory, NVMe, SSD, federation storage, etc.).


Here, at step S230, an in-memory-based container storage system may be used as a repository for configuring a virtual machine image or a container file system (an additional function for image management).
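A minimal sketch of backing such an in-memory image repository with tmpfs follows; the mount point and size are assumptions, and root privileges are required.

```python
import subprocess
from pathlib import Path

IMAGE_STORE = Path("/var/lib/vmc/images")  # assumed repository path
SIZE = "8G"                                # assumed capacity of the memory-based store

IMAGE_STORE.mkdir(parents=True, exist_ok=True)
# Back the image repository with memory (tmpfs) so that VM images and
# container file systems are served at memory speed.
subprocess.run(["mount", "-t", "tmpfs", "-o", f"size={SIZE}", "tmpfs",
                str(IMAGE_STORE)], check=True)
```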



FIG. 19 is a block diagram illustrating a computer system according to an embodiment of the present disclosure.


Referring to FIG. 19, the apparatus for integrated management of virtualization of computer resources according to an embodiment of the present disclosure may be implemented in a computer system 1100 including a computer-readable recording medium. As illustrated in FIG. 19, the computer system 1100 may include one or more processors 1110, memory 1130, a user-interface input device 1140, a user-interface output device 1150, and storage 1160, which communicate with each other via a bus 1120. Also, the computer system 1100 may further include a network interface 1170 connected to a network 1180. The processor 1110 may be a central processing unit or a semiconductor device for executing processing instructions stored in the memory 1130 or the storage 1160. The memory 1130 and the storage 1160 may be any of various types of volatile or nonvolatile storage media. For example, the memory may include ROM 1131 or RAM 1132.


The apparatus for integrated management of virtualization of computer resources according to an embodiment of the present disclosure includes one or more processors 1110 and memory 1130 for storing at least one program executed by the one or more processors 1110. The processor may receive a user request from a user, classify the received user request depending on virtualization models for the one or more nodes, provide the classified user request to at least one interface of the virtualization models, and perform an integration manager of the virtualization models.


Here, the one or more computing nodes include a first computing node providing a first virtualization model, wherein the first virtualization model provides a container running on a kernel of an operating system (OS) of the first computing node and a virtual machine running with a hypervisor on the kernel, and wherein the integration manager performs an integrated management of the virtualization models.


Here, the one or more computing nodes include a second computing node and a third computing node, the second and third computing nodes providing a second virtualization model, wherein the second computing node provides a container running on a kernel of an operating system (OS) of the second computing node, wherein the third computing node provides a virtual machine running with a hypervisor on a kernel of the third computing node, and wherein the integration manager performs an integrated management of the virtualization models.


Here, the one or more computing nodes include a fourth computing node providing a third virtualization model, wherein the third virtualization model provides a container within a virtual machine, the virtual machine running with a hypervisor on a kernel of an operating system (OS) of the fourth computing node, and wherein the integration manager performs an integrated management of the virtualization models.


Also, in order to accomplish the above objects, a storage medium for storing a program for integrated management of virtualization of computer resources according to an embodiment of the present disclosure stores a computer-executable program for integrated management of virtualization of computer resources. The computer-executable program executes instructions including receiving, by a computing node that uses a computing resource, a user request; classifying the received user request depending on the type of the virtualization machine of the computing resource; and providing the classified user request to a first interface of a container-based virtualization machine and a second interface of a hybrid virtualization machine and performing integrated management of virtualization of the computing node, the computing node including the container-based virtualization machine, which virtualizes and provides an application that is containerized by being installed in a first virtualization machine that virtualizes the computing resource, and the hybrid virtualization machine, which virtualizes and provides an application that is containerized by being installed in a second virtualization machine that virtualizes the computing resource and/or virtualizes and provides an application containerized on the computing resource.


The present disclosure may provide an integrated management method and structure for integrated management of containers and virtual machines and single-node and multi-node scale-up in a distributed cloud.


Also, the present disclosure may improve security and stability by isolating various applications or services from each other.


Also, the present disclosure may conserve resources by sharing the same underlying hardware.


Also, the present disclosure may simplify management by providing a consistent method of deploying and managing applications.


Also, the present disclosure may facilitate adoption of legacy virtual machines or containers.


Also, the present disclosure may provide a high-performance architecture for efficient collaboration between clusters.


Also, the present disclosure may improve container efficiency through high-performance containers and data linkage between containers.


Also, the present disclosure may configure a high-speed network for collaborative services between clusters.


Also, the present disclosure may provide optimal management technology for clusters for integrated management of virtual machines and containers over interconnected networks.


As described above, the apparatus, method, and storage medium for integrated management of virtualization of computer resources according to the present disclosure are not limitedly applied to the configurations and operations of the above-described embodiments, but all or some of the embodiments may be selectively combined and configured, so the embodiments may be modified in various ways.

Claims
  • 1. An apparatus for integrated management of virtualization of computer resources, comprising: a memory configured to store data; and one or more computing nodes to use computing resources including the memory and a processor processing the data, wherein the processor is configured to: receive a user request from a user; classify the received user request depending on virtualization models for the one or more nodes; provide the classified user request to at least one interface of the virtualization models; and perform an integration manager of the virtualization models.
  • 2. The apparatus of claim 1, wherein the one or more computing nodes include a first computing node providing a first virtualization model, wherein the first virtualization model provides a container running on a kernel of an operating system (OS) of the first computing node and a virtual machine running with a hypervisor on the kernel, and wherein the integration manager performs an integrated management of the virtualization models.
  • 3. The apparatus of claim 1, wherein the one or more computing nodes include a second computing node and a third computing node, the second and third computing nodes providing a second virtualization model, wherein the second computing node provides a container running on a kernel of an operating system (OS) of the second computing node, wherein the third computing node provides a virtual machine running with a hypervisor on a kernel of the third computing node, and wherein the integration manager performs an integrated management of the virtualization models.
  • 4. The apparatus of claim 1, wherein the one or more computing nodes include a fourth computing node providing a third virtualization model, wherein the third virtualization model provides a container within a virtual machine, the virtual machine running with a hypervisor on a kernel of an operating system (OS) of the fourth computing node, and wherein the integration manager performs an integrated management of the virtualization models.
  • 5. A method for integrated management of virtualization models of computer resources, comprising: receiving a user request from a user; classifying the received user request depending on virtualization models for one or more nodes in the computer resources; providing the classified user request to at least one interface of the virtualization models; and performing an integration manager of the virtualization models.
  • 6. The method of claim 5, wherein the one or more computing nodes include a first computing node providing a first virtualization model, wherein the first virtualization model provides a container running on a kernel of an operating system (OS) of the first computing node and a virtual machine running with a hypervisor on the kernel, and wherein the integration manager performs an integrated management of the virtualization models.
  • 7. The method of claim 5, wherein the one or more computing nodes include a second computing node and a third computing node, the second and third computing nodes providing a second virtualization model, wherein the second computing node provides a container running on a kernel of an operating system (OS) of the second computing node, wherein the third computing node provides a virtual machine running with a hypervisor on a kernel of the third computing node, and wherein the integration manager performs an integrated management of the virtualization models.
  • 8. The method of claim 5, wherein the one or more computing nodes include a fourth computing node providing a third virtualization model, wherein the third virtualization model provides a container within a virtual machine, the virtual machine running with a hypervisor on a kernel of an operating system (OS) of the fourth computing node, and wherein the integration manager performs an integrated management of the virtualization models.
  • 9. A non-transitory storage medium for storing a computer-executable program for integrated management of virtualization of computer resources, wherein the computer-executable program executes instructions including: receiving a user request from a user; classifying the received user request depending on a type of virtualization models; providing the classified user request to at least one interface of the virtualization models; and performing an integration manager of the virtualization models.
Priority Claims (3)
Number Date Country Kind
10-2023-0133976 Oct 2023 KR national
10-2024-0021432 Feb 2024 KR national
10-2024-0104565 Aug 2024 KR national