Modern enterprise networks generally require several network services to be deployed at the edge of the network. These services have typically been implemented as separate, dedicated network appliances, each independently performing its own function. Examples of such network appliances are shown in
In modern networks that must serve dynamically changing requirements, such discrete hardwiring of network appliances is unacceptable because it does not allow the enterprise to adapt rapidly to changing network conditions.
Recently, network appliances have been virtualized and implemented as independent virtual machines on one or more general purpose servers, as depicted in
While virtualized servers may remove physical restrictions, they remain difficult to deploy and manage on a dynamic basis. Virtualized servers require complex deployment and management systems, typically provided by the vendor, to facilitate hosting of virtualized services. Resource management, scheduling of the services, and operational maintenance of such systems, all through proprietary management systems, is very complex, thereby diminishing the advantages gained by virtualization. Moreover, even virtualized servers cannot selectively specify which set of services is to be applied to a specific network flow.
Currently deployed systems support only forwarding all traffic from a network port to the service (i.e., without classification), and all traffic from the service to a group of network ports. This does not give the operator enough control to segment access to critical resources, which leaves the enterprise customer without viable options for building an enterprise network that can dynamically adapt to its consumers' requirements.
Another challenge with virtualization systems arises from the preference to offer services through a standards-based management ecosystem: standards such as ETSI/MANO are too complex to implement efficiently (for the vendors) and to manage (for the customers).
A modern enterprise requires applying a different set of network services to flows depending on corporate policies. Such flexibility does not exist in prior-art appliances or virtual servers. The described embodiments support a simple ecosystem that allows dynamic insertion of network services that can be managed efficiently, without requiring a complex set of management paradigms. The described embodiments operate to create and use network resources when required, and to release those resources when the work is done.
The described embodiments enable enterprise customers to insert network services into their forwarding network path in a very simple elegant fashion as desired, and to use policy to give full control to the network operator to guide their traffic through the desired set of network services (referred to herein as “micro-segmentation”).
The described embodiments further provide a fully programmable system through a declarative programming, using a standards-based approach and a standard mechanism to collect services status and statistics.
The described embodiments may further provide an ability to add different encapsulation between each service on a single service-list. The encapsulation may be the same or different between services, depending on the needs of the services and how the services are defined. Example encapsulations include Network Service Header (NSH), VLAN, and DiffServ.
The described embodiments may further provide an ability to apply a policy whose services may span across the network through multiple network appliances.
The described embodiments may further provide an ability to insert new services into an existing policy dynamically.
The described embodiments may further provide an ability to remove existing services from an existing policy and continue to use the policy.
The described embodiments may further provide an ability to reuse the same policy for multiple flows (Multiple selectors to match flows).
The described embodiments may further provide an ability to reclassify the packet in the midst of the processing, as shown in the drawing depicting two policies.
The described embodiments may further provide an ability to share a network service across multiple policies. The same service can be part of many policy definitions.
The described embodiments may further provide an ability to use Kubernetes container management to create multiple instances of the same service and to specify which policies can use which instance of the service.
The described embodiments may further provide an ability to dynamically change a resource requirement for each network service (e.g., through Kubernetes and containers). This can be done dynamically, without bringing the service down and transparently to the user. The described embodiments use Kubernetes to create a new instance of the service with the new resource requirement and then switch to it in a hitless fashion using native Kubernetes tools.
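The hitless resource change described above can be sketched as follows. This is a minimal illustration, not the embodiments' actual implementation: the manifest shape loosely follows the Kubernetes container `resources.requests` convention, and the service name and field values are hypothetical.

```python
import copy

def with_new_resources(manifest, cpu, memory):
    """Return a copy of a container manifest with updated resource requests.

    The original manifest is left untouched, mirroring the idea that the
    old service instance keeps running until the new one is switched in.
    """
    updated = copy.deepcopy(manifest)
    updated["resources"]["requests"] = {"cpu": cpu, "memory": memory}
    return updated

# Hypothetical service container manifest (illustrative values only).
old = {"name": "dpi-service", "image": "dpi:1.0",
       "resources": {"requests": {"cpu": "500m", "memory": "256Mi"}}}

# Declare a new instance with larger resource requests.
new = with_new_resources(old, cpu="1", memory="512Mi")

# The old instance is unchanged, so traffic can be switched over
# hitlessly once the new instance is ready.
assert old["resources"]["requests"]["cpu"] == "500m"
```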
In one aspect, the invention may be a method of managing and deploying network resources, comprising employing a container management tool in a network that implements resources through one or more containers, and engaging a policy extension with the container management tool, the policy extension configured to define and enforce an intent of a user in a forwarding plane of the network.
An embodiment may further comprise using a declarative programming language to convey the intent of the user to the policy extension. The container management tool may be Kubernetes. The policy extension may define policy as a Custom Resource Definition (CRD).
The container may comprise a microservice that is packaged along with associated dependencies and configurations.
The method may further comprise defining, by the user, (i) at least one network resource, (ii) at least one service, (iii) at least one policy, and delivering network data traffic to the at least one service according to the at least one policy. The at least one network resource may be defined on Open vSwitch. The method may further comprise programming match selectors as match rules in the forwarding plane of the network, the match selectors being programmed according to the intent of the user. The method may further comprise forwarding a data packet at an applied network port through a service function chain when the data packet matches the match selectors. The at least one policy may cause the forwarding plane to configure an action list as a sequential service function chain in a data path.
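The matching and forwarding steps above can be sketched in a few lines. This is a hypothetical simplification of the forwarding-plane behavior, assuming an exact-match selector model; the field names, policy structure, and service names are illustrative and not part of any defined API.

```python
def matches(packet, selectors):
    """True if every selector field equals the corresponding packet field."""
    return all(packet.get(k) == v for k, v in selectors.items())

def forward(packet, policy):
    """Return the ordered service function chain to apply, or None if no match."""
    if matches(packet, policy["selectors"]):
        return policy["action_list"]   # action list as a sequential chain
    return None

# A policy whose match selectors are programmed as match rules (illustrative).
policy = {"selectors": {"ip_proto": 6, "dst_port": 443},
          "action_list": ["firewall", "tls-inspect"]}

# A packet arriving on the applied network port that matches the selectors
# is forwarded through the service function chain.
pkt = {"src_ip": "10.0.0.5", "ip_proto": 6, "dst_port": 443}
assert forward(pkt, policy) == ["firewall", "tls-inspect"]
```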
In another aspect, the invention may be a system for managing and deploying network resources in a network that implements resources through one or more containers, comprising a processor and a memory with computer code instructions stored thereon. The memory may be operatively coupled to the processor such that, when executed by the processor, the computer code instructions cause the system to engage a policy extension with a container management tool. The policy extension may be configured to define and enforce an intent of a user in a forwarding plane of the network. The computer code instructions may further cause the system to use a declarative programming language to convey the intent of the user to the policy extension.
The container management tool may be Kubernetes. The policy extension may define policy as a Custom Resource Definition (CRD). The container may comprise a microservice that is packaged along with associated dependencies and configurations. The computer code instructions, when executed by the processor, may further cause the system to define at least one network resource based on input from the user, define, by the user, at least one service, define, by the user, at least one policy, and deliver network data traffic to the at least one service according to the at least one policy.
The at least one network resource may be defined on Open vSwitch. The system may further comprise match selectors programmed as match rules in the forwarding plane of the network, the match selectors being programmed according to the intent of the user. The system may further comprise a data packet forwarded at an applied network port through a service function chain when the data packet matches the match selectors. The at least one policy may cause the forwarding plane to configure an action list as a sequential service function chain in a data path.
In another aspect, the invention may comprise a non-transitory computer-readable medium with computer code instructions stored thereon. The computer code instructions, when executed by a processor, may cause a system to engage a policy extension with a container management tool. The policy extension may be configured to define and enforce an intent of a user in a forwarding plane of the network. The computer code instructions may further cause the system to use a declarative programming language to convey the intent of the user to the policy extension.
The patent or application file contains at least one drawing executed in color. Copies of this patent or patent application publication with color drawing(s) will be provided by the Office upon request and payment of the necessary fee.
The foregoing will be apparent from the following more particular description of example embodiments, as illustrated in the accompanying drawings in which like reference characters refer to the same parts throughout the different views. The drawings are not necessarily to scale, emphasis instead being placed upon illustrating embodiments.
A description of example embodiments follows.
Present-day applications for hosting services are most often implemented using containers, rather than virtual machines. Containers are microservices packaged along with their dependencies and configurations. Containers provide an efficient framework for hosting microservices. This allows for implementation of each networking service as a microservice in a container.
The described embodiments utilize a container management tool for deploying and managing containers. An example embodiment uses a container management tool known as Kubernetes, although other such control-plane management tools may alternatively be used.
Kubernetes (pronounced “koo-ber-net-ees”—also known in the art as “k8s” or “k-eights”) is open-source software for deploying and managing containers at scale. Kubernetes may be used as a control plane for managing containerized virtual network services. Kubernetes eliminates the need for complex management of scheduling and resource management for containerized services.
A modern enterprise requires dynamically and selectively starting and deploying network services on an as-needed basis. Once its work is complete, however, a network service is no longer required. The embodiments described herein leverage Kubernetes to dynamically create and delete network services, thereby saving resources that would otherwise be unavailable for other purposes. The example embodiments use Kubernetes as the framework to facilitate dynamic insertion of containerized services, based on customer needs, using declarative programming. The user specifies the desired services to be instantiated on a collection of network devices across the network using a declarative mechanism (resource definitions) that Kubernetes understands and accepts for management. The resource definitions may include the resource location, and service requirements of the resource such as CPU, memory, networking, and filesystem elements. Kubernetes may also handle service expansions, re-orchestrations, and other such network services.
Ideally, a user (e.g., a network operator) should be able to create a set of network services dynamically. The user would define a set of rules to indicate how data must pass through the set of network services along with other existing services. The user may wish to identify which network service must be applied to a particular set of network flows, and to be able to define the set of services that must be applied, including the sequence of the network services. The user may create a policy via the Kubernetes policy extension framework defined herein. The policy defines a set of attributes that must match the flows. All flows that match these network attributes are forced to go through the sequence of network services so that those services can take appropriate action. In an example case where an enterprise is running traditional Ethernet and IPv4/IPv6 networks, a policy definition can support matching attributes in any of the Ethernet, IPv4, IPv6, TCP, and/or UDP headers. Example fields may include Ethernet Type, VLAN ID, Source IPv4 or IPv6 address, Destination IPv4 or IPv6 address, IPv4 protocol, TCP Source Port, TCP Destination Port, TCP Flags, UDP Source Port, and UDP Destination Port. When the services are no longer needed, the user can immediately decommission all of those services by using native Kubernetes container operations and removing the policy. Doing so takes seconds or minutes instead of days or months, and all the compute resources are freed and returned to a common pool for other allocations if required. The policy definition can apply to a set of network services that can span the network, and need not be restricted to a single device.
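A policy built from the header fields listed above might be expressed declaratively as follows. This is a hypothetical illustration only: the manifest shape mimics a Kubernetes custom resource, but the API group, kind, and field names are assumptions and do not reflect an actual CRD schema.

```python
# Hypothetical policy resource matching IPv4/TCP traffic to port 80.
# The apiVersion, kind, and spec layout are illustrative assumptions.
policy = {
    "apiVersion": "example.io/v1",
    "kind": "NetworkServicePolicy",
    "metadata": {"name": "web-traffic"},
    "spec": {
        # Matching attributes drawn from Ethernet/IPv4/TCP headers.
        "match": {
            "etherType": 0x0800,        # IPv4
            "ipProtocol": 6,            # TCP
            "tcpDestinationPort": 80,
        },
        # Flows matching the attributes above are forced through this
        # sequence of network services.
        "services": ["firewall", "load-balancer"],
    },
}
```

Removing such a resource (and decommissioning its services via native Kubernetes operations) would free the associated compute resources back to the common pool.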
As set forth above, Kubernetes is a control plane management tool that can manage network resources and deployments of those network resources. The embodiments described herein expand the capability of Kubernetes 202 (or other container management tool) by natively implementing an extension to Kubernetes 202, which defines policy as a Custom Resource Definition (CRD). See, for example, the example embodiment shown in
The policy extension 204 of the described embodiments also allows for programmatic stitching of different service function chains based on packet flow behavior. The described embodiments allow services to apply their own rules to the packets, which might change the behavior of the packet processing and thereby allow a different policy to be applied to the modified packet through a different set of services. This rule application is done completely programmatically, without any user intervention.
The policy extension 204 of the described embodiments also allows for different encapsulation through each service function in the chain of network services. These policies provide the ability to match traffic at the lowest granularity and dictate how it must be processed through its dynamically created service function chain. The described embodiments provide for the enterprise operator to direct flow-based traffic through the desired set of service functions.
The Kubernetes policy extension 204 of the described embodiments allows the customer to deploy a system that does not require a complex management orchestration system and is extensively programmable, which allows customers to define security or network policies dynamically based on the current state of their network. Although the example embodiments utilize Kubernetes, it should be understood that other container management tools may alternatively be used.
The described embodiments give the enterprise operator full control to dynamically insert services into the network at any point, to direct traffic appropriately through the services, and to extend the traffic through a sequence of services as desired by the user intent. The described embodiments allow the resulting service function chain to span multiple network nodes, rather than being limited to a single node.
The described embodiments facilitate simplified service insertion by providing full control to the user. The user may define a policy to map traffic to the new service, or insert the new service into an existing service path (action list), which causes the new service to be applied to flows that match the policy. Adding the service to a service function path also triggers creation of the service using native Kubernetes mechanisms (on-demand resource allocation triggers).
The job of the policy is only to direct the traffic to and/or from the service in the service function path. When the action is removed from all policies, the service container can be deactivated, thereby releasing its compute resources (dynamic resource release triggers).
The following illustrates an example embodiment according to the invention. This example embodiment is merely illustrative and is not intended to be limiting.
The user defines network resources, such as network ports, using the Kubernetes API. These resources are normally created on Open vSwitch using Kubernetes native functions. These network resources are then used to apply traffic arriving or departing on these resources to the dynamically created service function chains. The user then defines a service.
The user may define a service via a standards-based Helm chart. A Helm chart defines the characteristics required to deploy a microservice as a container on the target platform. On instantiating the Helm chart, the service specifies the necessary resources, including, for example, building its service ports on Open vSwitch, which is an open-source implementation of a distributed virtual multilayer switch. The service also may allocate other system resources such as CPU, memory, and filesystem space. The service advertises the action(s) associated with the service to the control plane. The user then defines a policy.
The user may define a policy using the Custom Resource Definition provided as an extension to Kubernetes API Server. A policy may consist of the following:
The policy match selectors are programmed as match rules in the forwarding plane, as desired by the user intent of the policy. The data plane programs the action list as a sequential service function chain in the data path. When traffic arrives on the applied network port and matches the match selectors, the data plane forwards the packet through the service function chain. The forwarding through the service function chain may use any of the standard networking encapsulations, e.g., (i) Network Service Header (NSH), (ii) VLAN, or (iii) Differentiated Services Code Point (DSCP) bits in the IP header. Multiple encapsulation types may be used within the same policy. When traffic exits the service function chain, it is either discarded, forwarded to another network, or forwarded to another service function chain. This decision is based on the actions taken by the service functions in the chain or defined by the user intent. The user has full control to automatically (i) start the resources, (ii) allocate compute and network resources natively on demand, (iii) redirect desired traffic flows to the service(s), and (iv) apply the policy on desired network ports or functions.
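The data-path traversal and exit decision described above can be sketched as follows. This is a hypothetical model only: each hop carries its own encapsulation choice (NSH, VLAN, or DSCP, which may be mixed within one chain), and a service may decide to drop the packet. The chain structure and service names are illustrative assumptions.

```python
SUPPORTED_ENCAPS = {"NSH", "VLAN", "DSCP"}

def traverse_chain(packet, chain):
    """Carry a packet hop by hop through a service function chain.

    Returns the (service, encapsulation) path taken and a verdict:
    'discarded' if a service drops the packet, else 'forwarded'.
    """
    path = []
    for hop in chain:
        if hop["encap"] not in SUPPORTED_ENCAPS:
            raise ValueError(f"unsupported encapsulation: {hop['encap']}")
        path.append((hop["service"], hop["encap"]))
        if hop.get("drop"):           # a service function may discard traffic
            return path, "discarded"
    return path, "forwarded"          # exits toward a network or another chain

# A chain mixing encapsulation types between services (illustrative).
chain = [{"service": "firewall", "encap": "NSH"},
         {"service": "ids",      "encap": "VLAN"}]
path, verdict = traverse_chain({"dst_port": 443}, chain)
assert verdict == "forwarded" and len(path) == 2
```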
To summarize, the example embodiments described herein (i) extend the Kubernetes API to support policies, (ii) deliver a programmable and dynamic service function chain, (iii) apply policies at the flow level to redirect traffic into the service functions, (iv) use Kubernetes (or another container management tool) to program a forwarding plane in Open vSwitch, (v) facilitate dynamic service insertion as containers and program the forwarding plane automatically using native Kubernetes functions (instead of complex VM management), (vi) support the ability to chain a set of service function chains, and (vii) support the ability to mix and match different encapsulations for each service in a service function chain.
Packet migration through a network resource may be summarized as follows:
While example embodiments have been particularly shown and described, it will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the scope of the embodiments encompassed by the appended claims.
This application claims the benefit of U.S. Provisional Application No. 63/387,161, filed on Dec. 13, 2022. The entire teachings of the above application are incorporated herein by reference.