A computing network such as a software defined network (SDN) can include resources such as processing and/or memory resources that can be spread across multiple logical components. A SDN can provide a centralized framework in which forwarding of network packets is disassociated from routing of network packets. For example, a control plane can provide centralized management of network packet routing in a SDN, while a data plane separate from the control plane can provide management of forwarding of network packets.
Software defined networks (SDNs), such as those deployed as part of information technology infrastructures, can include physical computing components (e.g., processing resources, network hardware components, and/or computer components, etc.) as well as memory resources that can store instructions executable by the physical computing components and/or network components to facilitate operation of the SDN. As an example, a SDN can operate as a host for collections of virtualized resources that may be spread across one or more logical components. These virtualized resources may form networked relationships with one another as part of operation of the SDN.
In some approaches, the resources (e.g., the processing resources and/or memory resources) and relationships between the physical computing components can be manually created or orchestrated through execution of instructions. Such resources and relationships can be managed through one or more managed services and may be configured distinctly.
A switching sub-system may be utilized to manage networked data flows that arise from the relationships described above. Examples of switching sub-systems that allow for virtualization in a SDN can include VIRTUAL CONNECT® or other virtual network fabrics. Tasks such as discovering network resources, capturing runtime properties of the SDN, and/or facilitating high data rate transfer of information through the SDN can be provided by such switching sub-systems. However, in some approaches, manual configuration of the relationships can result in sub-optimal performance of the SDN due to the static nature of manually configured relationships in a dynamically evolving infrastructure, can be costly and/or time consuming, and/or can be prone to errors introduced during manual configuration processes. Further, in some approaches, SDN scalability may be difficult due to manual configuration of the switching sub-systems and/or relationships. These difficulties can be further exacerbated because SDNs can be characterized by dynamic allocation of resources and/or dynamic reconfiguration of the relationships.
In contrast, examples herein may allow for discovery and/or processing of flows in a SDN or in portions thereof. For example, discovery and/or processing of flows in a switching sub-system that is part of the SDN may be performed in accordance with the present disclosure. In some examples, network parameters and/or infrastructure parameters may be altered or reconfigured based on runtime behaviors of the SDN. In addition, network components such as switches, routers, virtual machines, hubs, processing resources, data stores, etc. may be characterized and/or dynamically assigned or allocated at runtime. Finally, in some examples, by managing flows in the SDN as described herein, the SDN may be monitored and/or managed in a more efficient way as compared to some approaches.
In a switching sub-system, data may ingress and egress at a rate on the order of gigabits per second (Gbps). As a result, a switching sub-system can learn and/or un-learn hundreds of endpoints over short periods of time. In a SDN, endpoints can exist at Layer 2 (L2), Layer 3 (L3), and higher layers of the open systems interconnection (OSI) model. Discovering such endpoints at various layers of the OSI model, recognizing relationships dynamically, and/or establishing contexts between endpoints may be complex, especially in SDNs in which resources and relationships may be created and/or destroyed rapidly in a dynamic manner. Further, because network packets can be stateless, identifying and/or introducing states such as L2 or L3 flows between two or more endpoints can include inspecting multiple packets (streams) and/or object vectors made up of endpoint statistics or telemetry data.
As mentioned above, workloads in a SDN can be moved around (e.g., dynamically allocated, reallocated, or destroyed), which can alter flow composition through the SDN. For example, as resources and/or relationships in a SDN are redefined or moved around, flows can be dynamically altered, rendered obsolete, or otherwise redefined. In some examples, packet priorities can be defined in packets to reduce delays or losses of flows in the SDN. As used herein, a “flow” is an object that characterizes a relationship between two endpoints in a SDN. Non-limiting examples of flows include objects that characterize a relationship between two media access control (MAC) addresses, internet protocol (IP) addresses, secure shell (SSH) ports, file transfer protocol (FTP) transport protocol ports, hypertext transfer protocol (HTTP) transport protocol ports, etc.
A flow can include properties such as a received packet count, a transmitted packet count, a count of jumbo-sized frames, and/or endpoint movement details, among other properties. In some examples, a flow may exist for an amount of time that the flow is used (e.g., flows may be dynamically generated and/or destroyed). Creation (or destruction) of flows can be based on metadata associated with packets that traverse the SDN. In some examples, flows may be monitored to enforce transmission and/or receipt rates (or limits), to ensure that certain classes of traffic are constrained to certain pathways, to ensure that certain classes of traffic are barred from certain pathways, and/or to reallocate flows in the SDN, among other purposes.
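For illustration, a minimal sketch of a flow object with the properties described above follows. The field names, types, and default values are assumptions made for the sake of the sketch rather than a prescribed format.

```python
from dataclasses import dataclass, field
import time

@dataclass
class Flow:
    """Illustrative object characterizing a relationship between two endpoints."""
    src_endpoint: str      # e.g., a source MAC or IP address
    dst_endpoint: str      # e.g., a destination MAC or IP address
    layer: str = "L2"      # OSI layer at which the flow is identified
    rx_packets: int = 0    # received packet count
    tx_packets: int = 0    # transmitted packet count
    jumbo_frames: int = 0  # count of jumbo-sized frames observed
    created_at: float = field(default_factory=time.time)

# A flow exists only while it is in use; creating and discarding the object
# models dynamic generation and destruction of flows.
flow = Flow(src_endpoint="aa:bb:cc:dd:ee:ff", dst_endpoint="11:22:33:44:55:66")
flow.rx_packets += 1
```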
Examples of the disclosure include apparatuses, methods, and systems related to flow rules. In some examples, a method may include generating a plurality of rules corresponding to respective flows associated with a computing network. The method can further include determining, based on application of flow rules, whether data corresponding to the respective flows is to be stored by a switching sub-system of the network. In some examples, the method can include taking an action using the switching sub-system in response to the determination.
The processing resource(s) 104 can include hardware, circuitry, and/or logic that can be configured to execute instructions (e.g., computer code, software, machine code, etc.) to perform tasks and/or functions involving flow rules as described in more detail herein.
The flow composer component 102 can include hardware, circuitry, and/or logic that can be configured to execute instructions (e.g., computer code, software, machine code, etc.) to perform tasks and/or functions to generate, categorize, prioritize, and/or assign flow rules as described in more detail herein. In some examples, the flow composer component 102 can be deployed on (e.g., physically disposed on) a switching sub-system such as the switching sub-system 207 illustrated in FIG. 2.
As used herein, a “control plane” can refer to, for example, a part of a switch or router architecture that is concerned with computing a network topology and/or information in a routing table that corresponds to incoming packet traffic. In some examples, the control plane functions on a central processing unit of a computing system. As used herein, a “data plane” can refer to, for example, a part of a switch or router architecture that decides what to do with packets arriving on an inbound interface. In some examples, the data plane can include a data structure (e.g., a flow rule data structure 211 illustrated in FIG. 2) that can store flow rules applied to such packets.
In some examples, data plane 209 operations can be performed on the data structure at line rate, while control plane 208 operations can offer higher flexibility than data plane operations, but at a lower rate. Entries in the data structure corresponding to the data plane 209 can be system defined and/or user defined. Examples of entries that can be stored in the data structure include exact match tables, ternary content-addressable memory tables, etc., as described in more detail herein.
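The two entry types can be contrasted with a brief sketch. In the hypothetical example below, an exact-match table keys on a complete header tuple (a single hash probe, amenable to line-rate operation), while a ternary content-addressable memory (TCAM)-style table matches entries in which individual fields may be wildcarded; the field names and actions are illustrative assumptions.

```python
# Exact-match table: the full header tuple must match the key exactly.
exact_match = {
    ("vlan-100", "aa:bb:cc:dd:ee:ff"): "copy_to_control_plane",
}

# TCAM-style table: None acts as a wildcard ("don't care") for a field.
tcam = [
    ({"vlan": "vlan-100", "src_mac": None}, "copy_to_control_plane"),
]

def lookup(vlan, src_mac):
    # Try the exact-match table first (single hash lookup).
    action = exact_match.get((vlan, src_mac))
    if action is not None:
        return action
    # Fall back to scanning TCAM-style entries, skipping wildcarded fields.
    packet = {"vlan": vlan, "src_mac": src_mac}
    for match, act in tcam:
        if all(v is None or packet[k] == v for k, v in match.items()):
            return act
    return "forward_normally"

print(lookup("vlan-100", "aa:bb:cc:dd:ee:ff"))  # copy_to_control_plane
print(lookup("vlan-100", "00:11:22:33:44:55"))  # copy_to_control_plane (wildcard)
print(lookup("vlan-200", "00:11:22:33:44:55"))  # forward_normally
```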
The data plane 209 can collect and/or process network packets against a set of rules. The data plane 209 can cause the network packets to be delivered to the control plane 208 at a particular time, such as at a time of flow discovery. The control plane 208 can identify and/or manage flows, as described in more detail herein.
The memory resource(s) 206 can include volatile memory (e.g., dynamic random-access memory, static random-access memory, etc.) and/or non-volatile memory (e.g., one-time programmable memory, hard disk(s), solid state drive(s), optical discs, etc.). In some examples, the processing resource(s) 204 can execute the instructions stored by the memory resource(s) 206 to cause the flow composer component 202 to perform operations involving flow rules, as supported by the disclosure.
As used herein, a “data structure” can, for example, refer to a data organization, management, and/or storage format that can enable access and/or modification to data stored therein. A data structure can comprise a collection of data values, relationships between the data values, and/or functions that can operate on the data values. Non-limiting examples of data structures can include tables, arrays, linked lists, records, unions, graphs, trees, etc.
The flow rule data structure 211 shown in FIG. 2 can store flow rules that can be applied to packets traversing the network.
For example, the flow rules can include rules defining what endpoints in the network are associated with different packets in the network. Example rules can include rules governing the behavior of packets associated with particular VLANs, source MAC addresses, destination MAC addresses, internet protocols, transmission control protocols, etc., as described in more detail in connection with FIG. 3.
The flow rule prioritizer 312 can be a queue, register, or logic that can re-arrange an order in which the flow rules 313-1, . . . , 313-N can be executed. In some examples, the flow rule prioritizer 312 can operate as a packet 314 priority queue. In some examples, the flow rule prioritizer 312 can be configured to re-arrange application of the flow rules 313-1, . . . , 313-N in response to instructions received from the flow composer component 302.
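One way to picture the flow rule prioritizer is as a priority queue over rules; the sketch below is an illustrative assumption (lower numbers denoting higher priority, rule names hypothetical), not a description of any particular hardware queue or register.

```python
import heapq

class FlowRulePrioritizer:
    """Illustrative priority queue that re-arranges rule application order."""
    def __init__(self):
        self._heap = []
        self._seq = 0  # tie-breaker that preserves insertion order

    def push(self, priority, rule):
        # Lower priority value = applied earlier.
        heapq.heappush(self._heap, (priority, self._seq, rule))
        self._seq += 1

    def reprioritize(self, rule, new_priority):
        # Re-arrange application order, e.g., in response to instructions
        # received from the flow composer component.
        self._heap = [(p, s, r) for (p, s, r) in self._heap if r != rule]
        heapq.heapify(self._heap)
        self.push(new_priority, rule)

    def pop_next(self):
        return heapq.heappop(self._heap)[2]

pq = FlowRulePrioritizer()
pq.push(10, "rule-313-1")
pq.push(20, "rule-313-2")
pq.reprioritize("rule-313-2", 5)  # rule-313-2 now applied first
print(pq.pop_next())              # rule-313-2
```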
The flow rules 313-1, . . . , 313-N of the data plane 309 can be stored in a flow rule data structure such as the flow rule data structure 211 illustrated in FIG. 2.
An example listing of exact-match flow rules that can be included in the flow rules 313-1, . . . , 313-N follows. It is, however, noted that the example listing of flow rules below is not limiting, and flow rules can be added to the list, removed from the list, and/or performed in a different order than listed below. For example, the flow rule prioritizer 312 can operate to change the order of the flow rules, as described in more detail below. A simplified sketch of how such rules might be represented is given after the listing.
1. Copy a packet to the switching sub-system 307 (e.g., to the control plane 308) if the packet is associated with a particular VLAN.
2. Copy a packet to the switching sub-system 307 if the packet is associated with the particular VLAN and has a first source MAC address associated therewith.
3. Don't copy a packet to the switching sub-system 307 if the packet is associated with the particular VLAN and has a first source MAC address associated therewith.
4. Copy a packet to the switching sub-system 307 if the packet is associated with the particular VLAN and has a second source MAC address associated therewith.
5. Don't copy a packet to the switching sub-system 307 if the packet is associated with the particular VLAN and has a second source MAC address associated therewith.
6. Copy a packet to the switching sub-system 307 if the packet is associated with the particular VLAN and has a first source MAC address and a second destination MAC address associated therewith.
7. Don't copy a packet to the switching sub-system 307 if the packet is associated with the particular VLAN and has a first source MAC address and a second destination MAC address associated therewith.
8. Don't copy a packet to the switching sub-system 307 if the packet is associated with the particular VLAN and has a second source MAC address and a first destination MAC address associated therewith.
9. Copy a packet to the switching sub-system 307 if the packet is associated with a first source IP, a second destination IP, and TCP transport protocol type and/or a TCP destination port equal to a well-known SSH port value.
10. Don't copy a packet to the switching sub-system 307 if the packet is associated with a first source IP, a second destination IP, and TCP transport protocol type and/or a first TCP destination port equal to a well-known SSH port value and a second TCP source port equal to a well-known SSH port value.
11. Don't copy a packet to the switching sub-system 307 if the packet is associated with a first source IP, a second destination IP, and UDP protocol type.
12. Don't copy a packet to the switching sub-system 307 if the packet is associated with a first UDP source port and a second UDP destination port.
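As foreshadowed above, one way to represent such rules is as ordered match/action entries in which the most specific matching rule decides whether a packet is copied. The sketch below is a simplified assumption: the priorities, field names, and addresses are hypothetical, and only rules #1, #2, and #7 from the listing are encoded.

```python
# Each rule pairs exact-match criteria with a copy/don't-copy decision.
# Rules are evaluated in priority order (lowest value first), so a specific
# "don't copy" rule can override a broader "copy" rule once a flow is learned.
RULES = [
    # (priority, match criteria, copy_to_control_plane)
    (1, {"vlan": 100, "src_mac": "aa:bb:cc:dd:ee:ff",
         "dst_mac": "11:22:33:44:55:66"}, False),               # rule #7
    (2, {"vlan": 100, "src_mac": "aa:bb:cc:dd:ee:ff"}, True),   # rule #2
    (3, {"vlan": 100}, True),                                   # rule #1
]

def should_copy(packet):
    for _, match, copy in sorted(RULES, key=lambda r: r[0]):
        if all(packet.get(k) == v for k, v in match.items()):
            return copy
    return False  # default: no rule matched, nothing copied

pkt = {"vlan": 100, "src_mac": "aa:bb:cc:dd:ee:ff",
       "dst_mac": "11:22:33:44:55:66"}
print(should_copy(pkt))  # False: the already-learned flow is not re-copied
```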
In some examples, the flow composer component 302 can cause flow rules to be stored (e.g., embedded) in the control plane 308 based, at least in part, on the type of rule, a resource type associated with the rule, or combinations thereof. These rules can then be used to construct flow rules to be applied to a data plane 309 of the switching sub-system 307. A non-limiting example using the above list of flow rules follows.
In the following example, the control plane 308 can cause the flow rules 313-1, . . . , 313-N to be stored in a flow rule data structure such as the flow rule data structure 211 illustrated in FIG. 2.
A particular flow can be detected using source and/or destination MAC addresses. For example, once a L2 level flow has been detected using source and/or destination MAC addresses, rules #7 and #8 ensure that packets of the flow are not re-sent to the control plane 308.
A different flow, for example a L4 level flow, may demand construction of one or more new flows on a given application between, for example, two IP address endpoints of the network. For example, in the listing of flow rules above, rule #9 may be constructed to allow detection of multiple SSH flows between a first IP address and a second IP address. However, rule #10 prohibits duplicating the flow once detected. In this manner, examples described herein can operate to prevent duplicate flow rules from being copied to the control plane 308.
The flow rules 313-1, . . . , 313-N can be generated (e.g., constructed) automatically, for example, by flow composer 302 based on policy settings and/or based on metadata contained within the packet 314. Examples are not so limited, however, and the flow rules 313-1, . . . , 313-N can be generated dynamically by flow composer 302 or in response to one or more inputs and/or commands via management methods in the control plane 308. In some examples, the precedence of the flow rules 313-1, . . . , 313-N can be based on the type of rule and/or resources that may be used during the rule construction process.
In some examples, if a flow is terminated (e.g., aborted), the corresponding set of flow rules 313-1, . . . , 313-N can be deleted from the flow rule data structure. For example, the flow rules 313-1, . . . , 313-N that correspond to flows that have been terminated can be deleted from the switching sub-system 307. Flows may be terminated in response to events generated by a L2 level table or other system tables, in response to an action conducted by the control plane 308 as a result of one or more inputs and/or commands received via management methods, as a result of normal control plane protocol processing, or as a result of electrical changes in the switching sub-system 307, such as a loss of signal or a link-down indication from one or more physical ports.
The control plane 308 can take an action to terminate (or suspend) a flow based on flow statistics and/or a change in a state of the resource to which the flow corresponds. For example, in the listing of rules above, removal of the first MAC address could correspond to removal of rules #3, #6, #7, and #8.
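A short sketch of this cleanup follows; the rule contents and addresses are hypothetical stand-ins for the first and second MAC addresses in the listing above.

```python
# Hypothetical control-plane cleanup: when an endpoint (here, a MAC address)
# is removed, every flow rule referencing it is deleted from the flow rule
# data structure.
rules = {
    3: {"src_mac": "aa:bb:cc:dd:ee:ff"},                                  # rule #3
    6: {"src_mac": "aa:bb:cc:dd:ee:ff", "dst_mac": "11:22:33:44:55:66"},  # rule #6
    7: {"src_mac": "aa:bb:cc:dd:ee:ff", "dst_mac": "11:22:33:44:55:66"},  # rule #7
    8: {"src_mac": "11:22:33:44:55:66", "dst_mac": "aa:bb:cc:dd:ee:ff"},  # rule #8
    9: {"src_ip": "10.0.0.1", "dst_ip": "10.0.0.2"},                      # rule #9
}

def remove_endpoint(rules, mac):
    doomed = [rid for rid, match in rules.items() if mac in match.values()]
    for rid in doomed:
        del rules[rid]
    return doomed

print(remove_endpoint(rules, "aa:bb:cc:dd:ee:ff"))  # [3, 6, 7, 8]
```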
In some examples, certain flow rules can be embedded into the flow rule data structure. These rules can serve to enable tracking counters for given flows, as well as track or determine various attributes of the flow rules 313-1, . . . , 313-N. In some examples, the control plane 308 can embed (e.g., store) these rules depending on the type of rules and/or attributes of the flow corresponding to the flow rules. For example, a L2 level flow may include attributes for a number of bytes corresponding to transmission packets and/or received packets, jumbo frames transmitted and/or received, etc., while a L3 level flow may include attributes corresponding to received and/or transmitted packet counts, etc.
In some examples, flow rules 313-1, . . . , 313-N can be created and/or deleted. For example, flow rules 313-1, . . . , 313-N can be created and/or deleted in response to application of counter rules (e.g., counter rules 517 illustrated and described in connection with FIG. 5).
In a non-limiting example, a current flow rule 313-1, . . . , 313-N (e.g., a flow rule that exists and is in use) can be defined such that packets 314 associated with a particular VLAN (e.g., a VLAN-100) are copied to the control plane 308 and packets 314 with a particular source address (e.g., a MAC source address aa:bb:cc:dd:ee:ff) are copied to the control plane 308. An associated counter rule (e.g., counter rule 517 illustrated in FIG. 5) can increment a flow rule counter for packets 314 that match these flow rules.
If a packet 314 having a source MAC address of aa:bb:cc:dd:ee:ff and a destination MAC address of 11:22:33:44:55:66 is received, the packet 314 is copied to the control plane 308. In this example, because the packet 314 is copied to the control plane 308, a new rule may be generated to ensure that duplicate packets 314 are not copied to the control plane 308. For example, a new rule to not copy packets 314 with a source MAC address of aa:bb:cc:dd:ee:ff and destination MAC address 11:22:33:44:55:66 may be generated and added to the flow rules 313-1, . . . , 313-N.
In response to generation of the new flow rule 313-1, . . . , 313-N, a new counter rule may be generated. For example, a counter rule to increment the flow rule counter for packets 314 received and/or transmitted having a source MAC address of aa:bb:cc:dd:ee:ff and a destination MAC address of 11:22:33:44:55:66 may be generated and added to the counter rules.
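The preceding example can be summarized in sketch form. The sketch below is an illustrative assumption rather than the literal behavior of any switching sub-system: rules are modeled as match/action pairs, the most specific matching rule wins, and the counter rule is modeled as a simple per-flow packet counter.

```python
from collections import Counter

flow_rules = [
    # (match criteria, copy_to_control_plane); most specific rule wins
    ({"vlan": 100}, True),
    ({"src_mac": "aa:bb:cc:dd:ee:ff"}, True),
]
counter_rules = Counter()  # counter rule: per-flow packet counts

def copy_to_control_plane(packet):
    # The control plane learns the flow and installs a more specific
    # "don't copy" rule so that duplicate packets are not re-copied.
    flow_rules.append(({"src_mac": packet["src_mac"],
                        "dst_mac": packet["dst_mac"]}, False))

def handle(packet):
    key = (packet["src_mac"], packet["dst_mac"])
    counter_rules[key] += 1  # the counter rule keeps tracking the flow
    for match, copy in sorted(flow_rules, key=lambda r: -len(r[0])):
        if all(packet.get(k) == v for k, v in match.items()):
            if copy:
                copy_to_control_plane(packet)
            return

pkt = {"vlan": 100, "src_mac": "aa:bb:cc:dd:ee:ff",
       "dst_mac": "11:22:33:44:55:66"}
handle(pkt)  # copied once; the suppression rule is installed
handle(pkt)  # matched by the new, more specific rule; not copied again
print(counter_rules[("aa:bb:cc:dd:ee:ff", "11:22:33:44:55:66")])  # 2
```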
The process of creating a new flow rule can include recognition of uni-directional, broadcast, and/or multicast packet 314 types. This may lead to a plurality of packets 314 being handled to create a new flow rule 313-1, . . . , 313-N. If a flow rule is created by the control plane 308, in some examples, user-defined tables and/or flow rule 313-1, . . . , 313-N entries may be created and/or stored in the data plane 309.
In some examples, flow rules 313-1, . . . , 313-N and/or flows can be deleted. When a flow is deleted, the corresponding flow rules 313-1, . . . , 313-N may also be deleted. In addition, as described in more detail in connection with FIG. 5, counter rules that correspond to the deleted flow rules 313-1, . . . , 313-N may be deleted as well.
The flow rules 313-1, . . . , 313-N can be subjected to packet-processing filters in order to apply the flow rules 313-1, . . . , 313-N to incoming packets 314 in a “single pass” (e.g., at once, without iteration). Packet-processing filters that may be used to process the flow rules 313-1, . . . , 313-N and/or the flows can include exact match handling (described above), ingress content aware processor (iCAP), egress content aware processor (eCAP), and/or virtual content aware processor (vCAP) filters.
The VMs 410-1, . . . , 410-N can be provisioned with processing resources 404 and/or memory resources 406. The processing resources 404 and the memory resources 406 provisioned to the VMs 410-1, . . . , 410-N can be local and/or remote to the system 403. For example, in a software defined network, the VMs 410-1, . . . , 410-N can be provisioned with resources that are generally available to the software defined network and not tied to any particular hardware device. By way of example, the memory resources 406 can include volatile and/or non-volatile memory available to the VMs 410-1, . . . , 410-N. The VMs 410-1, . . . , 410-N can be moved to different hosts (not specifically illustrated), such that the VMs 410-1, . . . , 410-N are managed by different hypervisors.
In some examples, the flow composer component 402 can cause performance of actions based on flow rules (e.g., the flow rules 313-1, . . . , 313-N illustrated in FIG. 3). For example, the flow composer component 402 can perform a statistical analysis operation using flow rules that correspond to flows associated with the VMs 410-1, . . . , 410-N.
Based on the statistical analysis, the flow composer component 402 can be configured to re-allocate resources (e.g., processing resources 404 and/or memory resources 406) to different VMs. This can improve performance of the system 403 and/or optimize resource allocation among the VMs 410-1, . . . , 410-N. Information corresponding to the statistical analysis operation and/or information corresponding to the reallocation of the resources amongst the VMs can be stored (e.g., by the memory resource 406) and/or displayed to a network administrator via, for example, a graphical user interface.
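One hypothetical way such an analysis could drive reallocation is sketched below; the traffic shares, thresholds, and memory units are assumptions chosen purely for illustration.

```python
# Per-VM packet counts accumulated from flow rule counters (illustrative).
flow_stats = {
    "vm-410-1": 9_000_000,
    "vm-410-2": 500_000,
    "vm-410-N": 500_000,
}
memory_gib = {"vm-410-1": 4, "vm-410-2": 4, "vm-410-N": 4}

total = sum(flow_stats.values())
for vm, pkts in flow_stats.items():
    share = pkts / total
    if share > 0.5:    # traffic-heavy VM receives additional memory
        memory_gib[vm] += 2
    elif share < 0.1:  # lightly loaded VMs release some memory
        memory_gib[vm] = max(1, memory_gib[vm] - 1)

# Result could be stored and/or displayed to a network administrator.
print(memory_gib)  # {'vm-410-1': 6, 'vm-410-2': 3, 'vm-410-N': 3}
```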
A plurality of switches 530-1, . . . , 530-N can be communicatively coupled to virtualization fabrics 531-1, . . . , 531-N. In some examples, the switches 530-1, . . . , 530-N can be top-of-rack switches. The virtualization fabrics 531-1, . . . , 531-N can be configured to provide movement of virtual machines (e.g., VMs 510-1, . . . , 510-N) between servers, such as blade servers, and/or virtual machines. A non-limiting example of a virtualization fabric 531-1, . . . , 531-N can be HEWLETT PACKARD VIRTUAL CONNECT®. In some examples, one or more of the virtualization fabrics (e.g., virtualization fabric 531-2 and virtualization fabric 531-N) can be linked together such that they appear as a single logical unit.
The virtualization fabrics 531-1, . . . , 531-N can include respective control planes 508-1, . . . , 508-N and respective data planes 509-1, . . . , 509-N.
The data planes 509-1, . . . , 509-N can include flow rules 513-1, . . . , 513-N as described in connection with FIG. 3. The data planes 509-1, . . . , 509-N can further include counter rules 517.
The counter rules 517 can be installed during an initialization process and/or may be generated against policy (e.g., may be policy based) during runtime. In some examples, flows and/or flow rules 513-1, . . . , 513-N can be created and/or deleted based on the counter rules 517. For example, the counter rules 517 may track detected flows and may be used to determine if flows and/or flow rules 513-1, . . . , 513-N are to be created or deleted. If a flow or flow rule 513-1, . . . , 513-N is deleted in response to a counter rule 517, the corresponding counter rule 517 may be deleted as well.
The control planes 508-1, . . . , 508-N can include a flow composer component 502 and/or a flow rule prioritizer 512. The flow rule prioritizer 512 can be a queue, register, or logic that can re-arrange an order in which the flow rules 513-1, . . . , 513-N can be executed. In some examples, the flow rule prioritizer 512 can operate as a packet priority queue. In some examples, the flow rule prioritizer 512 can be configured to re-arrange application of the flow rules 513-1, . . . , 513-N in response to instructions received from the flow composer component 502.
The virtualization fabric 531-1 can be communicatively coupled to virtualized servers 532-1, . . . , 532-N and/or a bare metal server 533 via, for example, a management plane. In some examples, the management plane can configure, monitor, and/or manage layers of the network.
The bare metal server 533 can include processing resource(s) 504-3 and/or memory resources 506-3. The bare metal server 533 can be a physical server, such as a single-tenant physical server.
The virtualized servers 532-1, . . . , 532-N can include processing resource(s) 504-2/504-N and/or memory resources 506-2/506-N that can provision VMs 510-1, . . . , 510-N that are associated therewith. The VMs 510-1, . . . , 510-N can be analogous to the VMs 410-1, . . . , 410-N described above in connection with FIG. 4.
At block 642, the method can include determining, based on application of flow rules, whether data corresponding to the respective flows is to be stored by a switching sub-system of the network. The switching sub-system can be analogous to the switching sub-system 307 illustrated in FIG. 3.
At block 644, the method can include taking an action using the switching sub-system in response to the determination. The action can include copying (or not copying) the flow rules to a control plane of the switching sub-system, re-arranging application of the flow rules, deleting one or more flow rules, performing a statistical analysis operation using the flow rules, etc., as supported by the disclosure.
For example, the method can include determining that a first respective flow has a higher priority than a second respective flow and executing the action by processing the first respective flow prior to processing the second respective flow. In some examples, the second respective flow was, prior to the determination that the first respective flow has the higher priority, scheduled to be executed prior to the first respective flow. Stated alternatively, application of the flow rules can be dynamically altered or changed.
In some examples, the method can include incrementing a flow execution counter in response to executing the action. The flow execution counter can be used to track a quantity of times that a particular flow rule has been executed. For example, the flow execution counter can be incremented each time an action is taken by the switching sub-system in relation to a particular flow rule. This can allow for statistical analysis to be performed to determine which flow rules are executed more frequently than others, which flow rules involve particular network resources, etc.
In the foregoing detailed description of the disclosure, reference is made to the accompanying drawings that form a part hereof, and in which is shown by way of illustration how examples of the disclosure may be practiced. These examples are described in sufficient detail to enable those of ordinary skill in the art to practice the examples of this disclosure, and it is to be understood that other examples may be utilized and that process, electrical, and/or structural changes may be made without departing from the scope of the disclosure. As used herein, designators such as “N”, etc., particularly with respect to reference numerals in the drawings, indicate that a number of the particular feature so designated can be included. A “plurality of” is intended to refer to more than one of such things. Multiple like elements may be referenced herein by their reference numeral without a specific identifier at the end.
The figures herein follow a numbering convention in which the first digit corresponds to the drawing figure number and the remaining digits identify an element or component in the drawing. For example, reference numeral 102 may refer to element “02” in FIG. 1, and an analogous element may be identified by reference numeral 202 in FIG. 2.