Embodiments of the present invention relate to networking equipment and, in particular, to hardware and software architectures and components for networking equipment used in data centers and multi-site enterprise networks.
There are business needs to deploy independent yet interconnected data center fabrics, for example, for application fault-domain isolation, separate change domains, disaster avoidance, and disaster recovery. In such applications, multi-site software-defined networks, such as application-centric infrastructure (ACI), may be configured with an inter-site controller or orchestrator that operates in conjunction with individual intra-site software-defined network controllers to manage end-to-end policies.
The inter-site controller or orchestrator and intra-site controller may invoke any of a number of L4-L7 service insertions, such as deep packet inspection (DPI), load balancing (LB), intrusion prevention system (IPS), malware protection, or firewall operations, for inter-site traffic between such data centers or sites. A common type of inter-site traffic is inter-site East-West data center traffic, for example, traffic routed between data centers located on the US West Coast and the US East Coast. Service insertion generally refers to the addition of networking services, such as DPI, LB, firewalls, and IPS, among other services described herein, into the forwarding path of traffic. Service insertion can be performed as a chain in which the inserted services are linked in a prescribed manner, such as proceeding through a firewall, then an IPS, and finally malware protection, before forwarding to the end-user.
With application-centric infrastructure (ACI), macro segmentation, and inter-site data center control, groups of hosts/endpoints may be defined to share similar policy characteristics (e.g., via a security, performance, or visibility policy or contract) within virtual machines, containers, or physical servers. These groups of hosts/endpoints may be referred to as EndPoint Groups (EPGs).
The embodiments herein may be better understood by referring to the following description in conjunction with the accompanying drawings in which like reference numerals indicate identical or functionally similar elements, of which:
Overview
In an aspect, an embodiment of the present disclosure is directed to a set of data centers and associated controls in which the data centers include network fabrics comprising network routing devices (e.g., switches, routers) configured to route bi-directional traffic symmetrically through insertable services, e.g., via the associated inter-site and intra-site controls, for a given set of policies or contracts using ASIC or circuit-assisted arithmetic logic, enforcing such policies at the local network devices, to deterministically select the insertable services.
In some embodiments, a network device (e.g., a switch of a first network site or infrastructure) is disclosed comprising a high-speed memory (e.g., TCAM); and a logic circuitry (e.g., ASIC, NPU, or other circuitries) operatively coupled to the high-speed memory, the logic circuitry being configured, via a pipeline operation comprising an arithmetic or bitwise operator, to route bi-directional traffic symmetrically through inserted services among two or more network sites or infrastructures (e.g., data centers), including a first network site or infrastructure and a second network site or infrastructure, by: receiving, via the logic circuitry, a packet of the bi-directional traffic for an application executing between computing resources located at the first network site or infrastructure and the second network site or infrastructure; determining, via the arithmetic or bitwise operator, an output value derived from routing data located within the packet (e.g., a source address identifier and a destination address identifier, e.g., an IP address or MAC address); and routing the packet to at least one of a set of one or more insertable network services in accordance with a policy or contract based at least in part on the output value of the arithmetic or bitwise operator.
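For purposes of illustration only, the following is a minimal Python sketch of how such an arithmetic or bitwise operator may deterministically select a symmetric routing decision from source and destination tags; the function names and tag values are hypothetical assumptions and not the disclosed hardware implementation:

```python
def xor_select(src_tag: int, dst_tag: int, num_instances: int) -> int:
    """Pick one service instance from the source/destination tags; because XOR
    is commutative, forward (S, D) and reverse (D, S) traffic select the same
    instance, yielding symmetric bi-directional routing."""
    return (src_tag ^ dst_tag) % num_instances

def comparator_redirect(src_tag: int, dst_tag: int) -> bool:
    """Arithmetic comparator (S < D): TRUE for exactly one direction of a flow,
    so exactly one side of the bi-directional flow triggers the redirect."""
    return src_tag < dst_tag

s, d = 1000, 2000                                      # hypothetical endpoint tags
assert xor_select(s, d, 2) == xor_select(d, s, 2)      # same instance both ways
print("forward redirect:", comparator_redirect(s, d))  # True
print("reverse redirect:", comparator_redirect(d, s))  # False
```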
In some embodiments, the packet is received at a second network device of the second network site or infrastructure; the second network device is configured to (i) receive the packet via logic circuitries of the second network device, (ii) determine via an arithmetic or bitwise operator of the second network device an output value derived from the routing data located within the packet, and (iii) route the packet to at least one of a set of one or more insertable network services of the second network site or infrastructure in accordance with the policy or contract based at least in part on the output value of the arithmetic or bitwise operator of the second network device.
In some embodiments, the routing of the packet to at least one of the set of one or more insertable network services in accordance with the policy or contract employs (i) the output value of the arithmetic or bitwise operator and (ii) a flag or identifier associated with the destination address being a local network device of a network associated with the network device.
In some embodiments, the arithmetic or bitwise operator comprises an arithmetic comparator.
In some embodiments, the arithmetic or bitwise operator comprises an XOR bitwise comparator.
In some embodiments, the set of one or more insertable network services includes at least one of a deep packet inspection (DPI) service, a load balancing (LB) service, an intrusion prevention system (IPS) service, a malware protection service, and a firewall inspection service.
In some embodiments, the policy or contract includes at least one of a security policy, a performance policy, a quality-of-service (QOS) policy, a disaster recovery policy, and a visibility policy.
In some embodiments, the high-speed memory maintains a logic table that selects, via a single lookup action of the high-speed memory, a network action (e.g., by a data plane of the network device) to route the packet to the set of one or more insertable network services in accordance with the policy or contract or to bypass the network action.
In some embodiments, the high-speed memory comprises ternary content-addressable memory (TCAM).
In some embodiments, the logic table includes a first rule to select the network action to route the packet to the set of one or more insertable network services in accordance with the policy or contract based on a first value (e.g., FALSE) for the arithmetic or bitwise operator and a masked value (e.g., don't care) for a flag or identifier associated with the destination address being a local network device of a network associated with the network device.
In some embodiments, the logic table includes a second rule to select the network action to route the packet to the set of one or more insertable network services in accordance with the policy or contract based on a masked value (e.g., don't care) for the arithmetic or bitwise operator and a second value (e.g., TRUE) for a flag or identifier associated with the destination address being a local network device of a network associated with the network device.
In some embodiments, the logic table includes a third rule to bypass the network action to route the packet to the set of one or more insertable network services in accordance with the policy or contract based on a second value (e.g., TRUE) for the arithmetic or bitwise operator and a first value (e.g., FALSE) for a flag or identifier associated with the destination address being a local network device of a network associated with the network device.
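As a non-limiting illustration, the three rules above may be modeled in software as follows, with None standing in for the masked ("don't care") fields; the encoding and rule names are assumptions for illustration only:

```python
# Illustrative model of the logic table; field encodings are assumed.
# Each rule matches on (operator output, destination-is-local); None = "don't care".
RULES = [
    {"name": "rule1", "op_output": False, "dest_is_local": None,  "action": "redirect"},
    {"name": "rule2", "op_output": None,  "dest_is_local": True,  "action": "redirect"},
    {"name": "rule3", "op_output": True,  "dest_is_local": False, "action": "bypass"},
]

def lookup(op_output: bool, dest_is_local: bool) -> str:
    """Single-lookup selection: the first rule whose non-masked fields all match wins."""
    for rule in RULES:
        if rule["op_output"] not in (None, op_output):
            continue
        if rule["dest_is_local"] not in (None, dest_is_local):
            continue
        return rule["action"]
    return "bypass"  # default if nothing matches

print(lookup(False, False))  # redirect (first rule)
print(lookup(True, True))    # redirect (second rule)
print(lookup(True, False))   # bypass   (third rule)
```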
In some embodiments, the policy or contract is defined at an inter-site controller or an intra-site controller, the policy or contract being provided from the inter-site controller or the intra-site controller to configure the logic table.
In some embodiments, the network device further includes a processor; and a memory having instructions stored thereon, wherein execution of the instructions by the processor causes the processor to (i) receive the policy or contract (e.g., from the inter-site controller or the intra-site controller) and (ii) store a routing action of the policy or contract to the high-speed memory.
In some embodiments, the packet is a multicast packet and is received at the second network device of the second network site or infrastructure and a third network device of the second network site or infrastructure.
In some embodiments, the packet is a multicast packet and is received at the second network device of the second network site or infrastructure and a third network device of a third network site or infrastructure.
In another aspect, a system (e.g., inter-site controller or the intra-site controller) is disclosed comprising a processor; and a memory having instructions stored thereon, wherein execution of the instructions by the processor causes the processor to: receive (or determine) a policy or contract for a set of one or more insertable network services to execute between two or more network sites or infrastructures (e.g., data centers), including a first network site or infrastructure and a second network site or infrastructure; transmit the policy or contract to one or more first network devices of the first network site or infrastructure, including a first network device, and one or more second network devices of the second network site or infrastructure, including a second network device, wherein the first network device and the second network device are configured to symmetrically route bi-directional traffic among each other, wherein the first network device includes a high-speed memory (e.g., TCAM) and a logic circuitry (e.g., ASIC, NPU, or other circuitries) operatively coupled to the high-speed memory, the logic circuitry being configured, via a pipeline operation comprising an arithmetic or bitwise operator, to symmetrically route bi-directional traffic with the second network device.
In some embodiments, the bi-directional traffic is received at a second network device of the second network site or infrastructure; the second network device is configured to (i) receive a packet of the bi-directional traffic via logic circuitries of the second network device, (ii) determine via an arithmetic or bitwise operator of the second network device an output value derived from the routing data located within the packet, and (iii) route the packet to at least one of a set of one or more insertable network services of the second network site or infrastructure in accordance with the policy or contract based at least in part on the output value of the arithmetic or bitwise operator of the second network device.
In some embodiments, the first network device is configured to route a packet to at least one of a set of one or more insertable network services in accordance with the policy or contract using (i) an output value determined from the arithmetic or bitwise operator and (ii) a flag or identifier associated with the destination address being a local network device of a network associated with the first network device.
In some embodiments, the arithmetic or bitwise operator comprises an arithmetic comparator or an XOR bitwise comparator.
In the example shown in
The terms “policy” and “contract” (e.g., 112) are used interchangeably herein and refer to a collection of network control rules that includes the controls for a set of one or more service insertions (or a set thereof comprising service chaining for multiple linked service insertions) in which network services are insertable in a software-defined network operation (e.g., software-defined WAN or LAN) that can provide such network services at a central location to a set of computing or network resources. The policy or contract can be employed for any type of application-centric infrastructure (ACI), macro segmentation, and multi-site data center control to share any type of similar policy characteristics (e.g., via a security, performance, or visibility policy or contract). In some embodiments, the policy can be arbitrarily defined or can be an aggregate of multiple policies (e.g., security and inspection, etc.). The software-defined network operation beneficially aggregates the network services to a single location or hub that would otherwise have to be individually deployed for all applicable network appliances or computing resources, thus improving the execution and maintenance of such services.
The term “insertable services” (also referred to as “inserted services”) (e.g., 114) refers to L4, L5, L6, or L7 insertable services such as deep packet inspection (DPI), load balancing (LB), intrusion prevention system (IPS), intrusion detection system (IDS), malware protection, firewall operations such as web application firewall (WAF), WAN optimization, WAN accelerators, encryption and decryption, and secure socket layer (SSL) offload, among others. In some embodiments, the insertable services (e.g., 114) may additionally be deployed using specialized controllers/network appliances, e.g., via an application policy infrastructure controller (APIC) (not shown) and/or an application delivery controller (ADC) (not shown).
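By way of a non-limiting sketch, a policy or contract and its associated chain of insertable services might be represented as simple data objects; the field names below are illustrative assumptions rather than a prescribed schema:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class InsertableService:
    name: str          # e.g., "FW1", "IPS1", "LB1"
    site: str          # site or infrastructure hosting the service
    next_hop: str      # address used when redirecting traffic to the service

@dataclass
class Contract:
    name: str                                   # e.g., "security-and-inspection"
    source_group: str                           # source EPG / macro-segment
    destination_group: str                      # destination EPG / macro-segment
    service_chain: List[InsertableService] = field(default_factory=list)

# Example: a single firewall inserted between two endpoint groups.
contract = Contract(
    name="prod-app-to-prod-db",
    source_group="PROD-APP100",
    destination_group="PROD-DB100",
    service_chain=[InsertableService("FW1", site="site-1", next_hop="10.0.0.1")],
)
```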
In the example shown in
The arithmetic operator or bitwise operator ensures that the routing to the inserted network service(s) is deterministically selected and consistently applied in an inter-network-wide manner by local hardware to provide symmetrical bi-directional traffic through inserted network services for a given policy or contract. Examples of arithmetic logic (e.g., 126) include arithmetic comparators (such as less-than or greater-than) or bitwise operators such as exclusive-OR (XOR), e.g., applied to a set of predefined digits. In some embodiments, the operation is performed on the entire tag value (parameter value or classification identifier). In other embodiments, the operation is performed on a portion of the tag value (parameter value or classification identifier), e.g., on a pre-defined length of the MSB or the LSB of such values.
In other embodiments, the operator may be based on a hash operator.
In some embodiments, the arithmetic operator or bitwise operator may also operate in conjunction with a second condition, e.g., if the packet is received at a network node and the destination is local to that node, i.e., within the same intra-site network or data center. That is, a host connected to a network node is identified as destination-is-local when the network node (e.g., ToR/switch) directly receives packets from the host. The arithmetic operator or bitwise operator and the second condition may be invoked once, which may be tracked by an applied-once flag or identifier provided in the packet.
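For illustration, the per-hop decision that combines the operator output, the destination-is-local condition, and the applied-once flag may be sketched as follows; the rule ordering mirrors the logic table described above, and the specific encoding is an assumption:

```python
def per_hop_decision(src_tag: int, dst_tag: int,
                     dest_is_local: bool, already_applied: bool) -> str:
    """Decide at a network node whether to redirect the packet to the inserted
    service or forward it normally; the applied-once flag in the packet
    short-circuits the evaluation once the service has been applied."""
    if already_applied:
        return "forward"                 # service already inserted at a prior hop
    op_output = src_tag < dst_tag        # example arithmetic comparator (S < D)
    if not op_output or dest_is_local:
        return "redirect"                # first and second rules of the logic table
    return "forward"                     # third rule: bypass the redirect

# Forward direction: at the ingress node the destination is remote, so bypass here;
# at the node where the destination is local, redirect to the inserted service.
print(per_hop_decision(1000, 2000, dest_is_local=False, already_applied=False))  # forward
print(per_hop_decision(1000, 2000, dest_is_local=True, already_applied=False))   # redirect
```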
In the example shown in
The symmetric bi-directional traffic is routed through an external network 138 such as an internet service provider (ISP) network, telecom carrier, network provider, or a combination thereof.
Example Methods of Operation to Macro-Segment Using Hardware-Assisted Operation
As used herein, the term “hardware-assisted” refers to operations performed by data-plane resources or logic circuitries in a processing pipeline or a hardware-accelerated circuit or memory, e.g., via ternary CAMs (TCAMs) that are employed for packet routing in switchgear equipment or appliances. In some embodiments, the hardware-assisted operation is performed on a per-packet basis, e.g., at 100 Gbits/second or greater, to correspond to packet-routing speed. In some embodiments, the operation is performed on a per-flow basis. In some embodiments, the operation is performed on a multicast packet.
In an aspect, Method 200a is configured to route bi-directional traffic symmetrically through insertable services between two network sites or infrastructure (e.g., data centers) in accordance with an illustrative embodiment.
Method 200a includes receiving (202), at a first network site or infrastructure, a packet of bi-directional traffic for an application (e.g., 116) executing between computing resources (e.g., 110) (e.g., hosts) located at the first network site or infrastructure (e.g., 102) and a second network site or infrastructure (e.g., 102).
Method 200a then includes determining (204), via ASIC or logic circuitries executing an arithmetic operator (e.g., of a network device in the first network site), an output value (e.g., 126) of the arithmetic operator from a parameter (e.g., tag) (e.g., 120, 122) associated with a policy or contract (e.g., 112) having an associated set of one or more insertable network services (e.g., 114).
Method 200a then includes routing (206) the packet to at least one of the set of one or more insertable network services (e.g., 114) based on the output value (e.g., 126) of the arithmetic operator.
In another aspect, Method 200b is configured to route bi-directional traffic symmetrically through insertable services between two or more network sites or infrastructures (e.g., data centers) in accordance with another illustrative embodiment.
Method 200b includes receiving (202), at a first network site or infrastructure, a packet of bi-directional traffic for an application (e.g., 116) executing between computing resources (e.g., 110) located at the first network site or infrastructure (e.g., 102) and a second network site or infrastructure (e.g., 102).
Method 200b then includes determining (204′), via ASIC or logic circuitries executing an arithmetic operator (e.g., of a network device in the first network site), an output value (e.g., 126) of the arithmetic operator from a parameter (e.g., tag) (e.g., 120, 122) associated with a policy or contract (e.g., 112) having an associated set of one or more insertable network services (e.g., 114). The ASIC or logic circuitries may additionally determine a packet destination lookup to determine if the destination device is located within the first network site or infrastructure.
Method 200b then includes routing (208) the packet to at least one of the set of one or more insertable network services based on (i) a first parameter derived from the output value of the operator and (ii) a second parameter associated with a destination device of the packet being within the first network site or infrastructure. In some embodiments, the first parameter and the second parameter are based on a tag (e.g., a policy or contract tag) assigned to the application (e.g., 116) executing between the computing resources (e.g., 110) located on the different network sites or infrastructures (e.g., 102). The operator may be an arithmetic operator, e.g., an arithmetic comparator, an XOR operator, or other pipeline-able hardware or logic circuit described herein.
It should be appreciated that Methods 200a and 200b may be performed on a second or third network site or infrastructure, and so on, for a second or third set of insertable network services, respectively, executing thereat for the given application (e.g., 116) to provide the symmetric bi-directional traffic routing through corresponding insertable services between two or more network sites or infrastructures.
Method 200c then includes determining (212), via ASIC or logic circuitries executing an arithmetic operator (e.g., of a network device in the second network site), an output value (e.g., 126) of the arithmetic operator from a parameter (e.g., tag) (e.g., 120, 122) associated with the policy or contract (e.g., 112) having an associated set of one or more insertable network services (e.g., 114).
Method 200c then includes routing (206) the packet to at least one of the set of one or more insertable network services (e.g., 114) of the second network site based on the output value (e.g., 126) of the arithmetic operator.
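As an illustrative check of the symmetry property that Methods 200a-200c are intended to provide, the following sketch simulates a packet traversing one node at each site in both directions and verifies that exactly one node per direction redirects the packet to the inserted service; the decision logic reuses the hypothetical encoding sketched earlier:

```python
def node_decision(src_tag: int, dst_tag: int,
                  dest_is_local: bool, already_applied: bool) -> str:
    """Per-node decision using the same hypothetical encoding as earlier sketches."""
    if already_applied:
        return "forward"                      # service already inserted upstream
    if not (src_tag < dst_tag) or dest_is_local:
        return "redirect"                     # redirect to the inserted service
    return "forward"                          # bypass rule

def traverse(src_tag: int, dst_tag: int) -> list:
    """The packet visits the ingress node (destination remote) and then the
    egress node (destination local); the applied-once flag is set on redirect."""
    applied = False
    decisions = []
    for dest_is_local in (False, True):       # ingress node, then egress node
        action = node_decision(src_tag, dst_tag, dest_is_local, applied)
        decisions.append(action)
        applied = applied or (action == "redirect")
    return decisions

forward = traverse(1000, 2000)                # e.g., app EPG to database EPG
reverse = traverse(2000, 1000)                # return traffic
assert forward.count("redirect") == 1 and reverse.count("redirect") == 1
print(forward, reverse)   # ['forward', 'redirect'] ['redirect', 'forward']
```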
Example Method of Operation for Inter-Site or Intra-Site Controller to Configure Bi-Directional Symmetric Traffic Between Inserted Services
Method 200d includes receiving (216) (or generating), at a controller (e.g., an intra-site controller (e.g., 106) or inter-site controller (e.g., 104)), a first policy or contract (e.g., 112) having an associated set of insertable network services (e.g., 114) to be orchestrated for an application (e.g., 116) executing between two or more computing resources (e.g., 110) located at a first network site or infrastructure (e.g., 102) and a second network site or infrastructure (e.g., 102), respectively.
Method 200d includes determining (218) a set of routing tables or rules (e.g., 130) for symmetric bi-directional traffic routing for the associated set of insertable network services (e.g., 114) executing at the first network site or infrastructure (e.g., 102) and at the second network site or infrastructure (e.g., 102).
Method 200d includes programming (220) respective ASIC or logic circuitries of (i) a first network device (e.g., 108) associated with the first network site or infrastructure (e.g., 102) and (ii) a second network device (e.g., 108) associated with the second network site or infrastructure (e.g., 102), with the set of routing tables or rules (e.g., 130) in which the first and second network devices (e.g., 108) are configured to route received packets associated with the application (e.g., 116) to at least one of the associated set of insertable network services (e.g., 114) based on the programmed ASIC or logic circuitries.
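A minimal controller-side sketch of Method 200d is shown below; the device-programming interface (program_rules) is a hypothetical stand-in for an actual southbound API and is shown only to illustrate the flow of deriving and distributing the rule set:

```python
from typing import Dict, List

def derive_rules(contract_name: str) -> List[Dict]:
    """Derive symmetric routing rules for a policy or contract; the encoding
    mirrors the three-rule logic table sketched earlier."""
    return [
        {"contract": contract_name, "op_output": False, "dest_is_local": None,  "action": "redirect"},
        {"contract": contract_name, "op_output": None,  "dest_is_local": True,  "action": "redirect"},
        {"contract": contract_name, "op_output": True,  "dest_is_local": False, "action": "bypass"},
    ]

class NetworkDevice:
    """Simplified stand-in for a leaf switch at a site."""
    def __init__(self, name: str):
        self.name = name
        self.tcam_rules: List[Dict] = []

    def program_rules(self, rules: List[Dict]) -> None:
        """Stand-in for writing the rules into the device's high-speed memory."""
        self.tcam_rules.extend(rules)

def configure_sites(contract_name: str, devices: List[NetworkDevice]) -> None:
    """Push the same rule set to the devices at each site so that both sides of
    the bi-directional flow make consistent redirect decisions."""
    rules = derive_rules(contract_name)
    for device in devices:
        device.program_rules(rules)

site1_leaf = NetworkDevice("site1-leaf1")
site2_leaf = NetworkDevice("site2-leaf1")
configure_sites("prod-app-to-prod-db", [site1_leaf, site2_leaf])
print(len(site1_leaf.tcam_rules), len(site2_leaf.tcam_rules))  # 3 3
```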
In some embodiments, the programmed ASIC or logic circuitries can execute the methods (e.g., 200a, 200b, and/or 200c) as described in relation to
In some embodiments, the programmed ASIC or logic circuitries can execute a bitwise operator or an arithmetic operator in a hardware-assisted pipeline or ASIC pipeline to route the symmetric bi-directional traffic through the desired inserted network services.
Example Multi-Site Service Insertion Architecture with ACI
With application-centric infrastructure (ACI) technology, and the like, a group of computing resources, e.g., an EndPoint Group (EPG), can be created as a foundational construct on which policies (e.g., contracts), e.g., for security, performance, quality-of-service (QoS) metrics or tags, disaster recovery, among others described herein, can be deployed. To this end, an EPG can include and represent (e.g., by an EPG number) a group of hosts/endpoints behind virtual machines, containers, or physical servers that each share similar policy characteristics (e.g., security characteristics).
In an example,
An ACI enterprise solution may include hundreds or thousands of such backend operations (shown as “PROD-APP100” 310 and “PROD-DB100” 316) that can be split among multiple data centers. Manual configurations, e.g., by network operators or administrators, to link individual inserted network services by a contract or policy can be difficult and cumbersome to manage at such a scale.
For example, in
In another example, for ACI operations, a single any-to-any contract (or Macro-Segment) having service insertion with flexible application port selection may be configured, e.g., to filter traffic redirected to the inserted services. The inserted services may be spread across the different data centers or remote sites, e.g., to provide for improved scalability, availability, locality, or a combination thereof.
In the example shown in
The ASIC or circuit-assisted operator logic and associated arithmetic or bitwise operation provide a unique data path to deterministically select an inserted network service for multiple directions of traffic flow. The implementation is straightforward and of low complexity, which likewise makes deployment straightforward for network operators and administrators. The exemplary hardware-assisted operation may be implemented for any multi-site data center or enterprise fabric which may employ SG-ACL for a policy or contract (e.g., security) and service insertion, e.g., to improve scalability via automation.
While role- or SG-based ACLs as used for macro-segment-based distributed service insertion may result in a scenario having a tie between the selection of inserted services, the ASIC or circuit-assisted operator logic and associated arithmetic or bitwise operation may be employed in such configurations to break such ties in a deterministic manner. That is, the exemplary system may be applied to any fabric, e.g., where SG-ACLs or role-based ACLs are employed to convey the policy intent in a flexible, automatable way and at scale; in these fabrics, ingress or egress network devices (e.g., ingress or egress ACL devices) are typically or previously applied at each interface in a non-scalable manner. Tie-breaking may be technically challenging, e.g., in the macro-segment- (or VRF-) based direct or redirect, because the routing decisions that govern inter-site forwarding may be independent of the policy operations, and the bi-directional flow should pick exactly one firewall in the path.
Random-access memory (RAM) generally operates by returning the content of memory cells at a specified address. Content-addressable memory (CAM) returns the address or location of content of interest, e.g., a binary key. While binary CAM can match only on binary zeroes and ones, ternary content-addressable memory (TCAM) employs a mask that can also match a third state: any value, or “don't care.” Here, the TCAM lookup operation facilitates a lookup between two operations for a given policy or contract, namely, to route the packet to the defined inserted network service (shown as “R11” 326, “R21” 326′, “R12” 328, and “R22” 328′) or to bypass/ignore this routing operation (shown as “R10” 324 and “R20” 324′).
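The masked-match behavior of a TCAM entry can be illustrated with a small software model; note that actual TCAM hardware evaluates all entries in parallel in a single lookup, whereas the sketch below iterates for clarity, and the two-bit key encoding is an assumption:

```python
# Software model of a ternary match: each entry stores a value and a mask;
# bits where the mask is 0 are "don't care". Real TCAM hardware compares every
# entry in parallel and returns the highest-priority (here, first) match.

def ternary_match(key: int, entries):
    """Return the action of the first entry whose masked value equals the
    masked key, or None if nothing matches."""
    for value, mask, action in entries:
        if key & mask == value & mask:
            return action
    return None

# Two-bit key: bit 1 = operator output, bit 0 = destination-is-local flag.
ENTRIES = [
    (0b00, 0b10, "redirect"),   # operator output FALSE, local bit masked out
    (0b01, 0b01, "redirect"),   # destination-is-local TRUE, operator masked out
    (0b10, 0b11, "bypass"),     # operator TRUE and destination-is-local FALSE
]

print(ternary_match(0b00, ENTRIES))  # redirect
print(ternary_match(0b11, ENTRIES))  # redirect
print(ternary_match(0b10, ENTRIES))  # bypass
```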
Forward direction Traffic Handling:
Reverse direction Traffic Handling: For the return traffic,
In the example of
In the example of a disaster recovery operation that is initiated or occurs (514) at the database application “Prod-DB100” 508 executing at the second site 504, an IP address move is forwarded to the disaster recovery system “DR-DB100” 510. In some embodiments, the IP address of the database application “Prod-DB100” 508 is moved, e.g., via a VMotion operation, to the disaster recovery system “DR-DB100” 510. That is, the virtual machines (VMs) in the EndPoint Group (EPG) of the database application “Prod-DB100” 508 are now moved (e.g., via VMotion) to the disaster recovery system “DR-DB100” 510 of the first site 502.
Because the IP address of the database application “Prod-DB100” 508 is moved to the disaster recovery system “DR-DB100” 510, the ASIC or logic-assisted operation would generate the same source and destination pctags for the disaster recovery system “DR-DB100” 510. Thus, the same arithmetic or bitwise operator (e.g., S<D) used to determine the routing to an inserted network service for a given policy or contract would yield the same result for the disaster recovery system “DR-DB100” 510.
Post-Migration—forward direction. As an example, a packet is sent from the production application “Prod-App100” 506 of the first site 502 to the database application, now the disaster recovery system “DR-DB100” 510, also of the first site 502. The ASIC or logic-assisted circuit of a first node within the production application “Prod-App100” 506 determines the source and destination pctags (“S=1000” and “D=2000”), e.g., derived or determined based on an identifier of the source or destination device (e.g., IP address of the source or destination devices). The ASIC or logic-assisted circuit of the first node within the production application “Prod-App100” 506 then performs an arithmetic operation on the tag values for the source and destination (shown as “S<D,” which is TRUE) and evaluates the destination address of the forward-direction traffic to determine whether the destination, a destination node linked to a second node of the disaster recovery system “DR-DB100” 510, is a local network device of the first site 502 (which is TRUE). The values of the source tag, the destination tag, the output of the arithmetic operation (520), and the destination-is-local flag (522) are used, via a TCAM lookup operation, to match to rule 514 that directs the data plane of the first node within the production application “Prod-App100” 506 to forward the packet to the second node of the disaster recovery system “DR-DB100” 510 through the inserted network service “FW1” 516 (e.g., to the inserted network service “FW1” 516 at the next hop with a destination address of a computing resource located at the second node of the disaster recovery system “DR-DB100” 510).
The packet is received at the second node of the disaster recovery system “DR-DB100” 510 at local site 502 from the production application “Prod-App100” 506 of the first site 502. The ASIC or logic-assisted circuit of a second node within the disaster recovery system “DR-DB100” 510 determines an inserted service has already been applied at the local site 502 (e.g., based on previously-applied “PA” flag 515) and bypasses the evaluation for additional inserted service operations.
Same Post-Migration—Reverse direction. For the reverse traffic direction, a second packet is sent from the database application, now the disaster recovery system “DR-DB100” 510 of the first site 502, to the production application “Prod-App100” 506, also of the first site 502. The ASIC or logic-assisted circuit of a second node within the disaster recovery system “DR-DB100” 510 determines the source and destination pctags (“S=2000” and “D=1000”), e.g., derived or determined based on an identifier of the source or destination device (e.g., IP address of the source or destination devices). The ASIC or logic-assisted circuit of the second node within the disaster recovery system “DR-DB100” 510 then performs an arithmetic operation on the tag values for the source and destination (shown as “S<D,” which is FALSE) and evaluates the destination address of the reverse-direction traffic to determine if the destination is a local network device of the first site 502 (which is TRUE). The values of the source tag, the destination tag, the output of the arithmetic operation (520), and the destination-is-local flag (522) are used, via a TCAM lookup operation, to match to rule 514 that directs the data plane of the second node within the disaster recovery system “DR-DB100” 510 to forward the second packet to a third node of the production application “Prod-App100” 506 through the inserted network service “FW1” 516 (e.g., to the inserted network service “FW1” 516 at a next hop with a destination address of a computing resource located at the third node of the production application “Prod-App100” 506).
Upon receipt of the packet at the production application “Prod-App100” 506 of the first site 502, the ASIC or logic-assisted circuit of a first node within the production application “Prod-App100” 506 determines an inserted service has already been applied (e.g., via flag 515) at the local site 502 and bypasses the evaluation for additional inserted service operations.
Pre-Migration—forward direction. Still in this example, prior to the disaster recovery migration, a packet is sent from the production application “Prod-App100” 506 of the first site 502 to the database application “PROD-DB100” 508 of the second site 504. This has a similar network configuration as shown in
The packet is received at the second node of the database application “PROD-DB100” 508 at the local site 504 from the production application “Prod-App100” 506 of the first site 502. The ASIC or logic-assisted circuit of the second node within the database application “PROD-DB100” 508 determines the source and destination pctags (“S=1000” and “D=2000”), e.g., derived or determined based on an identifier of the source or destination device (e.g., IP address of the source or destination devices). The ASIC or logic-assisted circuit of the second node within the database application “PROD-DB100” 508 then performs an arithmetic operation on the tag values for the source and destination (shown as “S<D,” which is TRUE) and evaluates the destination address of the forward-direction traffic to determine whether the destination, a destination node linked to the second node of the database application “PROD-DB100” 508, is a local network device of the second site 504 (which is TRUE). The values of the source tag, the destination tag, the output of the arithmetic operation (520), and the destination-is-local flag (522) are used, via a TCAM lookup operation, to match to rule 514 that directs the data plane of the second node of the database application “PROD-DB100” 508 to forward the packet through the inserted network service “FW2” 528 (e.g., to the inserted network service “FW2” 528 at a next hop with a destination address of a computing resource located at the second node of the database application “PROD-DB100” 508).
Same Pre-Migration—reverse direction. For the reverse traffic direction, a second packet is sent from the database application “PROD-DB100” 508 of the second site 504 back to the production application “Prod-App100” 506 of the first site 502. The ASIC or logic-assisted circuit of a second node within the database application “PROD-DB100” 508 determines the source and destination pctags (“S=2000” and “D=1000”), e.g., derived or determined based on an identifier of the source or destination device (e.g., IP address of the source or destination devices). The ASIC or logic-assisted circuit of the second node within the database application “PROD-DB100” 508 then performs an arithmetic operation on the tag values for the source and destination (shown as “S<D,” which is FALSE) and evaluates the destination address of the reverse-direction traffic to determine if the destination is a local network device (which is FALSE). The values of the source tag, the destination tag, the output of the arithmetic operation (530), and the destination-is-local flag (532) are used, via a TCAM lookup operation, to match to rule 532 that directs the data plane of the second node within the database application “PROD-DB100” 508 to forward the second packet to the production application “Prod-App100” 506 through the inserted network service “FW2” 528 executing at the second site 504.
Upon receipt of the packet at the production application “Prod-App100” 506 of the first site 502, the ASIC or logic-assisted circuit of a first node within the production application “Prod-App100” 506 determines an inserted service has already been applied (e.g., via flag 515) at the local site 502 and bypasses the evaluation for additional inserted service operations.
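For illustration, the pre- and post-migration walkthrough above can be reproduced with a short simulation that assumes pctags S=1000 for “Prod-App100” and D=2000 for “Prod-DB100”/“DR-DB100,” with “FW1” at the first site and “FW2” at the second site; the function names and the simplified two-hop traversal are assumptions:

```python
# Simplified two-hop model: the packet is evaluated at the source-site node and,
# for inter-site traffic, at the destination-site node; the first node whose
# lookup selects "redirect" sends the packet through its local firewall.

def node_action(src_tag: int, dst_tag: int, dest_is_local: bool) -> str:
    if not (src_tag < dst_tag) or dest_is_local:
        return "redirect"
    return "forward"

def firewall_used(src_site: str, dst_site: str,
                  src_tag: int, dst_tag: int, fw_by_site: dict) -> str:
    hops = [(src_site, src_site == dst_site)]
    if src_site != dst_site:
        hops.append((dst_site, True))          # destination is local at its own site
    for site, dest_is_local in hops:
        if node_action(src_tag, dst_tag, dest_is_local) == "redirect":
            return fw_by_site[site]
    return "none"

FW = {"site1": "FW1", "site2": "FW2"}

# Pre-migration: Prod-App100 at site1, Prod-DB100 at site2 -> FW2 in both directions.
print(firewall_used("site1", "site2", 1000, 2000, FW))   # FW2 (forward)
print(firewall_used("site2", "site1", 2000, 1000, FW))   # FW2 (reverse)

# Post-migration: DR-DB100 now at site1 -> FW1 in both directions.
print(firewall_used("site1", "site1", 1000, 2000, FW))   # FW1 (forward)
print(firewall_used("site1", "site1", 2000, 1000, FW))   # FW1 (reverse)
```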
Similar modeling of traffic flow may be applied to other inter-site network operations.
It should be understood that the various techniques and modules described herein, including the control-plane-data-plane interface transport module, may be implemented in connection with hardware components or software components or, where appropriate, with a combination of both. Illustrative types of hardware components that can be used include Field-programmable Gate Arrays (FPGAs), Application-specific Integrated Circuits (ASICs), Application-specific Standard Products (ASSPs), System-on-a-chip systems (SOCs), Complex Programmable Logic Devices (CPLDs), etc. The methods and apparatus of the presently disclosed subject matter, or certain aspects or portions thereof, may take the form of program code (i.e., instructions) embodied in tangible media, such as floppy diskettes, CD-ROMs, hard drives, or any other machine-readable storage medium where, when the program code is loaded into and executed by a machine, such as a computer, the machine becomes an apparatus for practicing the presently disclosed subject matter.
Embodiments of the network device may be implemented, in whole or in part, in virtualized network hardware in addition to physical hardware.
Although exemplary implementations may refer to utilizing aspects of the presently disclosed subject matter in the context of one or more stand-alone computer systems, the subject matter is not so limited but rather may be implemented in connection with any computing environment, such as a network or distributed computing environment. Still further, aspects of the presently disclosed subject matter may be implemented in or across a plurality of processing chips or devices, and storage may similarly be effected across a plurality of devices. While various embodiments of the present disclosure have been described above, it should be understood that they have been presented by way of example only, and not limitation. It will be apparent to persons skilled in the relevant art that various changes in form and detail can be made therein without departing from the spirit and scope of the present disclosure. Thus, the breadth and scope of the present disclosure should not be limited by any of the above-described exemplary embodiments but should be defined only in accordance with the following claims and their equivalents.