MAC address synchronization in a fabric switch

Information

  • Patent Grant
  • Patent Number
    9,413,691
  • Date Filed
    Monday, January 13, 2014
  • Date Issued
    Tuesday, August 9, 2016
Abstract
One embodiment of the present invention provides a system for facilitating synchronization of MAC addresses in a fabric switch. During operation, the system divides a number of media access control (MAC) addresses associated with devices coupled to an interface of the switch into a number of chunks. The system then computes a checksum for a respective chunk of MAC addresses. In addition, the system broadcasts MAC address information of the chunk to facilitate MAC address synchronization in a fabric switch of which the switch is a member, and manages the chunks and their corresponding checksums, thereby correcting an unsynchronized or race condition in the fabric switch.
Description
BACKGROUND

1. Field


The present disclosure relates to network management. More specifically, the present disclosure relates to a method and system for distributed management of layer-2 address table entries.


2. Related Art


The growth of the Internet has brought with it an increasing demand for bandwidth. As a result, equipment vendors race to build larger and faster networks with a large number of switches, each capable of supporting a large number of end devices, to move more traffic efficiently. However, managing the forwarding entries associated with these end devices becomes complex when the forwarding information is distributed across the switches. In particular, because forwarding table updates are distributed, it is essential to maintain consistency across the network.


Meanwhile, layer-2 (e.g., Ethernet) switching technologies continue to evolve. More routing-like functionalities, which have traditionally been the characteristics of layer-3 (e.g., Internet Protocol or IP) networks, are migrating into layer-2. Notably, the recent development of the Transparent Interconnection of Lots of Links (TRILL) protocol allows Ethernet switches to function more like routing devices. TRILL overcomes the inherent inefficiency of the conventional spanning tree protocol, which forces layer-2 switches to be coupled in a logical spanning-tree topology to avoid looping. TRILL allows routing bridges (RBridges) to be coupled in an arbitrary topology without the risk of looping by implementing routing functions in switches and including a hop count in the TRILL header.


While TRILL brings many desirable features to layer-2 networks, some issues remain unsolved when a distributed yet consistent mechanism to clear entries from a layer-2 address table is desired.


SUMMARY

One embodiment of the present invention provides a system for facilitating synchronization of MAC addresses in a fabric switch. During operation, the system divides a number of media access control (MAC) addresses associated with devices coupled to an interface of the switch into a number of chunks. The system then computes a checksum for a respective chunk of MAC addresses. In addition, the system broadcasts MAC address information of the chunk to facilitate MAC address synchronization in a fabric switch of which the switch is a member, and manages the chunks and their corresponding checksums, thereby correcting an unsynchronized or race condition in the fabric switch.


In a variation on this embodiment, managing the chunks and their corresponding checksum involves refraining from sending an updated checksum of a respective chunk after at least one MAC address within that chunk has been updated, if an update to the corresponding chunk has been received from another switch.


In a variation on this embodiment, the system sends a checksum of a respective chunk to other switches in the fabric switch after a guard timer has expired.


In a variation on this embodiment, the system sends the content of a chunk to a remote switch in response to a message from the remote switch indicating an unsynchronized condition associated with the chunk, if an update to the chunk has not been received by the local switch within a past predetermined time window.


In a variation on this embodiment, the system refrains from comparing a new checksum received for a chunk from an owner switch of the chunk, if an update to the chunk has been received from another switch other than the owner switch within a past predetermined time window.


In a variation on this embodiment, the checksum for a respective chunk is related to the content of that chunk but not related to the order of the MAC addresses in that chunk.


In a further variation on this embodiment, each chunk of MAC addresses associated with the interface includes MAC addresses with the same last n bits, wherein n is a predetermined number.


In a further variation on this embodiment, any chunking method that depends only on the content of the chunk can be used.





BRIEF DESCRIPTION OF THE FIGURES


FIG. 1 illustrates an exemplary TRILL network with distributed forwarding information, in accordance with an embodiment of the present invention.



FIG. 2A illustrates an exemplary network where a virtual RBridge identifier is assigned to two physical TRILL RBridges which are coupled to end devices via virtually aggregated links, in accordance with an embodiment of the present invention.



FIG. 2B illustrates an exemplary ownership bitmap for a layer-2 forwarding table entry, wherein the forwarding table entry corresponds to an end device associated with a virtual RBridge, in accordance with an embodiment of the present invention.



FIG. 2C illustrates an exemplary scenario where MAC address update messages can reach a node out of order.



FIG. 3 presents a flow chart illustrating the process of an owner RBridge sending out a MAC update, in accordance with one embodiment of the present invention.



FIG. 4 presents a flow chart illustrating the process of a receiver RBridge receiving a chunk checksum update, in accordance with one embodiment of the present invention.



FIG. 5 illustrates an exemplary architecture of a switch with distributed forwarding table update capability, in accordance with an embodiment of the present invention.





DETAILED DESCRIPTION

The following description is presented to enable any person skilled in the art to make and use the invention, and is provided in the context of a particular application and its requirements. Various modifications to the disclosed embodiments will be readily apparent to those skilled in the art, and the general principles defined herein may be applied to other embodiments and applications without departing from the spirit and scope of the present invention. Thus, the present invention is not limited to the embodiments shown, but is to be accorded the widest scope consistent with the claims.


In embodiments of the present invention, the problem of maintaining consistent layer-2 (L2) media access control (MAC) address tables in a fabric switch is solved by dividing the MAC addresses maintained at a respective member switch into a number of chunks, generating a signature (such as a checksum) for each chunk, and comparing these signatures in the process of updating the MAC address tables.


In general, consider a fabric switch, which includes a number of physical member switches and functions as a single, logical switch (for example, as one logical L2 switch), such as Brocade's virtual cluster switch (VCS). Each member switch (which in the case of VCS can be a Transparent Interconnection of Lots of Links (TRILL) routing bridge (RBridge)) maintains a MAC address database for L2 forwarding. Ideally, this MAC address database is the same at every member switch for the fabric switch to function properly. In order for the MAC address database to be consistent across all member switches in the fabric, typically, a software-based MAC distribution method is used to distribute MAC addresses learned at a particular member switch to all other member switches in the fabric. However, under different circumstances the MAC address database at various nodes could go out of synchronization (such as when an edge device is moved from one member switch to another), which can result in erroneous forwarding. The method and system disclosed herein facilitate detection and correction of such potential erroneous forwarding.


Traditionally, for bridged networks there is no notion of identical bridge tables at all bridges in the network, because bridges learn MAC addresses as traffic passes through them. One case in which MAC databases are updated from node to node through software (as opposed to the natural learning behavior) is multi-chassis trunking (MCT), also called virtual link aggregation (vLAG), where two or more RBridges form a virtual RBridge to facilitate link aggregation. In such scenarios the MAC synchronization problem is considerably simpler, because typical vendors do not support MCT of more than two physical switches, and there is no need to synchronize MACs learned on the MCT outside the MCT. Consequently, the possible race conditions are much more limited. L2 switch stacking solutions also work on the notion of a stack master, which is responsible for disseminating the MAC information across the stack member switches, hence the synchronization problems can be avoided. However, in a fabric switch, because MAC addresses learned at different member switches need to be distributed throughout the fabric, and because edge devices are free to move from one member switch to another, race conditions often occur.


Note that in this disclosure, a member switch of a fabric switch is referred to as an RBridge, although embodiments of the present invention are not limited to TRILL implementations.


The MAC database as a whole has many owners for its various parts, because MAC addresses can be learned at different member switches. Specifically, an RBridge owns a MAC address when the MAC is behind an edge L2 interface of that RBridge. All physical L2 interface MACs are owned by the corresponding RBridge. For a MAC address behind a vLAG, one could argue that all the member RBridges of that vLAG own the MAC. In this disclosure, the RBridges that actually send out software update messages for the MAC address are considered its owners. It is possible that multiple members of the vLAG could send out a MAC address database update (this can happen if the MAC address is seen for the first time simultaneously by multiple members of the vLAG). Therefore, ownership of a MAC behind a vLAG is manifested by a set bit in the bitmap representing the member RBridges of the vLAG. The degenerate case of this example is when the bitmap is empty, which can happen when all the original nodes of the vLAG have left the vLAG. The current vLAG primary is considered the owner of such MAC addresses.


Regardless of which switch is the owner of a given MAC, the owner synchronizes the MAC with all other members of the fabric. Hence in a steady state it is expected that all the nodes have a common view of the entire MAC database. Due to different race conditions it is possible that this view is disrupted.



FIG. 1 illustrates an exemplary TRILL network with distributed MAC forwarding information, in accordance with an embodiment of the present invention. As illustrated in FIG. 1, a TRILL network 100 includes RBridges 101, 102, 103, 104, and 105. End devices 112 and 114 are coupled to RBridge 101 and end devices 116 and 118 are coupled to RBridge 105. RBridges in network 100 use edge ports to communicate to end devices and TRILL ports to communicate to other RBridges. For example, RBridge 101 is coupled to end devices 112 and 114 via edge ports and to RBridges 102, 103, and 105 via TRILL ports.


In some embodiments, TRILL network 100 may be an Ethernet fabric switch. In some further embodiments, the Ethernet fabric switch may be a virtual cluster switch. In an exemplary Ethernet fabric switch, any number of RBridges in any arbitrary topology may logically operate as a single switch. Any new RBridge may join or leave the Ethernet fabric switch in “plug-and-play” mode without any manual configuration.


During operation, in FIG. 1, RBridge 101 dynamically learns the MAC addresses of end devices 112 and 114 when the devices send frames through RBridge 101 and stores them in a local forwarding table. In some embodiments, RBridge 101 distributes the learned MAC addresses to all other RBridges in network 100. Similarly, RBridge 105 learns the MAC addresses of end devices 116 and 118, and distributes the information to all other RBridges.


In a virtual link aggregation, multiple RBridges can learn the MAC address of an end device and may become owners of the forwarding entries associated with that MAC address. The ownership association and the two-tier clear command can maintain consistency in forwarding tables for such multiple-ownership entries as well. FIG. 2A illustrates an exemplary network where a virtual RBridge identifier is assigned to two physical TRILL RBridges which are coupled to end devices via virtually aggregated links, in accordance with an embodiment of the present invention. As illustrated in FIG. 2A, a TRILL network 200 includes RBridges 201, 202, 203, 204, and 205. RBridge 205 is coupled to an end device 232. End devices 222 and 224 are both dual-homed and coupled to RBridges 201 and 202. The goal is to allow a dual-homed end station to use both physical links to two separate TRILL RBridges as a single, logical aggregate link, with the same MAC address. Such a configuration would achieve true redundancy and facilitate fast protection switching.


RBridges 201 and 202 are configured to operate in a special “trunked” mode for end devices 222 and 224. End devices 222 and 224 view RBridges 201 and 202 as a common virtual RBridge 210, with a corresponding virtual RBridge identifier. Dual-homed end devices 222 and 224 are considered to be logically coupled to virtual RBridge 210 via logical links represented by dotted lines. Virtual RBridge 210 is considered to be logically coupled to both RBridges 201 and 202, optionally with zero-cost links (also represented by dotted lines). RBridges which participate in link aggregation and form a virtual RBridge are referred to as “partner RBridges.”


When end device 222 sends a packet to end device 232 via ingress RBridge 201, RBridge 201 learns the MAC address of end device 222 and distributes the learned MAC address to all other RBridges in network 200. All other RBridges update their respective forwarding tables with an entry corresponding to end device 222 and assign RBridge 201 as the owner of the entry. As end device 222 is coupled to RBridge 202, end device 222 may send a packet to end device 232 via RBridge 202 as well. Consequently, RBridge 202 learns the MAC address of end device 222 and distributes the learned MAC address to all other RBridges in network 200. All other RBridges then add RBridge 202 as an owner of the entry associated with end device 222 as well.



FIG. 2B illustrates an exemplary ownership bitmap for a layer-2 forwarding table entry, wherein the forwarding table entry corresponds to an end device associated with a virtual RBridge, in accordance with an embodiment of the present invention. The two most significant bits of ownership bitmap 250 in FIG. 2B are associated with RBridges 201 and 202, respectively. In this example, only the two most significant bits of bitmap 250 are set. Hence, bitmap 250 represents an ownership by RBridges 201 and 202, and can be used to indicate the ownership of the MAC addresses of end devices 222 and 224 in respective forwarding tables in all RBridges in network 200.


In FIG. 2A, during operation, a first command to clear dynamically learned MAC addresses from forwarding tables is issued from RBridge 203. Upon receiving the first clear command, RBridge 201 issues a second clear command to terminate ownership of entries owned by RBridge 201. When other RBridges receive this second command, they remove the ownership associations between RBridge 201 and the MAC addresses of end devices 222 and 224. In some embodiments, the ownership association is removed by clearing the bit corresponding to RBridge 201 in an ownership bitmap. However, as forwarding entries associated with end devices 222 and 224 are also owned by RBridge 202, other RBridges do not remove these entries from local forwarding tables. Upon receiving the first clear command, RBridge 202 also issues the second clear command to terminate ownership of entries owned by RBridge 202. When other RBridges receive this second command from RBridge 202, the ownership associations between RBridge 202 and the MAC addresses of end devices 222 and 224 are terminated. As no other RBridge owns the entries for the MAC addresses of end devices 222 and 224, they are removed from the respective forwarding tables in all RBridges in network 200.
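To make the ownership bookkeeping concrete, the following is a minimal sketch in Python of how an ownership bitmap could gate entry removal during the two-tier clear; the bit assignments and helper names are illustrative assumptions, not taken from the patent text.

```python
# Sketch of an ownership bitmap for a forwarding entry: one bit per
# RBridge, set while that RBridge owns the entry. The entry itself is
# removed only when the last owner bit is cleared. Bit positions are
# illustrative assumptions.
OWNER_BIT = {"RBridge-201": 0, "RBridge-202": 1}

def set_owner(bitmap, rbridge):
    return bitmap | (1 << OWNER_BIT[rbridge])

def clear_owner(bitmap, rbridge):
    return bitmap & ~(1 << OWNER_BIT[rbridge])

bitmap = 0
bitmap = set_owner(bitmap, "RBridge-201")    # MAC update from RBridge 201
bitmap = set_owner(bitmap, "RBridge-202")    # second owner learned via vLAG

bitmap = clear_owner(bitmap, "RBridge-201")  # second clear command from 201
assert bitmap != 0   # entry is kept: RBridge 202 still owns it
bitmap = clear_owner(bitmap, "RBridge-202")  # second clear command from 202
assert bitmap == 0   # no owners remain: entry is removed from the table
```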


To address these MAC synchronization problems, embodiments of the present invention divide all the MAC addresses learned at a respective interface on an RBridge into a number of content-dependent chunks, and generate a checksum for each chunk so that these checksums can be compared to determine whether a potential race condition has occurred. The chunking scheme also allows the detection and repair schemes to scale at various extremes (such as a large number of MACs on a single port and/or a single VLAN). In general, the chunking scheme produces a reasonable chunk size that is neither too large nor too small. In addition, the chunks are content dependent, but are independent of the order of the MAC addresses within them.


In one embodiment, the following chunking scheme is used on each RBridge in the fabric. First, an RBridge, referred to as R1, chooses a local L2 interface, referred to as I1. Note that a vLAG of which R1 is a part is also considered a local L2 interface for R1. Next, consider the set of MACs, referred to as {S}, learned on I1 which are owned by R1. In the case of a vLAG, the vLAG primary switch can further partition this set of MACs into two sets: one for the MACs that the primary switch really owns (e.g., physically coupled MACs), and one for the MACs whose bit associated with the primary switch is not set, where R1 just happens to "own" the MAC because it is the current vLAG primary switch.


The mechanism to divide {S} into chunks is content dependent. Consider each of the sets above. Take, for example, the least significant n bits (say n=4) of each MAC, and based on these n bits divide {S} into 2^n (in this case, 16) disjoint sets (chunks). On average, this scheme results in a chunk size that is 1/16 the size of {S}. Note that each member switch of the fabric can independently identify a respective chunk by the 3-tuple <RBridge-id, interface-id, value-of-last-4-bits-of-MAC>. This chunk identifier can be sent along with the chunk checksum, as described below.
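As an illustration, here is a minimal Python sketch of this chunking scheme; the function name, data layout, and example MACs are illustrative assumptions, not taken from the patent text.

```python
# Sketch of the content-dependent chunking scheme: the MACs learned on an
# interface are partitioned by their least significant n bits, and each
# chunk is keyed by <RBridge-id, interface-id, value-of-last-n-bits>.
from collections import defaultdict

def chunk_macs(rbridge_id, interface_id, macs, n=4):
    """Divide a set of 6-byte MAC addresses into 2**n disjoint chunks."""
    chunks = defaultdict(set)
    mask = (1 << n) - 1
    for mac in macs:
        last_bits = mac[-1] & mask  # least significant n bits of the MAC
        chunks[(rbridge_id, interface_id, last_bits)].add(mac)
    return chunks

macs = {bytes.fromhex("02004c4f4f50"), bytes.fromhex("02004c4f4f51")}
for chunk_id, members in sorted(chunk_macs("R1", "I1", macs).items()):
    print(chunk_id, [m.hex() for m in members])
```

Because the chunk assignment depends only on the MAC bytes themselves, every member switch derives the same chunk identifier independently, without coordination.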


In order to compare the consistency of MAC addresses maintained at different switches, embodiments of the present invention allow a respective switch to exchange chunk signatures (e.g., checksums). Comparing chunk signatures facilitates detection of MAC address record discrepancies. In general, the checksum of a chunk should exhibit the following properties:

    • (1) The chances of two different MAC address sets producing the same checksum should be very low.
    • (2) The checksum should be computed over the chunk as an unordered set, as opposed to an ordered set. In other words, the computation of the checksum should be commutative.
    • (3) Ideally, the checksum size is significantly smaller than the size of the chunk.
    • (4) The computation load of calculating the checksum should be reasonably low.
    • (5) It is preferable that incremental checksums can be calculated as MAC addresses are added to or removed from the chunk. For example, if C is the checksum of S, and a MAC address {M} is added to S, the system preferably computes the new checksum of {S}∪{M} incrementally from C. Likewise, for deletion of a MAC address from S, the system also calculates the new checksum incrementally, without having to re-compute the checksum for the entire set. This property makes the checksum computation a "pay-as-you-go" scheme and saves on computational resources.


Various checksum computation algorithms can be used. In one embodiment, the system uses a modulo-prime multiplication and inverse method. With this method, the system can attain the commutative property for the checksum and perform incremental computation. A prime that can be used here is 2^31−1 (a Mersenne prime). The system can perform the computation on each of the corresponding bytes of the MACs. Specifically, let M1=M10:M11:M12:M13:M14:M15 and M2=M20:M21:M22:M23:M24:M25. The system then computes C1=f(M10, M20), C2=f(M11, M21), . . . , C6=f(M15, M25). In addition, the system breaks up the VLAN ID in a byte-wise manner and computes two additional checksums. The system then stores each of the byte-wise checksum results in 32-bit precision. When a MAC address is added to the set, the system updates the byte-wise checksum using the added MAC address. When a MAC address is deleted from the set, the system updates the checksum using the byte-wise multiplicative inverses of the deleted MAC (each inverse can be more than one byte). In one embodiment, the system can pre-compute and store the multiplicative inverses of 0-255 modulo the prime. The size of this checksum is 4*8=32 bytes. This scheme requires 8*4=32 modulo multiplications per MAC entry that is added to or deleted from a chunk. To get additional uncorrelated hashes, the system can add a seed to each of the bytes, e.g., C1′=f(M10+3, M20+3), etc. If the system uses 4 such seeds, it can obtain 32*4=128 bytes of total checksum.
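A minimal Python sketch of this modulo-prime scheme follows; the +1 byte offset (to avoid zero factors, which have no multiplicative inverse), the class and method names, and the example values are illustrative assumptions, not taken from the patent text.

```python
# Sketch of a commutative, incrementally updatable chunk checksum using
# modulo-prime multiplication over each byte position of the MAC and the
# byte-wise VLAN ID.
P = (1 << 31) - 1  # Mersenne prime 2^31 - 1

def _inv(x):
    # Multiplicative inverse modulo the prime (Fermat's little theorem).
    return pow(x, P - 2, P)

class ChunkChecksum:
    def __init__(self):
        # One running product per byte position: 6 MAC bytes + 2 VLAN bytes.
        self.c = [1] * 8

    def _bytes(self, mac, vlan):
        # 6-byte MAC plus the VLAN ID split byte-wise.
        return list(mac) + [vlan >> 8, vlan & 0xFF]

    def add(self, mac, vlan):
        # Incremental insertion: multiply in each byte (offset by 1).
        for i, b in enumerate(self._bytes(mac, vlan)):
            self.c[i] = (self.c[i] * (b + 1)) % P

    def remove(self, mac, vlan):
        # Incremental deletion: multiply by the inverse of each byte.
        for i, b in enumerate(self._bytes(mac, vlan)):
            self.c[i] = (self.c[i] * _inv(b + 1)) % P

# Commutativity: insertion order does not affect the checksum.
a, b = ChunkChecksum(), ChunkChecksum()
m1, m2 = bytes.fromhex("02004c4f4f01"), bytes.fromhex("02004c4f4f02")
a.add(m1, 10); a.add(m2, 10)
b.add(m2, 10); b.add(m1, 10)
assert a.c == b.c
b.remove(m2, 10)  # incremental delete, no full recomputation needed
```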


Another commutative operation with an inverse is addition (again byte-by-byte over the MAC). During addition of a MAC to the chunk the system does a byte-wise addition to update the checksum, and during deletion of the MAC from the set it does a byte-wise subtraction to update the checksum. In order to strengthen the checksum and reduce the likelihood of collision, the system can also compute sums of squares, cubes, fourth powers, etc. of the bytes of the MAC (a matching byte-wise sum for two sets of MACs does not automatically mean that the byte-wise sums of squares would match, and so on). All these higher powers can be pre-computed and stored (255 values). This provides a good compromise over the multiplication scheme. For the addition, the system can calculate all the results in 32-bit precision; that way, the sum and sum of squares would not hit the 2^32 limit, based on the average size of each set through chunking. This checksum has essentially the same size as the modulo-prime multiplication checksum, except that it is computationally cheaper.
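For comparison, here is a minimal sketch of the addition-based variant, keeping per-byte sums of the first few powers modulo 2^32; the choice of three powers and the function names are illustrative assumptions.

```python
# Sketch of the addition-based checksum: per byte position, keep sums of
# powers of the byte values modulo 2**32. Addition on insert, subtraction
# on delete, so the checksum stays commutative and incremental.
MOD = 1 << 32
POWERS = 3  # sums of first, second, and third powers (illustrative)

def update_power_sums(sums, mac, sign=+1):
    """sums[k][i] holds the sum of (byte i)**(k+1) over the chunk."""
    for i, b in enumerate(mac):
        for k in range(POWERS):
            sums[k][i] = (sums[k][i] + sign * b ** (k + 1)) % MOD

sums = [[0] * 6 for _ in range(POWERS)]
m = bytes.fromhex("02004c4f4f50")
update_power_sums(sums, m)            # add the MAC to the chunk
update_power_sums(sums, m, sign=-1)   # delete it again
assert all(v == 0 for row in sums for v in row)
```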



FIG. 2C illustrates an exemplary scenario where MAC address update messages can reach a node out of order. In this example, let {S} be a chunk of MACs owned by RBridge R1, which tries to ensure that the content of {S} is consistent across the fabric switch. The goal is to allow nodes in the fabric switch to compare the checksum of a respective chunk to determine discrepancies. This approach is valid as long as there has not been any change to {S} caused by nodes other than R1. As long as all changes to {S} come from R1, comparing checksums is meaningful. Otherwise, a checksum mismatch can result from a timing (race) situation. In the example in FIG. 2C, assume that MAC M originally belongs to {S} at R1. Assume that first a new MAC M2 is added to {S}. As a result, R1 sends out a MAC address update, together with the checksum for {S} and the identifier of {S}. Shortly thereafter, the device associated with MAC M moves from R1 to R3. In response, R3 broadcasts a MAC address update to both R1 and R2. Assume that R3's update reaches R2 before R1's update does. Consequently, R2 sees two conflicting update messages for {S}.


One embodiment of the present invention adopts the following procedure to correct the above race condition:

    • The owner (R1) of a set of MACs {S} would send out a new checksum of {S} together with its MAC address update if there has been no change to {S} by another RBridge (R3 in this example) in the last t (e.g., t=3) seconds.
    • The receiver of a checksum does not perform a checksum comparison for {S} if there has been a change to {S} by another RBridge in the last t seconds.
    • If an out-of-sync message reaches R1 within t seconds of a change to {S} by another RBridge, R1 discards this out-of-sync message.
    • In the case of a vLAG, if any MAC address in the set whose associated bits in the ownership bitmap are empty has changed in the last t seconds, the primary switch of the vLAG will not send a checksum for that set. Likewise, the receiver of the checksum will not compare the received checksum if its associated ownership bitmap has changed in the last t seconds.



FIG. 3 presents a flow chart illustrating the process of an owner RBridge sending out a MAC update, in accordance with one embodiment of the present invention. During operation, the owner of a chunk of MAC addresses {S} checks whether any MAC address in {S} has been updated (operation 302). If so, the owner computes a new checksum for {S} (operation 304). Subsequently, the system determines whether there has been any change to {S} received from another RBridge (operation 306). If there has been no such change in the last t seconds, the owner sends out a MAC address update for {S} to other nodes with the checksum of {S} (operation 312).


If there has been at least one change to one or more MACs in {S} in the last t seconds, the system sends out the MAC update for {S} to other nodes without the checksum (operation 308). Subsequently, the system waits for a guard timer to expire (operation 310). In one embodiment, this guard timer can be randomized and is statically set to approximately 30 seconds. After the guard timer expires, the system then broadcasts the checksum for {S} (operation 312).


When there is no update received at the owner node (i.e., the “NO” branch at operation 302), the system by default waits for the guard timer to expire (operation 310). Each time the guard timer expires, the system broadcasts the checksum for {S} (operation 312).
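A minimal Python sketch of this owner-side procedure follows; the timing constants and the helper functions (broadcast_update, broadcast_checksum) are illustrative assumptions, since the patent does not specify an implementation.

```python
# Sketch of the owner-side flow of FIG. 3: attach the checksum to a MAC
# update only when no other RBridge changed {S} in the last t seconds,
# and broadcast the checksum on every guard-timer expiry.
import time

T_SUPPRESS = 3.0    # t seconds: suppress checksum after a foreign change
GUARD_TIMER = 30.0  # approximate guard-timer period

class ChunkOwner:
    def __init__(self, chunk_id, checksum, broadcast_update, broadcast_checksum):
        self.chunk_id = chunk_id
        self.checksum = checksum        # incrementally maintained checksum
        self.last_foreign_change = 0.0  # last change to {S} by another RBridge
        self.broadcast_update = broadcast_update
        self.broadcast_checksum = broadcast_checksum

    def note_foreign_change(self):
        self.last_foreign_change = time.monotonic()

    def on_local_mac_update(self, update):
        # Operations 302-308/312: send the MAC update; attach the new
        # checksum only if {S} has been quiet for the last t seconds.
        quiet = time.monotonic() - self.last_foreign_change > T_SUPPRESS
        self.broadcast_update(self.chunk_id, update,
                              self.checksum if quiet else None)

    def on_guard_timer(self):
        # Operations 310-312: broadcast the current checksum for {S}
        # each time the guard timer expires.
        self.broadcast_checksum(self.chunk_id, self.checksum)
```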



FIG. 4 presents a flow chart illustrating the process of a receiver RBridge receiving a chunk checksum update, in accordance with one embodiment of the present invention. During operation, a receiver RBridge receives the checksum for a chunk {S} from the owner of {S} (operation 402). The receiving node then determines whether it has received a change to {S} from another node in the last t seconds (operation 404). If so, the receiving node does nothing. Otherwise, the receiving node compares its old checksum with the newly received checksum (operation 406). The receiving node then determines whether the old checksum is different from the new checksum (operation 408). If the two are the same, the system does nothing. If they are different, the receiving node sends a request to the owner, together with the discrepancy (operation 410). Subsequently, if the owner sends a response (e.g., the entire content of {S}) to the receiving node, the receiving node repairs its records for {S} (operation 412).
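The receiver side can be sketched in the same illustrative style; again the helper names are assumptions rather than the patent's implementation.

```python
# Sketch of the receiver-side flow of FIG. 4: skip the comparison when a
# non-owner changed the chunk recently, otherwise report a mismatch.
import time

T_SUPPRESS = 3.0

class ChunkReceiver:
    def __init__(self, send_out_of_sync):
        self.local_checksums = {}      # chunk_id -> locally computed checksum
        self.last_foreign_change = {}  # chunk_id -> time of a non-owner change
        self.send_out_of_sync = send_out_of_sync

    def on_checksum(self, chunk_id, new_checksum):
        # Operation 404: a change from another node within the last t
        # seconds means a race may be in flight, so do nothing.
        changed_at = self.last_foreign_change.get(chunk_id, 0.0)
        if time.monotonic() - changed_at <= T_SUPPRESS:
            return
        # Operations 406-410: on a mismatch, report the discrepancy to the
        # owner and await the chunk content for repair (operation 412).
        if self.local_checksums.get(chunk_id) != new_checksum:
            self.send_out_of_sync(chunk_id)
```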


To repair the inconsistent records of {S} at a remote node, once the owner of a set of MACs receives an out-of-sync message from the remote node, it can trigger a few directed queries to reconfirm that the mismatch is not a transient race condition; the rules for when to send the checksum still follow the false-positive-reduction heuristics mentioned above. Alternatively, it can track the number of consecutive out-of-sync reports for a chunk from a remote node. If this number exceeds a threshold, the owner unicasts the contents of that chunk to that remote node.


The receiver of a chunk of MACs {S} then applies the difference relative to its local version of the chunk, referred to as {S′}. To avoid unnecessary data-path effects, this can be done by adding the entries {S−S′}, deleting the entries {S′−S}, and performing no operation for the entries {S∩S′}.
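This repair step maps directly onto set operations, as in the following short sketch (function names are illustrative assumptions):

```python
# Sketch of the repair step: apply only the difference between the
# owner's chunk S and the local version S', so unchanged entries (the
# intersection) cause no data-path churn.
def repair_chunk(s, s_prime, add_entry, delete_entry):
    for mac in s - s_prime:   # entries {S - S'}: missing locally, add them
        add_entry(mac)
    for mac in s_prime - s:   # entries {S' - S}: stale locally, delete them
        delete_entry(mac)
    # entries in the intersection {S ∩ S'} need no operation
```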


Exemplary Switch System



FIG. 5 illustrates an exemplary architecture of a switch with distributed forwarding table update capability, in accordance with an embodiment of the present invention. In this example, an RBridge 500 includes a number of edge ports 502 and TRILL ports 504, a TRILL management module 520, an ownership module 530, an Ethernet frame processor 510, and a storage 550. TRILL management module 520 further includes a TRILL header processing module 522 and a notification module 526.


TRILL ports 504 include inter-switch communication channels for communication with one or more RBridges. These inter-switch communication channels can be implemented via a regular communication port and based on any open or proprietary format. Furthermore, the inter-switch communication between RBridges is not required to be direct port-to-port communication.


During operation, edge ports 502 receive frames from (and transmit frames to) end devices. Ethernet frame processor 510 extracts and processes header information from the received frames. From the extracted header, RBridge 500 learns the MAC addresses of end devices. Ownership module 530 creates an ownership association between the RBridge and the learned MAC addresses. Notification module 526 creates notification messages about the ownership association. TRILL header processing module 522 encapsulates the notification messages in TRILL packets and forwards the notification to all other RBridges.


In some embodiments, RBridge 500 may participate in a virtual link aggregation and form a virtual RBridge, wherein TRILL management module 520 further includes a virtual RBridge configuration module 524, and ownership module 530 further includes an age-out control module 536 and a MAC address management module 537. TRILL header processing module 522 generates the TRILL header and outer Ethernet header for ingress frames corresponding to the virtual RBridge. Virtual RBridge configuration module 524 manages the communication with RBridges associated with the virtual RBridge and handles various inter-switch communications, such as link and node failure notifications. Virtual RBridge configuration module 524 allows a user to configure and assign the identifier for the virtual RBridges. Furthermore, age-out control module 536 handles aging out of forwarding entries associated with dynamically learned MAC addresses from the virtual link aggregation.


MAC address management module 537 can include a chunking module, a checksum module, and a MAC synchronization module. The chunking module is responsible for dividing MAC addresses into chunks. The checksum module is responsible for computing checksums for the chunks. The MAC synchronization module is responsible for performing the MAC synchronization methods described above.


In some embodiments, RBridge 500 is in an Ethernet fabric switch, and may include a virtual switch management module 540 and a logical switch 542. Virtual switch management module 540 maintains a configuration database in storage 550 that maintains the configuration state of every switch within the fabric switch. Virtual switch management module 540 also maintains the state of logical switch 542, which is used to join other fabric switches. In some embodiments, logical switch 542 can be configured to operate in conjunction with Ethernet frame processor 510 as a logical Ethernet switch.


Note that the above-mentioned modules can be implemented in hardware as well as in software. In one embodiment, these modules can be embodied in computer-executable instructions stored in a memory which is coupled to one or more processors in RBridge 500. When executed, these instructions cause the processor(s) to perform the aforementioned functions.


In summary, embodiments of the present invention provide a switch, a method and a system for distributed management of layer-2 address table entries. In one embodiment, the switch includes an ownership management mechanism and a notification mechanism. The ownership management mechanism maintains a local ownership association between the switch and a media access control (MAC) address learned at the switch, and terminates the local ownership association for the MAC address. The notification mechanism generates a first notification specifying the local ownership association and a second notification specifying the termination of the local ownership association.


The methods and processes described herein can be embodied as code and/or data, which can be stored in a computer-readable non-transitory storage medium. When a computer system reads and executes the code and/or data stored on the computer-readable non-transitory storage medium, the computer system performs the methods and processes embodied as data structures and code and stored within the medium.


The methods and processes described herein can be executed by and/or included in hardware modules or apparatus. These modules or apparatus may include, but are not limited to, an application-specific integrated circuit (ASIC) chip, a field-programmable gate array (FPGA), a dedicated or shared processor that executes a particular software module or a piece of code at a particular time, and/or other programmable-logic devices now known or later developed. When the hardware modules or apparatus are activated, they perform the methods and processes included within them.


The foregoing descriptions of embodiments of the present invention have been presented only for purposes of illustration and description. They are not intended to be exhaustive or to limit this disclosure. Accordingly, many modifications and variations will be apparent to practitioners skilled in the art. The scope of the present invention is defined by the appended claims.

Claims
  • 1. A switch, comprising: chunking circuitry configured to divide a number of media access control (MAC) addresses associated with devices coupled to an interface of the switch into a number of chunks, wherein a chunk includes a plurality of MAC addresses; checksum circuitry configured to compute a checksum for a respective chunk of MAC addresses; and MAC synchronization circuitry configured to: construct a broadcast message comprising MAC address information of one or more chunks to facilitate MAC address synchronization in a network of interconnected switches, wherein the switch is a member of the network of interconnected switches; and manage the chunks and their corresponding checksums, thereby correcting an unsynchronized or race condition in the network of interconnected switches.
  • 2. The switch of claim 1, wherein while managing the chunks and their corresponding checksums, the MAC synchronization circuitry is configured to refrain from constructing a notification message comprising an updated checksum of a chunk, which includes at least one MAC address that has been updated, in response to detecting an update to the corresponding chunk received from another switch.
  • 3. The switch of claim 1, wherein the MAC synchronization circuitry is further configured to construct a notification message comprising a checksum of a respective chunk in response to detecting that a guard timer has expired, wherein the notification message is destined to other switches in the network of interconnected switches.
  • 4. The switch of claim 1, wherein the MAC synchronization circuitry is further configured to construct a notification message comprising content of a chunk in response to detecting an unsynchronized condition associated with the chunk received from a remote switch, wherein the notification message is destined to the remote switch, and wherein an update to the chunk has not been detected by the switch within a past predetermined time window.
  • 5. The switch of claim 1, wherein the MAC synchronization circuitry is further configured to refrain from comparing a new checksum for a chunk received from an owner switch of the chunk, in response to detecting an update to the chunk received from another switch other than the owner switch within a past predetermined time window.
  • 6. The switch of claim 1, wherein the checksum for a respective chunk is related to the content of that chunk but not related to the order of the MAC addresses in that chunk.
  • 7. The switch of claim 1, wherein each chunk of MAC addresses associated with the interface includes MAC addresses with a same last n bits, wherein n is a predetermined number.
  • 8. A method, comprising: dividing a number of media access control (MAC) addresses associated with devices coupled to an interface of a switch into a number of chunks, wherein a chunk includes a plurality of MAC addresses; computing a checksum for a respective chunk of MAC addresses; constructing a broadcast message comprising MAC address information of one or more chunks to facilitate MAC address synchronization in a network of interconnected switches, wherein the switch is a member of the network of interconnected switches; and managing the chunks and their corresponding checksums, thereby correcting an unsynchronized or race condition in the network of interconnected switches.
  • 9. The method of claim 8, wherein managing the chunks and their corresponding checksums comprises refraining from constructing a notification message comprising an updated checksum of a chunk, which includes at least one MAC address that has been updated, in response to detecting an update to the corresponding chunk received from another switch.
  • 10. The method of claim 8, further comprising constructing a notification message comprising a checksum of a respective chunk in response to detecting that a guard timer has expired, wherein the notification message is destined to other switches in the network of interconnected switches.
  • 11. The method of claim 8, further comprising constructing a notification message comprising content of a chunk in response to detecting an unsynchronized condition associated with the chunk received from a remote switch, wherein the notification message is destined to the remote switch, and wherein an update to the chunk has not been detected by the switch within a past predetermined time window.
  • 12. The method of claim 8, further comprising refraining from comparing a new checksum for a chunk received from an owner switch of the chunk, in response to detecting an update to the chunk received from another switch other than the owner switch within a past predetermined time window.
  • 13. The method of claim 8, wherein the checksum for a respective chunk is related to the content of that chunk but not related to the order of the MAC addresses in that chunk.
  • 14. The method of claim 8, wherein each chunk of MAC addresses associated with the interface includes MAC addresses with a same last n bits, wherein n is a predetermined number.
  • 15. A computing system for a switch, comprising: a processor; and a storage device storing instructions which when executed by the processor cause the processor to perform a method, the method comprising: dividing a number of media access control (MAC) addresses associated with devices coupled to an interface of the switch into a number of chunks, wherein a chunk includes a plurality of MAC addresses; computing a checksum for a respective chunk of MAC addresses; constructing a broadcast message comprising MAC address information of one or more chunks to facilitate MAC address synchronization in a network of interconnected switches, wherein the switch is a member of the network of interconnected switches; and managing the chunks and their corresponding checksums, thereby correcting an unsynchronized or race condition in the network of interconnected switches.
  • 16. The computing system of claim 15, wherein managing the chunks and their corresponding checksums comprises refraining from constructing a notification message comprising an updated checksum of a chunk, which includes at least one MAC address that has been updated, in response to detecting an update to the corresponding chunk received from another switch.
  • 17. The computing system of claim 15, wherein the method further comprises constructing a notification message comprising a checksum of a respective chunk in response to detecting that a guard timer has expired, wherein the notification message is destined to other switches in the network of interconnected switches.
  • 18. The computing system of claim 15, wherein the method further comprises constructing a notification message comprising content of a chunk in response to detecting an unsynchronized condition associated with the chunk received from a remote switch, wherein the notification message is destined to the remote switch, and wherein an update to the chunk has not been detected by the switch within a past predetermined time window.
  • 19. The computing system of claim 15, wherein the method further comprises refraining from comparing a new checksum received for a chunk from an owner switch of the chunk in response to detecting an update to the chunk received from another switch other than the owner switch within a past predetermined time window.
  • 20. The computing system of claim 15, wherein the checksum for a respective chunk is related to the content of that chunk but not related to the order of the MAC addresses in that chunk.
  • 21. The computing system of claim 15, wherein each chunk of MAC addresses associated with the interface includes MAC addresses with a same last n bits, wherein n is a predetermined number.
RELATED APPLICATIONS

This application claims the benefit of U.S. Provisional Application No. 61/751,803, titled “MAC ADDRESS SYNCHRONIZATION IN A FABRIC SWITCH,” by inventor Vardarajan Venkatesh, filed 11 Jan. 2013, which is incorporated by reference herein. The present disclosure is related to U.S. patent application Ser. No. 13/087,239, titled “Virtual Cluster Switching,” by inventors Suresh Vobbilisetty and Dilip Chatwani, filed 14 Apr. 2011; U.S. patent application Ser. No. 12/725,249, titled “Redundant Host Connection in a Routed Network,” by inventors Somesh Gupta, Anoop Ghanwani, Phanidhar Koganti, and Shunjia Yu, filed 16 Mar. 2010; and U.S. patent application Ser. No. 13/365,808, titled “CLEARING FORWARDING ENTRIES DYNAMICALLY AND ENSURING CONSISTENCY OF TABLES ACROSS ETHERNET FABRIC SWITCH,” by inventors Mythilikanth Raman, Mary Manohar, Wei-Chivan Chen, Gangadhar Vegesana, Vardarajan Venkatesh, and Raju Shekarappa, filed 3 Feb. 2012, the disclosures of which are incorporated by reference herein.

US Referenced Citations (415)
Number Name Date Kind
829529 Keathley Aug 1906 A
5390173 Spinney Feb 1995 A
5802278 Isfeld Sep 1998 A
5878232 Marimuthu Mar 1999 A
5959968 Chin Sep 1999 A
5973278 Wehrill, III Oct 1999 A
5983278 Chong Nov 1999 A
6041042 Bussiere Mar 2000 A
6085238 Yuasa Jul 2000 A
6104696 Kadambi Aug 2000 A
6185214 Schwartz Feb 2001 B1
6185241 Sun Feb 2001 B1
6331983 Haggerty Dec 2001 B1
6438106 Pillar Aug 2002 B1
6498781 Bass Dec 2002 B1
6542266 Philips Apr 2003 B1
6633761 Singhal Oct 2003 B1
6771610 Seaman Aug 2004 B1
6873602 Ambe Mar 2005 B1
6937576 DiBenedetto Aug 2005 B1
6956824 Mark Oct 2005 B2
6957269 Williams Oct 2005 B2
6975581 Medina Dec 2005 B1
6975864 Singhal Dec 2005 B2
7016352 Chow Mar 2006 B1
7061877 Gummalla Jun 2006 B1
7173934 Lapuh Feb 2007 B2
7197308 Singhal Mar 2007 B2
7206288 Cometto Apr 2007 B2
7310664 Merchant Dec 2007 B1
7313637 Tanaka Dec 2007 B2
7315545 Chowdhury et al. Jan 2008 B1
7316031 Griffith Jan 2008 B2
7330897 Baldwin Feb 2008 B2
7380025 Riggins May 2008 B1
7397794 Lacroute Jul 2008 B1
7430164 Bare Sep 2008 B2
7453888 Zabihi Nov 2008 B2
7477894 Sinha Jan 2009 B1
7480258 Shuen Jan 2009 B1
7508757 Ge Mar 2009 B2
7558195 Kuo Jul 2009 B1
7558273 Grosser, Jr. Jul 2009 B1
7571447 Ally Aug 2009 B2
7599901 Mital Oct 2009 B2
7688736 Walsh Mar 2010 B1
7688960 Aubuchon Mar 2010 B1
7690040 Frattura Mar 2010 B2
7706255 Kondrat et al. Apr 2010 B1
7716370 Devarapalli May 2010 B1
7720076 Dobbins May 2010 B2
7729296 Choudhary Jun 2010 B1
7787480 Mehta Aug 2010 B1
7792920 Istvan Sep 2010 B2
7796593 Ghosh Sep 2010 B1
7808992 Homchaudhuri Oct 2010 B2
7836332 Hara Nov 2010 B2
7843906 Chidambaram Nov 2010 B1
7843907 Abou-Emara Nov 2010 B1
7860097 Lovett Dec 2010 B1
7898959 Arad Mar 2011 B1
7912091 Krishnan Mar 2011 B1
7924837 Shabtay Apr 2011 B1
7937756 Kay May 2011 B2
7945941 Sinha May 2011 B2
7949638 Goodson May 2011 B1
7957386 Aggarwal Jun 2011 B1
8018938 Fromm Sep 2011 B1
8027354 Portolani Sep 2011 B1
8054832 Shukla Nov 2011 B1
8068442 Kompella Nov 2011 B1
8078704 Lee Dec 2011 B2
8102781 Smith Jan 2012 B2
8102791 Tang Jan 2012 B2
8116307 Thesayi Feb 2012 B1
8125928 Mehta Feb 2012 B2
8134922 Elangovan Mar 2012 B2
8155150 Chung Apr 2012 B1
8160063 Maltz Apr 2012 B2
8160080 Arad Apr 2012 B1
8170038 Belanger May 2012 B2
8175107 Yalagandula May 2012 B1
8194674 Pagel Jun 2012 B1
8195774 Lambeth Jun 2012 B2
8204061 Sane Jun 2012 B1
8213313 Doiron Jul 2012 B1
8213336 Smith Jul 2012 B2
8230069 Korupolu Jul 2012 B2
8239960 Frattura Aug 2012 B2
8249069 Raman Aug 2012 B2
8270401 Barnes Sep 2012 B1
8295291 Ramanathan Oct 2012 B1
8295921 Wang Oct 2012 B2
8301686 Appajodu Oct 2012 B1
8339994 Gnanasekaran Dec 2012 B2
8351352 Eastlake, III Jan 2013 B1
8369335 Jha Feb 2013 B2
8369347 Xiong Feb 2013 B2
8392496 Linden Mar 2013 B2
8462774 Page Jun 2013 B2
8467375 Blair Jun 2013 B2
8520595 Yadav Aug 2013 B2
8599850 Jha et al. Dec 2013 B2
8599864 Chung Dec 2013 B2
8615008 Natarajan Dec 2013 B2
8706905 McGlaughlin Apr 2014 B1
8724456 Hong May 2014 B1
8806031 Kondur Aug 2014 B1
8826385 Congdon Sep 2014 B2
8918631 Kumar Dec 2014 B1
8937865 Kumar Jan 2015 B1
8995272 Agarwal Mar 2015 B2
20010005527 Vaeth Jun 2001 A1
20010055274 Hegge Dec 2001 A1
20020019904 Katz Feb 2002 A1
20020021701 Lavian Feb 2002 A1
20020039350 Wang Apr 2002 A1
20020054593 Morohashi May 2002 A1
20020091795 Yip Jul 2002 A1
20030041085 Sato Feb 2003 A1
20030123393 Feuerstraeter Jul 2003 A1
20030147385 Montalvo Aug 2003 A1
20030174706 Shankar Sep 2003 A1
20030189905 Lee Oct 2003 A1
20030208616 Laing Nov 2003 A1
20030216143 Roese Nov 2003 A1
20040001433 Gram Jan 2004 A1
20040003094 See Jan 2004 A1
20040010600 Baldwin Jan 2004 A1
20040049699 Griffith Mar 2004 A1
20040057430 Paavolainen Mar 2004 A1
20040081171 Finn Apr 2004 A1
20040117508 Shimizu Jun 2004 A1
20040120326 Yoon Jun 2004 A1
20040156313 Hofmeister et al. Aug 2004 A1
20040165595 Holmgren Aug 2004 A1
20040165596 Garcia Aug 2004 A1
20040205234 Barrack Oct 2004 A1
20040213232 Regan Oct 2004 A1
20050007951 Lapuh Jan 2005 A1
20050044199 Shiga Feb 2005 A1
20050074001 Mattes et al. Apr 2005 A1
20050094568 Judd May 2005 A1
20050094630 Valdevit May 2005 A1
20050122979 Gross Jun 2005 A1
20050157645 Rabie et al. Jul 2005 A1
20050157751 Rabie Jul 2005 A1
20050169188 Cometto Aug 2005 A1
20050195813 Ambe Sep 2005 A1
20050207423 Herbst Sep 2005 A1
20050213561 Yao Sep 2005 A1
20050220096 Friskney Oct 2005 A1
20050265356 Kawarai Dec 2005 A1
20050278565 Frattura Dec 2005 A1
20060007869 Hirota Jan 2006 A1
20060018302 Ivaldi Jan 2006 A1
20060023707 Makishima et al. Feb 2006 A1
20060029055 Perera Feb 2006 A1
20060034292 Wakayama Feb 2006 A1
20060036765 Weyman Feb 2006 A1
20060059163 Frattura Mar 2006 A1
20060062187 Rune Mar 2006 A1
20060072550 Davis Apr 2006 A1
20060083254 Ge Apr 2006 A1
20060098589 Kreeger May 2006 A1
20060140130 Kalkunte Jun 2006 A1
20060168109 Warmenhoven Jul 2006 A1
20060184937 Abels Aug 2006 A1
20060221960 Borgione Oct 2006 A1
20060227776 Chandrasekaran Oct 2006 A1
20060235995 Bhatia Oct 2006 A1
20060242311 Mai Oct 2006 A1
20060245439 Sajassi Nov 2006 A1
20060251067 DeSanti Nov 2006 A1
20060256767 Suzuki Nov 2006 A1
20060265515 Shiga Nov 2006 A1
20060285499 Tzeng Dec 2006 A1
20060291388 Amdahl Dec 2006 A1
20060291480 Cho Dec 2006 A1
20070036178 Hares Feb 2007 A1
20070053294 Ho Mar 2007 A1
20070083625 Chamdani Apr 2007 A1
20070086362 Kato Apr 2007 A1
20070094464 Sharma Apr 2007 A1
20070097968 Du May 2007 A1
20070098006 Parry May 2007 A1
20070116224 Burke May 2007 A1
20070116422 Reynolds May 2007 A1
20070156659 Lim Jul 2007 A1
20070177525 Wijnands Aug 2007 A1
20070177597 Ju Aug 2007 A1
20070183313 Narayanan Aug 2007 A1
20070211712 Fitch Sep 2007 A1
20070258449 Bennett Nov 2007 A1
20070274234 Kubota Nov 2007 A1
20070289017 Copeland et al. Dec 2007 A1
20080052487 Akahane Feb 2008 A1
20080056135 Lee Mar 2008 A1
20080065760 Damm Mar 2008 A1
20080080517 Roy Apr 2008 A1
20080095160 Yadav Apr 2008 A1
20080101386 Gray May 2008 A1
20080112400 Dunbar et al. May 2008 A1
20080133760 Berkvens et al. Jun 2008 A1
20080159277 Vobbilisetty Jul 2008 A1
20080172492 Raghunath Jul 2008 A1
20080181196 Regan Jul 2008 A1
20080181243 Vobbilisetty Jul 2008 A1
20080186981 Seto Aug 2008 A1
20080205377 Chao Aug 2008 A1
20080219172 Mohan Sep 2008 A1
20080225852 Raszuk Sep 2008 A1
20080225853 Melman Sep 2008 A1
20080228897 Ko Sep 2008 A1
20080240129 Elmeleegy Oct 2008 A1
20080267179 LaVigne Oct 2008 A1
20080285458 Lysne Nov 2008 A1
20080285555 Ogasahara Nov 2008 A1
20080298248 Roeck Dec 2008 A1
20080304498 Jorgensen Dec 2008 A1
20080310342 Kruys Dec 2008 A1
20090022069 Khan Jan 2009 A1
20090037607 Farinacci Feb 2009 A1
20090042270 Dolly Feb 2009 A1
20090044270 Shelly Feb 2009 A1
20090067422 Poppe Mar 2009 A1
20090067442 Killian Mar 2009 A1
20090079560 Fries Mar 2009 A1
20090080345 Gray Mar 2009 A1
20090083445 Ganga Mar 2009 A1
20090092042 Yuhara Apr 2009 A1
20090092043 Lapuh Apr 2009 A1
20090106405 Mazarick Apr 2009 A1
20090116381 Kanda May 2009 A1
20090129384 Regan May 2009 A1
20090138577 Casado May 2009 A1
20090138752 Graham May 2009 A1
20090161584 Guan Jun 2009 A1
20090161670 Shepherd Jun 2009 A1
20090168647 Holness Jul 2009 A1
20090199177 Edwards Aug 2009 A1
20090204965 Tanaka Aug 2009 A1
20090213783 Moreton Aug 2009 A1
20090222879 Kostal Sep 2009 A1
20090232031 Vasseur Sep 2009 A1
20090245137 Hares Oct 2009 A1
20090245242 Carlson Oct 2009 A1
20090246137 Hadida Oct 2009 A1
20090252049 Ludwig Oct 2009 A1
20090252061 Small Oct 2009 A1
20090260083 Szeto Oct 2009 A1
20090279558 Davis Nov 2009 A1
20090292858 Lambeth Nov 2009 A1
20090316721 Kanda Dec 2009 A1
20090323698 LeFaucheur Dec 2009 A1
20090323708 Ihle Dec 2009 A1
20090327392 Tripathi Dec 2009 A1
20090327462 Adams Dec 2009 A1
20090328392 Tripathi Dec 2009
20100027420 Smith Feb 2010 A1
20100046471 Hattori Feb 2010 A1
20100054260 Pandey Mar 2010 A1
20100061269 Banerjee Mar 2010 A1
20100074175 Banks Mar 2010 A1
20100097941 Carlson Apr 2010 A1
20100103813 Allan Apr 2010 A1
20100103939 Carlson Apr 2010 A1
20100131636 Suri May 2010 A1
20100158024 Sajassi Jun 2010 A1
20100165877 Shukla Jul 2010 A1
20100165995 Mehta Jul 2010 A1
20100168467 Johnston Jul 2010 A1
20100169467 Shukla Jul 2010 A1
20100169948 Budko Jul 2010 A1
20100182920 Matsuoka Jul 2010 A1
20100195489 Zhou Aug 2010 A1
20100215042 Sato Aug 2010 A1
20100215049 Raza Aug 2010 A1
20100220724 Rabie Sep 2010 A1
20100226368 Mack-Crane Sep 2010 A1
20100226381 Mehta Sep 2010 A1
20100246388 Gupta Sep 2010 A1
20100257263 Casado Oct 2010 A1
20100265849 Harel Oct 2010 A1
20100271960 Krygowski Oct 2010 A1
20100272107 Papp Oct 2010 A1
20100281106 Ashwood-Smith Nov 2010 A1
20100284414 Agarwal Nov 2010 A1
20100284418 Gray Nov 2010 A1
20100287262 Elzur Nov 2010 A1
20100287548 Zhou Nov 2010 A1
20100290473 Enduri Nov 2010 A1
20100299527 Arunan Nov 2010 A1
20100303071 Kotalwar Dec 2010 A1
20100303075 Tripathi Dec 2010 A1
20100303083 Belanger Dec 2010 A1
20100309820 Rajagopalan Dec 2010 A1
20100309912 Mehta Dec 2010 A1
20100329110 Rose Dec 2010 A1
20110019678 Mehta Jan 2011 A1
20110032945 Mullooly Feb 2011 A1
20110035489 McDaniel Feb 2011 A1
20110035498 Shah Feb 2011 A1
20110044339 Kotalwar Feb 2011 A1
20110044352 Chaitou Feb 2011 A1
20110064086 Xiong Mar 2011 A1
20110064089 Hidaka Mar 2011 A1
20110072208 Gulati Mar 2011 A1
20110085560 Chawla Apr 2011 A1
20110085563 Kotha Apr 2011 A1
20110110266 Li May 2011 A1
20110134802 Rajagopalan Jun 2011 A1
20110134803 Dalvi Jun 2011 A1
20110134925 Safrai Jun 2011 A1
20110142053 Van Der Merwe Jun 2011 A1
20110142062 Wang Jun 2011 A1
20110161494 Mcdysan Jun 2011 A1
20110161695 Okita Jun 2011 A1
20110176412 Stine Jul 2011 A1
20110188373 Saito Aug 2011 A1
20110194403 Sajassi Aug 2011 A1
20110194563 Shen Aug 2011 A1
20110228780 Ashwood-Smith Sep 2011 A1
20110231570 Altekar Sep 2011 A1
20110231574 Saunderson Sep 2011 A1
20110235523 Jha Sep 2011 A1
20110243133 Villait Oct 2011 A9
20110243136 Raman Oct 2011 A1
20110246669 Kanada Oct 2011 A1
20110255538 Srinivasan Oct 2011 A1
20110255540 Mizrahi Oct 2011 A1
20110261828 Smith Oct 2011 A1
20110268120 Vobbilisetty Nov 2011 A1
20110268125 Vobbilisetty Nov 2011 A1
20110273988 Tourrilhes Nov 2011 A1
20110274114 Dhar Nov 2011 A1
20110280572 Vobbilisetty Nov 2011 A1
20110286457 En Nov 2011 A1
20110296052 Guo Dec 2011 A1
20110299391 Vobbilisetty Dec 2011 A1
20110299413 Chatwani Dec 2011 A1
20110299414 Yu Dec 2011 A1
20110299527 Yu Dec 2011 A1
20110299528 Yu Dec 2011 A1
20110299531 Yu Dec 2011 A1
20110299532 Yu Dec 2011 A1
20110299533 Yu Dec 2011 A1
20110299534 Koganti Dec 2011 A1
20110299535 Vobbilisetty Dec 2011 A1
20110299536 Cheng Dec 2011 A1
20110317559 Kern Dec 2011 A1
20110317703 Dunbar et al. Dec 2011 A1
20120011240 Hara Jan 2012 A1
20120014261 Salam Jan 2012 A1
20120014387 Dunbar Jan 2012 A1
20120020220 Sugita Jan 2012 A1
20120027017 Rai Feb 2012 A1
20120033663 Guichard Feb 2012 A1
20120033665 Jacob Da Silva Feb 2012 A1
20120033668 Humphries Feb 2012 A1
20120033669 Mohandas Feb 2012 A1
20120033672 Page Feb 2012 A1
20120063363 Li Mar 2012 A1
20120075991 Sugita Mar 2012 A1
20120099567 Hart Apr 2012 A1
20120099602 Nagapudi Apr 2012 A1
20120106339 Mishra May 2012 A1
20120117438 Shaffer May 2012 A1
20120131097 Baykal May 2012 A1
20120131289 Taguchi May 2012 A1
20120134266 Roitshtein May 2012 A1
20120147740 Nakash Jun 2012 A1
20120158997 Hsu Jun 2012 A1
20120163164 Terry Jun 2012 A1
20120177039 Berman Jul 2012 A1
20120210416 Mihelich Aug 2012 A1
20120243539 Keesara Sep 2012 A1
20120275297 Subramanian Nov 2012 A1
20120275347 Banerjee Nov 2012 A1
20120278804 Narayanasamy Nov 2012 A1
20120294192 Masood Nov 2012 A1
20120294194 Balasubramanian Nov 2012 A1
20120320800 Kamble Dec 2012 A1
20120320926 Kamath et al. Dec 2012 A1
20120327766 Tsai et al. Dec 2012 A1
20120327937 Melman et al. Dec 2012 A1
20130003535 Sarwar Jan 2013 A1
20130003737 Sinicrope Jan 2013 A1
20130003738 Koganti Jan 2013 A1
20130028072 Addanki Jan 2013 A1
20130034015 Jaiswal Feb 2013 A1
20130034021 Jaiswal Feb 2013 A1
20130067466 Combs Mar 2013 A1
20130070762 Adams Mar 2013 A1
20130114595 Mack-Crane et al. May 2013 A1
20130124707 Ananthapadmanabha May 2013 A1
20130127848 Joshi May 2013 A1
20130136123 Ge May 2013 A1
20130148546 Eisenhauer Jun 2013 A1
20130194914 Agarwal Aug 2013 A1
20130219473 Schaefer Aug 2013 A1
20130250951 Koganti Sep 2013 A1
20130259037 Natarajan Oct 2013 A1
20130272135 Leong Oct 2013 A1
20130294451 Li Nov 2013 A1
20130301642 Radhakrishnan Nov 2013 A1
20130346583 Low Dec 2013 A1
20140013324 Zhang Jan 2014 A1
20140025736 Wang Jan 2014 A1
20140044126 Sabhanatarajan Feb 2014 A1
20140056298 Vobbilisetty Feb 2014 A1
20140105034 Sun Apr 2014 A1
20150010007 Matsuhira Jan 2015 A1
20150030031 Zhou Jan 2015 A1
20150143369 Zheng May 2015 A1
Foreign Referenced Citations (11)
Number Date Country
102801599 Nov 2012 CN
0579567 May 1993 EP
0579567 Jan 1994 EP
0993156 Dec 2000 EP
1398920 Mar 2004 EP
1916807 Apr 2008 EP
2001167 Dec 2008 EP
2008056838 May 2008 WO
2009042919 Apr 2009 WO
2010111142 Sep 2010 WO
2014031781 Feb 2014 WO
Non-Patent Literature Citations (206)
Entry
Rosen, E. et al., “BGP/MPLS VPNs”, Mar. 1999.
Office Action for U.S. Appl. No. 14/577,785, filed Dec. 19, 2014, dated Apr. 13, 2015.
Office Action for U.S. Appl. No. 13/786,328, filed Mar. 5, 2013, dated Mar. 13, 2015.
Office Action for U.S. Appl. No. 13/425,238, filed Mar. 20, 2012, dated Mar. 12, 2015.
Abawajy J. “An Approach to Support a Single Service Provider Address Image for Wide Area Networks Environment” Centre for Parallel and Distributed Computing, School of Computer Science Carleton University, Ottawa, Ontario, K1S 5B6, Canada.
Zhai F. Hu et al. 'RBridge: Pseudo-Nickname; draft-hu-trill-pseudonode-nickname-02.txt', May 15, 2012.
Mahalingam “VXLAN: A Framework for Overlaying Virtualized Layer 2 Networks over Layer 3 Networks” Oct. 17, 2013 pp. 1-22, Sections 1, 4 and 4.1.
Office action dated Apr. 30, 2015, U.S. Appl. No. 13/351,513, filed Jan. 17, 2012.
Office Action dated Apr. 1, 2015, U.S. Appl. No. 13/656,438, filed Oct. 19, 2012.
Office Action dated May 21, 2015, U.S. Appl. No. 13/288,822, filed Nov. 3, 2011.
Azodolmolky, Siamak et al., "Cloud computing networking: Challenges and opportunities for innovations", IEEE Communications Magazine, vol. 51, No. 7, Jul. 1, 2013.
Office Action dated Apr. 1, 2015, U.S. Appl. No. 13/656,438, filed Oct. 19, 2012.
Office action dated Jun. 8, 2015, U.S. Appl. No. 14/178,042, filed Feb. 11, 2014.
Office Action dated Jun. 10, 2015, U.S. Appl. No. 13/890,150, filed May 8, 2013.
Narten, T. et al. “Problem Statement: Overlays for Network Virtualization draft-narten-nvo3-overlay-problem-statement-01”, Oct. 31, 2011.
Knight, Paul et al. “Layer 2 and 3 Virtual Private Networks: Taxonomy, Technology, and Standardization Efforts”, 2004.
An Introduction to Brocade VCS Fabric Technology, Dec. 3, 2012.
Kreeger, L. et al. “Network Virtualization Overlay Control Protocol Requirements draft-kreeger-nvo3-overlay-cp-00”, Aug. 2, 2012.
Knight, Paul et al., “Network based IP VPN Architecture using Virtual Routers”, May 2003.
Louati, Wajdi et al., “Network-Based Virtual Personal Overlay Networks Using Programmable Virtual Routers”, 2005.
Brocade Unveils “The Effortless Network”, 2009.
The Effortless Network: HyperEdge Technology for the Campus LAN, 2012.
Foundry FastIron Configuration Guide, Software Release FSX 04.2.00b, Software Release FWS 04.3.00, Software Release FGS 05.0.00a, 2008.
FastIron and TurboIron 24X Configuration Guide, 2010.
FastIron Configuration Guide, Supporting IronWare Software Release 07.0.00, 2009.
Christensen, M. et al., Considerations for Internet Group Management Protocol (IGMP) and Multicast Listener Discovery (MLD) Snooping Switches, 2006.
Perlman, Radia et al. “RBridges: Base Protocol Specification”, <draft-ietf-trill-rbridge-protocol-16.txt>, 2010.
Brocade Fabric OS (FOS) 6.2 Virtual Fabrics Feature Frequently Asked Questions, 2009.
Eastlake III, Donald et al., “RBridges: TRILL Header Options”, 2009.
Perlman, Radia “Challenges and Opportunities in the Design of TRILL: a Routed layer 2 Technology”, 2009.
Perlman, Radia et al., “RBridge VLAN Mapping”, <draft-ietf-trill-rbridge-vlan-mapping-01.txt>, 2009.
Knight, S. et al., “Virtual Router Redundancy Protocol”, 1998.
“Switched Virtual Internetworking moves beyond bridges and routers”, Data Communications, vol. 23, No. 12, Sep. 1994.
Touch, J. et al., “Transparent Interconnection of Lots of Links (TRILL): Problem and Applicability Statement”, 2009.
Lapuh, Roger et al., “Split Multi-link Trunking (SMLT)”, 2002.
Lapuh, Roger et al., “Split Multi-Link Trunking (SMLT) draft-lapuh-network-smlt-08”, 2009.
Nadas, S. et al., “Virtual Router Redundancy Protocol (VRRP) Version 3 for IPv4 and IPv6”, 2010.
Office Action for U.S. Appl. No. 12/725,249, filed Mar. 16, 2010, dated Sep. 12, 2012.
Office Action for U.S. Appl. No. 12/725,249, filed Mar. 16, 2010, dated Apr. 26, 2013.
Office Action for U.S. Appl. No. 13/087,239, filed Apr. 14, 2011, dated Dec. 5, 2012.
Office Action for U.S. Appl. No. 13/087,239, filed Apr. 14, 2011, dated May 22, 2013.
Office Action for U.S. Appl. No. 13/098,490, filed May 2, 2011, dated Dec. 21, 2012.
Office Action for U.S. Appl. No. 13/098,490, filed May 2, 2011, dated Jul. 9, 2013.
Office Action for U.S. Appl. No. 13/092,724, filed Apr. 22, 2011, dated Feb. 5, 2013.
Office Action for U.S. Appl. No. 13/092,724, filed Apr. 22, 2011, dated Jul. 16, 2013.
Office Action for U.S. Appl. No. 13/092,580, filed Apr. 22, 2011, dated Jun. 10, 2013.
Office Action for U.S. Appl. No. 13/042,259, filed Mar. 7, 2011, dated Mar. 18, 2013.
Office Action for U.S. Appl. No. 13/092,460, filed Apr. 22, 2011, dated Jun. 21, 2013.
Office Action for U.S. Appl. No. 13/042,259, filed Mar. 7, 2011, dated Jul. 31, 2013.
Office Action for U.S. Appl. No. 13/092,701, filed Apr. 22, 2011, dated Jan. 28, 2013.
Office Action for U.S. Appl. No. 13/092,701, filed Apr. 22, 2011, dated Jul. 3, 2013.
Office Action for U.S. Appl. No. 13/092,752, filed Apr. 22, 2011, dated Feb. 5, 2013.
Office Action for U.S. Appl. No. 13/092,752, filed Apr. 22, 2011, dated Jul. 18, 2013.
Office Action for U.S. Appl. No. 12/950,974, filed Nov. 19, 2010, dated Dec. 20, 2012.
Office Action for U.S. Appl. No. 12/950,974, filed Nov. 19, 2010, dated May 24, 2012.
Office Action for U.S. Appl. No. 13/092,877, filed Apr. 22, 2011, dated Mar. 4, 2013.
Office Action for U.S. Appl. No. 13/092,877, filed Apr. 22, 2011, dated Sep. 5, 2013.
Office Action for U.S. Appl. No. 12/950,968, filed Nov. 19, 2010, dated Jun. 7, 2012.
Office Action for U.S. Appl. No. 12/950,968, filed Nov. 19, 2010, dated Jan. 4, 2013.
Office Action for U.S. Appl. No. 13/092,864, filed Apr. 22, 2011, dated Sep. 19, 2012.
Office Action for U.S. Appl. No. 13/098,360, filed Apr. 29, 2011, dated May 31, 2013.
Office Action for U.S. Appl. No. 13/044,326, filed Mar. 9, 2011, dated Oct. 2, 2013.
Office Action for U.S. Appl. No. 13/030,806, filed Feb. 18, 2011, dated Dec. 3, 2012.
Office Action for U.S. Appl. No. 13/030,806, filed Feb. 18, 2011, dated Jun. 11, 2013.
Office action dated Apr. 26, 2012, U.S. Appl. No. 12/725,249, filed Mar. 16, 2010.
Office action dated Sep. 12, 2012, U.S. Appl. No. 12/725,249, filed Mar. 16, 2010.
Office action dated Dec. 21, 2012, U.S. Appl. No. 13/098,490, filed May 2, 2011.
Office action dated Mar. 27, 2014, U.S. Appl. No. 13/098,490, filed May 2, 2011.
Office action dated Jul. 9, 2013, U.S. Appl. No. 13/098,490, filed May 2, 2011.
Office action dated May 22, 2013, U.S. Appl. No. 13/087,239, filed Apr. 14, 2011.
Office action dated Dec. 5, 2012, U.S. Appl. No. 13/087,239, filed Apr. 14, 2011.
Office action dated Apr. 9, 2014, U.S. Appl. No. 13/092,724, filed Apr. 22, 2011.
Office action dated Feb. 5, 2013, U.S. Appl. No. 13/092,724, filed Apr. 22, 2011.
Office action dated Jan. 10, 2014, U.S. Appl. No. 13/092,580, filed Apr. 22, 2011.
Office action dated Jun. 10, 2013, U.S. Appl. No. 13/092,580, filed Apr. 22, 2011.
Office action dated Jan. 16, 2014, U.S. Appl. No. 13/042,259, filed Mar. 7, 2011.
Office action dated Mar. 18, 2013, U.S. Appl. No. 13/042,259, filed Mar. 7, 2011.
Office action dated Jul. 31, 2013, U.S. Appl. No. 13/042,259, filed Mar. 7, 2011.
Office action dated Aug. 29, 2014, U.S. Appl. No. 13/042,259, filed Mar. 7, 2011.
Office action dated Mar. 14, 2014, U.S. Appl. No. 13/092,460, filed Apr. 22, 2011.
Office action dated Jun. 21, 2013, U.S. Appl. No. 13/092,460, filed Apr. 22, 2011.
Office action dated Jan. 28, 2013, U.S. Appl. No. 13/092,701, filed Apr. 22, 2011.
Office action dated Mar. 26, 2014, U.S. Appl. No. 13/092,701, filed Apr. 22, 2011.
Office action dated Jul. 3, 2013, U.S. Appl. No. 13/092,701, filed Apr. 22, 2011.
Office action dated Jul. 18, 2013, U.S. Appl. No. 13/092,752, filed Apr. 22, 2011.
Office action dated Dec. 20, 2012, U.S. Appl. No. 12/950,974, filed Nov. 19, 2010.
Office action dated May 24, 2012, U.S. Appl. No. 12/950,974, filed Nov. 19, 2010.
Office action dated Jan. 6, 2014, U.S. Appl. No. 13/092,877, filed Apr. 22, 2011.
Office action dated Sep. 5, 2013, U.S. Appl. No. 13/092,877, filed Apr. 22, 2011.
Office action dated Mar. 4, 2013, U.S. Appl. No. 13/092,877, filed Apr. 22, 2011.
Office action dated Jan. 4, 2013, U.S. Appl. No. 12/950,968, filed Nov. 19, 2010.
Office action dated Jun. 7, 2012, U.S. Appl. No. 12/950,968, filed Nov. 19, 2010.
Office action dated Sep. 19, 2012, U.S. Appl. No. 13/092,864, filed Apr. 22, 2011.
Office action dated May 31, 2013, U.S. Appl. No. 13/098,360, filed Apr. 29, 2011.
Office action dated Oct. 2, 2013, U.S. Appl. No. 13/044,326, filed Mar. 9, 2011.
Office action dated Dec. 3, 2012, U.S. Appl. No. 13/030,806, filed Feb. 18, 2011.
Office action dated Apr. 22, 2014, U.S. Appl. No. 13/030,806, filed Feb. 18, 2011.
Office action dated Jun. 11, 2013, U.S. Appl. No. 13/030,806, filed Feb. 18, 2011.
Office action dated Apr. 25, 2013, U.S. Appl. No. 13/030,688, filed Feb. 18, 2011.
Office action dated Feb. 22, 2013, U.S. Appl. No. 13/044,301, filed Mar. 9, 2011.
Office action dated Jun. 11, 2013, U.S. Appl. No. 13/044,301, filed Mar. 9, 2011.
Office action dated Oct. 26, 2012, U.S. Appl. No. 13/050,102, filed Mar. 17, 2011.
Office action dated May 16, 2013, U.S. Appl. No. 13/050,102, filed Mar. 17, 2011.
Office action dated Jan. 28, 2013, U.S. Appl. No. 13/184,526, filed Jul. 16, 2011.
Office action dated Dec. 2, 2013, U.S. Appl. No. 13/184,526, filed Jul. 16, 2011.
Office action dated May 22, 2013, U.S. Appl. No. 13/184,526, filed Jul. 16, 2011.
Office action dated Aug. 21, 2014, U.S. Appl. No. 13/184,526, filed Jul. 16, 2011.
Office action dated Nov. 29, 2013, U.S. Appl. No. 13/092,873, filed Apr. 22, 2011.
Office action dated Jun. 19, 2013, U.S. Appl. No. 13/092,873, filed Apr. 22, 2011.
Office action dated Jul. 18, 2013, U.S. Appl. No. 13/365,808, filed Feb. 3, 2012.
Office Action dated Mar. 6, 2014, U.S. Appl. No. 13/425,238, filed Mar. 20, 2012.
Office action dated Nov. 12, 2013, U.S. Appl. No. 13/312,903, filed Dec. 6, 2011.
Office action dated Jun. 13, 2013, U.S. Appl. No. 13/312,903, filed Dec. 6, 2011.
Office Action dated Jun. 18, 2014, U.S. Appl. No. 13/440,861, filed Apr. 5, 2012.
Office Action dated Feb. 28, 2014, U.S. Appl. No. 13/351,513, filed Jan. 17, 2012.
Office Action dated May 9, 2014, U.S. Appl. No. 13/484,072, filed May 30, 2012.
Office Action dated May 14, 2014, U.S. Appl. No. 13/533,843, filed Jun. 26, 2012.
Office Action dated Feb. 20, 2014, U.S. Appl. No. 13/598,204, filed Aug. 29, 2012.
Office Action dated Jun. 6, 2014, U.S. Appl. No. 13/669,357, filed Nov. 5, 2012.
Huang, Nen-Fu et al., ‘An Effective Spanning Tree Algorithm for a Bridged LAN’, Mar. 16, 1992.
Office Action for U.S. Appl. No. 13/351,513, filed Jan. 17, 2012, dated Feb. 28, 2014.
Office Action for U.S. Appl. No. 13/533,843, filed Jun. 26, 2012, dated Oct. 21, 2013.
Office action dated Aug. 4, 2014, U.S. Appl. No. 13/050,102, filed Mar. 17, 2011.
Perlman, Radia et al., ‘RBridges: Base Protocol Specification; Draft-ietf-trill-rbridge-protocol-16.txt’, Mar. 3, 2010, pp. 1-117.
‘An Introduction to Brocade VCS Fabric Technology’, BROCADE white paper, http://community.brocade.com/docs/DOC-2954, Dec. 3, 2012.
U.S. Appl. No. 13/030,806 Office Action dated Dec. 3, 2012.
Office Action dated Mar. 26, 2014, U.S. Appl. No. 13/092,701, filed Apr. 22, 2011, Examiner Park, Jung H.
Office Action dated Apr. 9, 2014, U.S. Appl. No. 13/092,752, filed Apr. 22, 2011.
Brocade, “Brocade Unveils ‘The Effortless Network’”, http://newsroom.brocade.com/press-releases/brocade-unveils-the-effortless-network-nasdaq-brcd-0859535, 2012.
Kreeger, L. et al., 'Network Virtualization Overlay Control Protocol Requirements draft-kreeger-nvo3-overlay-cp-00', Jan. 30, 2012.
Lapuh, Roger et al., ‘Split Multi-link Trunking (SMLT)’, draft-lapuh-network-smlt-08, Jul. 2008.
Office Action for U.S. Appl. No. 13/365,993, filed Feb. 3, 2012, from Cho, Hong Sol, dated Jul. 23, 2013.
Office Action for U.S. Appl. No. 12/950,974, filed Nov. 19, 2010, dated Dec. 20, 2012.
Office Action for U.S. Appl. No. 13/092,877, filed Apr. 22, 2011, dated Sep. 5, 2013.
Office Action for U.S. Appl. No. 13/092,877, dated Jan. 6, 2014.
Office Action for U.S. Appl. No. 13/098,490, filed May 2, 2011, dated Mar. 27, 2014.
Office Action for U.S. Appl. No. 13/092,873, filed Apr. 22, 2011, dated Nov. 29, 2013.
Office Action for U.S. Appl. No. 13/184,526, filed Jul. 16, 2011, dated Dec. 2, 2013.
Office Action for U.S. Appl. No. 13/598,204, filed Aug. 29, 2012, dated Feb. 20, 2014.
Office Action for U.S. Appl. No. 13/044,326, filed Mar. 9, 2011, dated Jul. 7, 2014.
Office Action for U.S. Appl. No. 13/092,752, filed Apr. 22, 2011, dated Apr. 9, 2014.
Office Action for U.S. Appl. No. 13/092,873, filed Apr. 22, 2011, dated Jul. 25, 2014.
Office Action for U.S. Appl. No. 13/092,877, filed Apr. 22, 2011, dated Jun. 20, 2014.
Office Action for U.S. Appl. No. 13/312,903, filed Dec. 6, 2011, dated Aug. 7, 2014.
Office Action for U.S. Appl. No. 13/351,513, filed Jan. 17, 2012, dated Jul. 24, 2014.
Office Action for U.S. Appl. No. 13/425,238, filed Mar. 20, 2012, dated Mar. 6, 2014.
Office Action for U.S. Appl. No. 13/556,061, filed Jul. 23, 2012, dated Jun. 6, 2014.
Office Action for U.S. Appl. No. 13/742,207, filed Jan. 15, 2013, dated Jul. 24, 2014.
Office Action for U.S. Appl. No. 12/950,974, filed Nov. 19, 2010, from Haile, Awet A., dated Dec. 2, 2012.
Perlman R: ‘Challenges and opportunities in the design of TRILL: a routed layer 2 technology’, 2009 IEEE Globecom Workshops, Honolulu, HI, USA, Piscataway, NJ, USA, Nov. 30, 2009, pp. 1-6, XP002649647, DOI: 10.1109/GLOBECOM.2009.5360776 ISBN: 1-4244-5626-0 [retrieved on Jul. 19, 2011].
TRILL Working Group Internet-Draft, Intended status: Proposed Standard, "RBridges: Base Protocol Specification", Mar. 3, 2010.
Office action dated Aug. 14, 2014, U.S. Appl. No. 13/092,460, filed Apr. 22, 2011.
Office action dated Jul. 7, 2014, for U.S. Appl. No. 13/044,326, filed Mar. 9, 2011.
Office Action dated Dec. 19, 2014, for U.S. Appl. No. 13/044,326, filed Mar. 9, 2011.
Office Action for U.S. Appl. No. 13/092,873, filed Apr. 22, 2011, dated Nov. 7, 2014.
Office Action for U.S. Appl. No. 13/092,877, filed Apr. 22, 2011.
Office Action for U.S. Appl. No. 13/157,942, filed Jun. 10, 2011.
McKeown, Nick et al., "OpenFlow: Enabling Innovation in Campus Networks", Mar. 14, 2008, www.openflow.org/documents/openflow-wp-latest.pdf.
Office Action for U.S. Appl. No. 13/044,301, filed Mar. 9, 2011.
Office Action for U.S. Appl. No. 13/184,526, filed Jul. 16, 2011, dated Jan. 5, 2015.
Office Action for U.S. Appl. No. 13/598,204, filed Aug. 29, 2012, dated Jan. 5, 2015.
Office Action for U.S. Appl. No. 13/669,357, filed Nov. 5, 2012, dated Jan. 30, 2015.
Office Action for U.S. Appl. No. 13/851,026, filed Mar. 26, 2013, dated Jan. 30, 2015.
Office Action for U.S. Appl. No. 13/092,460, filed Apr. 22, 2011, dated Mar. 13, 2015.
Office Action for U.S. Appl. No. 13/425,238, dated Mar. 12, 2015.
Office Action for U.S. Appl. No. 13/092,752, filed Apr. 22, 2011, dated Feb. 27, 2015.
Office Action for U.S. Appl. No. 13/042,259, filed Mar. 7, 2011, dated Feb. 23, 2015.
Office Action for U.S. Appl. No. 13/044,301, filed Mar. 9, 2011, dated Jan. 29, 2015.
Office Action for U.S. Appl. No. 13/050,102, filed Mar. 17, 2011, dated Jan. 26, 2015.
Office action dated Oct. 2, 2014, for U.S. Appl. No. 13/092,752, filed Apr. 22, 2011.
Kompella, K., Ed., et al., ‘Virtual Private LAN Service (VPLS) Using BGP for Auto-Discovery and Signaling’, Jan. 2007.
Office Action for U.S. Appl. No. 13/030,688, filed Feb. 18, 2011, dated Apr. 25, 2013.
Office Action for U.S. Appl. No. 13/044,301, filed Mar. 9, 2011, dated Jun. 11, 2013.
Office Action for U.S. Appl. No. 13/044,301, filed Mar. 9, 2011, dated Feb. 22, 2013.
Office Action for U.S. Appl. No. 13/050,102, filed Mar. 17, 2011, dated Oct. 26, 2012.
Office Action for U.S. Appl. No. 13/050,102, filed Mar. 17, 2011, dated May 16, 2013.
Office Action for U.S. Appl. No. 13/184,526, filed Jul. 16, 2011, dated Jan. 28, 2013.
Office Action for U.S. Appl. No. 13/184,526, filed Jul. 16, 2011, dated May 22, 2013.
Office Action for U.S. Appl. No. 13/092,873, filed Apr. 22, 2011, dated Jun. 19, 2013.
Office Action for U.S. Appl. No. 13/365,993, filed Feb. 3, 2012, dated Jul. 23, 2013.
Office Action for U.S. Appl. No. 13/365,808, filed Feb. 3, 2012, dated Jul. 18, 2013.
Office Action for U.S. Appl. No. 13/312,903, filed Dec. 6, 2011, dated Jun. 13, 2013.
Office Action dated Jun. 18, 2015, U.S. Appl. No. 13/098,490, filed May 2, 2011.
Office Action dated Jun. 16, 2015, U.S. Appl. No. 13/048,817, filed Mar. 15, 2011.
Office Action dated Jul. 31, 2015, U.S. Appl. No. 13/598,204, filed Aug. 29, 2012.
Office Action dated Jul. 31, 2015, U.S. Appl. No. 14/473,941, filed Aug. 29, 2014.
Office Action dated Jul. 31, 2015, U.S. Appl. No. 14/488,173, filed Sep. 16, 2014.
Office Action dated Aug. 21, 2015, U.S. Appl. No. 13/776,217, filed Feb. 25, 2013.
Office Action dated Aug. 19, 2015, U.S. Appl. No. 14/156,374, filed Jan. 15, 2014.
Office Action dated Sep. 2, 2015, U.S. Appl. No. 14/151,693, filed Jan. 9, 2014.
Office Action dated Sep. 17, 2015, U.S. Appl. No. 14/577,785, filed Dec. 19, 2014.
Office Action dated Sep. 22, 2015 U.S. Appl. No. 13/656,438, filed Oct. 19, 2012.
Office Action dated Nov. 5, 2015, U.S. Appl. No. 14/178,042, filed Feb. 11, 2014.
Office Action dated Oct. 19, 2015, U.S. Appl. No. 14/215,996, filed Mar. 17, 2014.
Office Action dated Sep. 18, 2015, U.S. Appl. No. 13/345,566, filed Jan. 6, 2012.
OpenFlow Switch Specification Version 1.1.0, Feb. 28, 2011.
OpenFlow Switch Specification Version 1.0.0, Dec. 31, 2009.
OpenFlow Configuration and Management Protocol 1.0 (OF-Config 1.0), Dec. 23, 2011.
OpenFlow Switch Specification Version 1.2, Dec. 5, 2011.
Office action dated Feb. 2, 2016, U.S. Appl. No. 13/092,460, filed Apr. 22, 2011.
Office Action dated Feb. 2, 2016, U.S. Appl. No. 14/154,106, filed Jan. 13, 2014.
Office Action dated Feb. 3, 2016, U.S. Appl. No. 13/098,490, filed May 2, 2011.
Office Action dated Feb. 4, 2016, U.S. Appl. No. 13/557,105, filed Jul. 24, 2012.
Office Action dated Feb. 2, 2016, U.S. Appl. No. 14/488,173, filed Sep. 16, 2014.
Office Action dated Feb. 24, 2016, U.S. Appl. No. 13/971,397, filed Aug. 20, 2013.
Office Action dated Feb. 24, 2016, U.S. Appl. No. 12/705,508, filed Feb. 12, 2010.
Related Publications (1)
Number Date Country
20140198801 A1 Jul 2014 US
Provisional Applications (1)
Number Date Country
61751803 Jan 2013 US