1. Technical Field
The present disclosure relates in general to network communication and, in particular, to an improved congestion management system for packet switched networks.
2. Description of the Related Art
As is known in the art, network communication is commonly premised on the well-known seven-layer Open Systems Interconnection (OSI) model, which defines the functions of various protocol layers while not specifying the layer protocols themselves. The seven layers, sometimes referred to herein as Layer 7 through Layer 1, are the application, presentation, session, transport, network, data link, and physical layers, respectively.
At a source station, data communication begins when data is received from a source process at the top (application) layer of the stack of functions. The data is sequentially formatted at each successively lower layer of the stack until a data frame of bits is obtained at the data link layer. Finally, at the physical layer, the data is transmitted in the form of electromagnetic signals toward a destination station via a network link. When received at the destination station, the transmitted data is passed up a corresponding stack of functions in the reverse order in which the data was processed at the source station, thus supplying the information to a receiving process at the destination station.
The principle of layered protocols, such as those supported by the OSI model, is that, while data traverses the model layers vertically, the layers at the source and destination stations interact in a peer-to-peer (i.e., Layer N to Layer N) manner, and the functions of each individual layer are performed without affecting the interface between the function of the individual layer and the protocol layers immediately above and below it. To achieve this effect, each layer of the protocol stack in the source station typically adds information (in the form of an encapsulated header) to the data generated by the sending process as the data descends the stack. At the destination station, these encapsulated headers are stripped off one-by-one as the data propagates up the layers of the stack until the decapsulated data is delivered to the receiving process.
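The descent and ascent of data through the stack can be illustrated with a minimal sketch. The header names below are illustrative only and are not tied to any particular protocol suite:

```python
# Minimal sketch of layered encapsulation/decapsulation: each layer of a
# protocol stack prepends its own header as data descends the stack at the
# source, and the corresponding layer strips it as data ascends the stack
# at the destination. Header names are illustrative only.
def encapsulate(payload: bytes, headers: list) -> bytes:
    # Descend the stack: each successively lower layer prepends its header.
    frame = payload
    for header in headers:              # e.g. [transport, network, data link]
        frame = header + frame
    return frame

def decapsulate(frame: bytes, headers: list) -> bytes:
    # Ascend the stack at the destination: strip headers one-by-one in the
    # reverse of the order in which they were added.
    for header in reversed(headers):
        assert frame.startswith(header)
        frame = frame[len(header):]
    return frame

stack = [b"TCP|", b"IP|", b"ETH|"]      # applied last-to-outermost
wire = encapsulate(b"hello", stack)     # b"ETH|IP|TCP|hello"
data = decapsulate(wire, stack)         # b"hello"
```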
The physical network coupling the source and destination stations may include any number of network nodes interconnected by one or more wired or wireless network links. The network nodes commonly include hosts (e.g., server computers, client computers, mobile devices, etc.) that produce and consume network traffic, switches, and routers. Conventional network switches interconnect different network segments and process and forward data at the data link layer (Layer 2) of the OSI model. Switches typically provide at least basic bridge functions, including filtering data traffic by Layer 2 Media Access Control (MAC) address, learning the source MAC addresses of frames, and forwarding frames based upon destination MAC addresses. Routers, which interconnect different networks at the network (Layer 3) of the OSI model, typically implement network services such as route processing, path determination and path switching.
A large network typically includes a large number of switches, which operate somewhat independently. Switches within the flow path of network data traffic include an ingress switch that receives incoming data packets and an egress switch that sends outgoing data packets, and frequently further include one or more intermediate switches coupled between the ingress and egress switches. In such a network, a switch is said to be congested when the rate at which data traffic ingresses at the switch exceeds the rate at which data traffic egresses at the switch.
In conventional networks, when a switch in a data flow path is congested with data traffic, the congested switch may apply “back pressure” by transmitting one or more congestion management messages, such as a priority-based flow control (PFC) or congestion notification (CN) message, requesting other switches in the network that are transmitting data traffic to the congested switch to reduce or to halt data traffic to the congested switch. Conventional congestion management messages may specify a backoff time period during which data traffic is reduced or halted, where the backoff time may be determined based upon the extent of congestion experienced by the congested switch.
Conventional congestion management messages may not provide satisfactory management of network traffic, however. Conventional congestion management schemes are voluntary in that the switches sourcing the congesting data traffic are free to ignore the congestion management messages of the congested switch and to continue to transmit excess data traffic, which ultimately will be dropped by the congested switch. Further, a delay occurs between when congestion is detected by the congested switch and when the other switches of the network stop sending data traffic to the congested switch. During the delay, excess data traffic can be dropped by the congested switch. Thus, the conventional techniques of congestion management are reactionary and can require the communication protocols utilized to transport the data traffic to recover dropped data traffic. Conventional congestion management is particularly inadequate in scenarios in which the flow path of data traffic includes a large number of series-connected switches. In such cases, congestion may start at the egress switch and then continue to build toward the ingress switch in domino fashion, with data traffic being dropped all along the line. The processing and latency required to recover traffic dropped in response to congestion is further exacerbated when the data traffic is communicated with lossy lower layer protocols.
In at least one embodiment, a switching network includes first, second and third switches coupled for communication, such that the first and third switches communicate data traffic via the second switch. The first switch is operable to request transmission credits from the third switch, receive the transmission credits from the third switch and perform transmission of data traffic in reference to the transmission credits. The third switch is operable to receive the request for transmission credits from the first switch, generate the transmission credits and transmit the transmission credits to the first switch via the second switch. The second switch is operable to modify the transmission credits transmitted by the third switch prior to receipt of the transmission credits at the first switch.
With reference now to the figures and with particular reference to
Referring now to
As shown, switching network 200 comprises a plurality (and in some cases a multiplicity) of switches 202. In various embodiments, each of switches 202 may be implemented in hardware, in software, or in a combination of hardware and software.
In the present exemplary embodiment, switches 202 of switching network 200 include a first switch 202a that serves as an ingress switch for at least some data traffic of switching network 200, a third switch 202c that serves as an egress switch for at least some data traffic of switching network 200, and intermediate second, fourth and fifth switches 202b, 202d, and 202e, respectively. Thus, in the depicted exemplary switching network 200, data traffic may be forwarded between first switch 202a and third switch 202c via multiple paths, including the first path through second and fifth switches 202b, 202e and the alternative second path through fourth and fifth switches 202d, 202e.
In a switching network 200 such as that illustrated, any of switches 202 may become congested as one or more other switches 202 transmit data traffic to a switch 202 at a rate greater than that switch 202 is itself able to forward that data traffic towards its destination(s). In many switching networks 200, congestion is more frequently experienced at egress switches 202, such as third switch 202c. For example, switch 202c may become congested as switches 202b, 202d and 202e all concentrate egress data traffic at egress switch 202c, which may provide access to a server or other frequently accessed network-connected resource. As noted above, conventional congestion management messaging has been found to be inadequate in at least some circumstances because of the inherent delay in receiving the congestion management messages at network nodes transmitting the congesting data traffic (e.g., switch 202a) and because of the optional adherence of nodes (e.g., switches 202a, 202b, 202d and 202e) to the reduction of data traffic requested by such congestion management messages.
With reference now to
Switch 300 additionally includes a crossbar 310 that is operable to intelligently switch data frames from any of ingress queues 306a-306m to any of egress queues 314a-314m under the direction of switch controller 330. As will be appreciated, switch controller 330 can be implemented with one or more centralized or distributed, special-purpose or general-purpose processing elements or logic devices, which may implement control entirely in hardware, or more commonly, through the execution of firmware and/or software by a processing element.
In order to intelligently switch data frames, switch controller 330 builds and maintains one or more data plane data structures, for example, a Layer 2 forwarding information base (FIB) 332 and a Layer 3 routing information base (RIB) 334, which can be implemented, for example, as tables in content-addressable memory (CAM). In some embodiments, the contents of FIB 332 can be preconfigured, for example, by utilizing a management interface to specify particular egress ports 302 for particular traffic classifications (e.g., MAC addresses, traffic types, ingress ports, etc.) of traffic. Switch controller 330 can alternatively or additionally build FIB 332 in an automated manner by learning from observed data frames an association between ports 302 and destination MAC addresses specified by the data frames and recording the learned associations in FIB 332. Switch controller 330 thereafter controls crossbar 310 to switch data frames in accordance with the associations recorded in FIB 332. RIB 334, if present, can similarly be preconfigured or dynamically built to route data packets. For example, in an embodiment in which switch 300 is a TRILL switch implemented in a TRILL network, RIB 334 is preferably preconfigured with a predetermined route through switching network 200 among multiple possible equal cost paths for each destination address. In other embodiments, dynamic routing algorithms, such as ECMP or the like, can be utilized to dynamically select (and update RIB 334 with) a route for a flow of data traffic based on Layer 3 address and/or other criteria.
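The automated learning behavior described for FIB 332 can be sketched as follows. This is a simplified model with hypothetical names; a production FIB would additionally handle entry aging, VLANs, and CAM capacity limits:

```python
# Simplified sketch of Layer 2 MAC learning and forwarding as described for
# FIB 332: learn the source MAC -> ingress port association from each
# observed frame, then forward by destination MAC, flooding on a miss.
class LearningSwitch:
    def __init__(self, ports):
        self.ports = set(ports)
        self.fib = {}                       # MAC address -> egress port

    def receive(self, src_mac, dst_mac, ingress_port):
        # Learn: associate the frame's source MAC with its arrival port.
        self.fib[src_mac] = ingress_port
        # Forward: use the learned association, or flood on an unknown MAC.
        if dst_mac in self.fib:
            return {self.fib[dst_mac]}
        return self.ports - {ingress_port}  # flood to all other ports

sw = LearningSwitch(ports=[1, 2, 3])
sw.receive("aa:aa", "bb:bb", ingress_port=1)   # bb:bb unknown -> flood {2, 3}
sw.receive("bb:bb", "aa:aa", ingress_port=2)   # aa:aa learned on port 1 -> {1}
```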
Switch controller 330 additionally implements a congestion management unit 336 that can be utilized to manage congestion (including its prevention) within a switching network 200. In accordance with a preferred embodiment, congestion management unit 336 is configured to request transmission credits from an end node in a flow path through switching network 200. The transmission credits may be denominated, for example, in a quanta of data traffic, such as a count of data frames (Layer 2) or data packets (Layer 3). The transmission credits can additionally have an associated maximum rate of data traffic expressed, for example, as data frames or data packets per time period. Congestion management unit 336 is further configured to regulate transmission of the data traffic by switch 300 along the flow path in accordance with the transmission credits it has received. In addition, congestion management unit 336 is configured to generate transmission credits requested by other switches 300, as well as to modify transmission credits generated by other switches 300 in accordance with the available transmission capabilities and available bandwidth of switch 300. The congestion management implemented by congestion management unit 336 is described in greater detail below with reference to
In support of the congestion management implemented by congestion management unit 336, congestion management unit 336 preferably implements a data structure, such as transmission credit data structure 340. In the depicted embodiment, transmission credit data structure 340 includes multiple entries 342 each identifying a particular switch 202 in a switch ID (SID) field 344 and associating the identified switch 202 with a count of transmission credits specified by a transmission credit counter (TCC) 346. In response to receipt by switch 300 of transmission credits generated by a particular switch 202, congestion management unit 336 updates the TCC 346 of the relevant entry 342 (or installs a new entry 342) in transmission credit data structure 340. Congestion management unit 336 then diminishes the transmission credits reflected by a given TCC 346 as data traffic is forwarded by switch 300 to the switch 202 associated with that TCC 346. In a preferred embodiment, switch 300 is configured to not forward data traffic to a switch 202 in excess of that for which switch 300 has transmission credits. Although shown separately from FIB 332 and RIB 334 for the sake of clarity, it should be appreciated that in some embodiments, transmission credit data structure 340 can be implemented in combination with one or both of FIB 332 and RIB 334. Further, it should be appreciated that switch controller 330 can install and/or update the forwarding and routing entries in FIB 332 and RIB 334 based on the flow paths for which it has the greatest amount of available transmission credits.
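The bookkeeping performed on transmission credit data structure 340 can be sketched as follows. This is a simplified model; the method names `grant` and `consume` are illustrative and do not appear in the disclosure:

```python
# Sketch of transmission credit data structure 340: each entry pairs a
# switch ID (the SID field 344) with a transmission credit counter (TCC 346).
# Credits are added when a grant is received and diminished as data traffic
# is forwarded; forwarding in excess of the available credits is refused.
class CreditTable:
    def __init__(self):
        self.tcc = {}                       # switch ID -> credit count

    def grant(self, sid, credits):
        # Update the TCC of the relevant entry, or install a new entry.
        self.tcc[sid] = self.tcc.get(sid, 0) + credits

    def consume(self, sid, frames):
        # Diminish credits as traffic is forwarded toward switch `sid`;
        # refuse to forward beyond the credits on hand.
        if self.tcc.get(sid, 0) < frames:
            return False
        self.tcc[sid] -= frames
        return True

table = CreditTable()
table.grant("202c", 100)
table.consume("202c", 60)     # allowed; 40 credits remain
table.consume("202c", 60)     # refused; only 40 credits remain
```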
As noted above, any of switches 202 may be implemented as a virtual switch by program code executed on a physical host platform. For example,
With reference now to
The process of
In response to a determination at block 504 that switch 202a needs additional transmission credits for switch 202c in order to forward data traffic, the process proceeds to block 510. Block 510 depicts first switch 202a transmitting a request for transmission credits to third switch 202c, meaning that in the embodiment of switching network 200 shown in
Next, at block 512, congestion management unit 336 determines whether or not a transmission credit grant for third switch 202c has been received within a timeout period. As with the transmission credit request, the transmission credit grant can be communicated to first switch 202a out-of-band in a proprietary control frame, out-of-band in a standards-based control frame, or can be “piggy-backed” in an in-band data frame. If a transmission credit grant has not been received within the timeout period, the process returns to block 504, which has been described. If, however, congestion management unit 336 determines at block 512 that a transmission credit grant has been received, congestion management unit 336 updates the TCC 346 associated with third switch 202c with the received transmission credits (block 520).
With the transmission credits, congestion management unit 336 permits switch 202a to forward data traffic to third switch 202c, which in the embodiment of
As indicated at blocks 524 and 526, as long as first switch 202a has more data traffic to forward toward third switch 202c and has sufficient (i.e., more than a transmission threshold amount of) transmission credits to do so, first switch 202a continues to forward data traffic to third switch 202c via one or more intermediate switches, such as second switch 202b, as depicted at block 522. If the data traffic to be forwarded is exhausted prior to the available transmission credits, then the process of
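The request/grant/forward loop at the ingress switch (blocks 504-526) can be sketched as follows. This is a simplified model in which the grant callback, timeout handling, and threshold are assumptions for illustration; the retry-after-timeout path is not modeled:

```python
# Sketch of the ingress-switch behavior of blocks 504-526: request credits
# when at or below a transmission threshold, then forward data traffic only
# while sufficient credits remain. `request_grant` stands in for the round
# trip through the intermediate switch(es); it returns a credit count, or
# None on timeout.
def forward_with_credits(frames, request_grant, threshold=0):
    credits = 0
    sent = []
    for frame in frames:
        if credits <= threshold:
            # Blocks 510/512: request more credits; give up on timeout.
            new_credits = request_grant()
            if new_credits is None:
                break
            credits += new_credits        # block 520: update the TCC
        # Block 522: forward one frame, diminishing the available credits.
        sent.append(frame)
        credits -= 1
    return sent, credits

# With a grant of 3 credits per request, forwarding 5 frames takes 2 grants.
sent, remaining = forward_with_credits(list(range(5)), lambda: 3)
# sent == [0, 1, 2, 3, 4], remaining == 1
```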
Referring now to
The process of
Returning to block 610, in response to third switch 202c determining that it has bandwidth available to forward data traffic, the process proceeds from block 610 to block 614. Block 614 depicts congestion management unit 336 of third switch 202c generating a transmission credit grant specifying an amount of transmission credits in accordance with its transmission capabilities and available bandwidth. Third switch 202c thereafter transmits the transmission credit grant toward first switch 202a via second switch 202b (block 616). The process of
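The grant generation at the third switch (blocks 610-616) amounts to sizing the grant by the switch's transmission capability and available bandwidth. A minimal sketch, with the function and parameter names assumed for illustration:

```python
# Sketch of blocks 610-616: the end node generates a transmission credit
# grant sized in accordance with its transmission capabilities and its
# available bandwidth. A grant of zero corresponds to the branch in which
# no bandwidth is currently available.
def generate_grant(requested, capability, available_bandwidth):
    if available_bandwidth <= 0:
        return 0        # block 610: no bandwidth available -> no credits
    # Block 614: grant no more than the request, the switch's transmission
    # capability, or its available bandwidth allows.
    return min(requested, capability, available_bandwidth)

generate_grant(100, capability=80, available_bandwidth=50)   # -> 50
generate_grant(100, capability=80, available_bandwidth=0)    # -> 0
```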
With reference now to
The process of
Returning to block 704, in response to second switch 202b determining that it has bandwidth available to forward data traffic in accordance with the full amount of transmission credits specified in the transmission credit grant, congestion management unit 336 does not diminish the amount of transmission credits specified in the transmission credit grant. Instead, the process proceeds directly from block 704 to block 708. Block 708 depicts second switch 202b forwarding the transmission credit grant specifying the possibly modified amount of transmission credits toward first switch 202a. The process of
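The pass-through behavior at the second switch (blocks 704-708) reduces to clamping the granted credits to what the intermediate switch itself can carry. A hedged sketch, with names assumed for illustration:

```python
# Sketch of blocks 704-708: an intermediate switch forwards a transmission
# credit grant unmodified if it can carry the full granted amount of data
# traffic, and otherwise diminishes the grant to match its own available
# bandwidth before forwarding it toward the requesting switch.
def relay_grant(granted, own_available):
    if own_available >= granted:
        return granted            # block 708: forward unmodified
    return own_available          # diminish the grant to local capacity

# An end-to-end grant is therefore bounded by the weakest hop in the path.
relay_grant(100, own_available=120)   # -> 100 (unmodified)
relay_grant(100, own_available=40)    # -> 40  (diminished)
```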
As has been described, in at least one embodiment a switching network includes first, second and third switches coupled for communication, such that the first and third switches communicate data traffic via the second switch. The first switch is operable to request transmission credits from the third switch, receive the transmission credits from the third switch and perform transmission of data traffic in reference to the transmission credits. The third switch is operable to receive the request for transmission credits from the first switch, generate the transmission credits and transmit the transmission credits to the first switch via the second switch. The second switch is operable to modify the transmission credits transmitted by the third switch prior to receipt of the transmission credits at the first switch.
While the present invention has been particularly shown and described with reference to one or more preferred embodiments, it will be understood by those skilled in the art that various changes in form and detail may be made therein without departing from the spirit and scope of the invention. For example, although aspects have been described with respect to one or more machines (e.g., hosts and/or network switches) executing program code (e.g., software, firmware or a combination thereof) that direct the functions described herein, it should be understood that embodiments may alternatively be implemented as a program product including a tangible machine-readable storage medium or storage device (e.g., an optical storage medium, memory storage medium, disk storage medium, etc.) storing program code that can be processed by a machine to cause the machine to perform one or more of the described functions.
| Number | Name | Date | Kind |
|---|---|---|---|
| 5394402 | Ross | Feb 1995 | A |
| 5515359 | Zheng | May 1996 | A |
| 5617421 | Chin et al. | Apr 1997 | A |
| 5633859 | Jain et al. | May 1997 | A |
| 5633861 | Hanson et al. | May 1997 | A |
| 5742604 | Edsall et al. | Apr 1998 | A |
| 5893320 | Demaree | Apr 1999 | A |
| 6147970 | Troxel | Nov 2000 | A |
| 6304901 | McCloghrie et al. | Oct 2001 | B1 |
| 6347337 | Shah et al. | Feb 2002 | B1 |
| 6567403 | Congdon et al. | May 2003 | B1 |
| 6646985 | Park et al. | Nov 2003 | B1 |
| 6839768 | Ma et al. | Jan 2005 | B2 |
| 6901452 | Bertagna | May 2005 | B1 |
| 7035220 | Simcoe | Apr 2006 | B1 |
| 7263060 | Garofalo et al. | Aug 2007 | B1 |
| 7508763 | Lee | Mar 2009 | B2 |
| 7561517 | Klinker et al. | Jul 2009 | B2 |
| 7593320 | Cohen et al. | Sep 2009 | B1 |
| 7668966 | Klinker et al. | Feb 2010 | B2 |
| 7830793 | Gai et al. | Nov 2010 | B2 |
| 7839777 | DeCusatis et al. | Nov 2010 | B2 |
| 7848226 | Morita | Dec 2010 | B2 |
| 7912003 | Radunovic et al. | Mar 2011 | B2 |
| 7974223 | Zelig et al. | Jul 2011 | B2 |
| 8085657 | Legg | Dec 2011 | B2 |
| 8139358 | Tambe | Mar 2012 | B2 |
| 8194534 | Pandey et al. | Jun 2012 | B2 |
| 8213429 | Wray et al. | Jul 2012 | B2 |
| 8265075 | Pandey | Sep 2012 | B2 |
| 8271680 | Salkewicz | Sep 2012 | B2 |
| 8325598 | Krzanowski | Dec 2012 | B2 |
| 8345697 | Kotha et al. | Jan 2013 | B2 |
| 8406128 | Brar et al. | Mar 2013 | B1 |
| 8498284 | Pani et al. | Jul 2013 | B2 |
| 8498299 | Katz et al. | Jul 2013 | B2 |
| 8625427 | Terry et al. | Jan 2014 | B1 |
| 20020191628 | Liu et al. | Dec 2002 | A1 |
| 20030185206 | Jayakrishnan | Oct 2003 | A1 |
| 20040088451 | Han | May 2004 | A1 |
| 20040243663 | Johanson et al. | Dec 2004 | A1 |
| 20040255288 | Hashimoto et al. | Dec 2004 | A1 |
| 20050047334 | Paul et al. | Mar 2005 | A1 |
| 20060029072 | Perera et al. | Feb 2006 | A1 |
| 20060251067 | DeSanti et al. | Nov 2006 | A1 |
| 20070036178 | Hares et al. | Feb 2007 | A1 |
| 20070263640 | Finn | Nov 2007 | A1 |
| 20080205377 | Chao et al. | Aug 2008 | A1 |
| 20080225712 | Lange | Sep 2008 | A1 |
| 20080228897 | Ko | Sep 2008 | A1 |
| 20090129385 | Wray et al. | May 2009 | A1 |
| 20090185571 | Tallet | Jul 2009 | A1 |
| 20090213869 | Rajendran et al. | Aug 2009 | A1 |
| 20090252038 | Cafiero et al. | Oct 2009 | A1 |
| 20100054129 | Kuik et al. | Mar 2010 | A1 |
| 20100054260 | Pandey et al. | Mar 2010 | A1 |
| 20100158024 | Sajassi et al. | Jun 2010 | A1 |
| 20100183011 | Chao | Jul 2010 | A1 |
| 20100223397 | Elzur | Sep 2010 | A1 |
| 20100226368 | Mack-Crane et al. | Sep 2010 | A1 |
| 20100246388 | Gupta et al. | Sep 2010 | A1 |
| 20100257263 | Casado et al. | Oct 2010 | A1 |
| 20100265824 | Chao et al. | Oct 2010 | A1 |
| 20100303075 | Tripathi et al. | Dec 2010 | A1 |
| 20110007746 | Mudigonda et al. | Jan 2011 | A1 |
| 20110019678 | Mehta et al. | Jan 2011 | A1 |
| 20110026403 | Shao et al. | Feb 2011 | A1 |
| 20110026527 | Shao et al. | Feb 2011 | A1 |
| 20110032944 | Elzur et al. | Feb 2011 | A1 |
| 20110035494 | Pandey et al. | Feb 2011 | A1 |
| 20110103389 | Kidambi et al. | May 2011 | A1 |
| 20110134793 | Elsen et al. | Jun 2011 | A1 |
| 20110235523 | Jha et al. | Sep 2011 | A1 |
| 20110280572 | Vobbilisetty et al. | Nov 2011 | A1 |
| 20110299406 | Vobbilisetty et al. | Dec 2011 | A1 |
| 20110299409 | Vobbilisetty et al. | Dec 2011 | A1 |
| 20110299532 | Yu et al. | Dec 2011 | A1 |
| 20110299536 | Cheng et al. | Dec 2011 | A1 |
| 20120014261 | Salam et al. | Jan 2012 | A1 |
| 20120014387 | Dunbar et al. | Jan 2012 | A1 |
| 20120033541 | Jacob Da Silva et al. | Feb 2012 | A1 |
| 20120131662 | Kuik et al. | May 2012 | A1 |
| 20120177045 | Berman | Jul 2012 | A1 |
| 20120228780 | Kim et al. | Sep 2012 | A1 |
| 20120243539 | Keesara | Sep 2012 | A1 |
| 20120243544 | Keesara | Sep 2012 | A1 |
| 20120287786 | Kamble et al. | Nov 2012 | A1 |
| 20120287787 | Kamble et al. | Nov 2012 | A1 |
| 20120287939 | Leu et al. | Nov 2012 | A1 |
| 20120320749 | Kamble et al. | Dec 2012 | A1 |
| 20130022050 | Leu et al. | Jan 2013 | A1 |
| 20130051235 | Song et al. | Feb 2013 | A1 |
| 20130064067 | Kamath et al. | Mar 2013 | A1 |
| 20130064068 | Kamath et al. | Mar 2013 | A1 |
| Number | Date | Country |
|---|---|---|
| 1897567 | Jan 2007 | CN |
| 101030959 | Sep 2007 | CN |
| 101087238 | Dec 2007 | CN |
| 0853405 | Jul 1998 | EP |
| 0853405 | Sep 1998 | EP |
| Entry |
|---|
| Schlansker et al., “High-Performance Ethernet-Based Communications for Future Multi-Core Processors”, Proceedings of the 2007 ACM/IEEE Conference on Supercomputing, Nov. 10-16, 2007. |
| Yoshigoe et al., “Rate Control for Bandwidth Allocated Services in IEEE 802.3 Ethernet”, Proceedings of the 26th Annual IEEE Conference on Local Computer Networks, Nov. 14-16, 2001. |
| Tolmie, “HIPPI-6400—Designing for speed”, 12th Annual International Symposium on High Performance Computing Systems and Applications (HPCS'98), May 20-22, 1998. |
| U.S. Appl. No. 13/107894, Non-Final Office Action Dated Jun. 20, 2013. |
| U.S. Appl. No. 13/594970, Final Office Action Dated Sep. 25, 2013. |
| U.S. Appl. No. 13/594970, Non-Final Office Action Dated May 29, 2013. |
| U.S. Appl. No. 13/107397, Final Office Action Dated May 29, 2013. |
| U.S. Appl. No. 13/107397, Non-Final Office Action Dated Jan. 4, 2013. |
| U.S. Appl. No. 13/466754, Non-Final Office Action Dated Sep. 25, 2013. |
| U.S. Appl. No. 13/229867, Non-Final Office Action Dated May 24, 2013. |
| U.S. Appl. No. 13/595047, Non-Final Office Action Dated May 24, 2013. |
| U.S. Appl. No. 13/107985, Notice of Allowance Dated Jul. 18, 2013. |
| U.S. Appl. No. 13/107985, Non-Final Office Action Dated Feb. 28, 2013. |
| U.S. Appl. No. 13/107433, Final Office Action Dated Jul. 10, 2013. |
| U.S. Appl. No. 13/107433 Non-Final Office Action Dated Jan. 28, 2013. |
| U.S. Appl. No. 13/466790, Final Office Action Dated Jul. 12, 2013. |
| U.S. Appl. No. 13/466790, Non-Final Office Action Dated Feb. 15, 2013. |
| U.S. Appl. No. 13/107554, Final Office Action Dated Jul. 3, 2013. |
| U.S. Appl. No. 13/107554, Non-Final Office Action Dated Jan. 8, 2013. |
| U.S. Appl. No. 13/229891, Non-Final Office Action Dated May 9, 2013. |
| U.S. Appl. No. 13/595405, Non-Final Office Action Dated May 9, 2013. |
| U.S. Appl. No. 13/107896, Notice of Allowance Dated Jul. 29, 2013. |
| U.S. Appl. No. 13/107896, Non-Final Office Action Dated Mar. 7, 2013. |
| U.S. Appl. No. 13/267459, Non-Final Office Action Dated May 2, 2013. |
| U.S. Appl. No. 13/267578, Non-Final Office Action Dated Aug. 6, 2013. |
| U.S. Appl. No. 13/267578, Non-Final Office Action Dated Apr. 5, 2013. |
| U.S. Appl. No. 13/314455, Final Office Action Dated Aug. 30, 2013. |
| U.S. Appl. No. 13/314455, Non-Final Office Action Dated Apr. 24, 2013. |
| Martin, et al., “Accuracy and Dynamics of Multi-Stage Load Balancing for Multipath Internet Routing”, Institute of Computer Science, Univ. Of Wurzburg Am Hubland, Germany, IEEE Int'l Conference on Communications (ICC) Glasgow, UK, pp. 1-8, Jun. 2007. |
| Kinds, et al., “Advanced Network Monitoring Brings Life to the Awareness Plane”, IBM Research Spyros Denazis, Univ. Of Patras Benoit Claise, Cisco Systems, IEEE Communications Magazine, pp. 1-7, Oct. 2008. |
| Kandula, et al., “Dynamic Load Balancing Without Packet Reordering”, ACM SIGCOMM Computer Communication Review, vol. 37, No. 2, pp. 53-62, Apr. 2007. |
| Vazhkudai, et al., “Enabling the Co-Allocation of Grid Data Transfers”, Department of Computer and Information Sciences, The Univ. Of Mississippi, pp. 44-51, Nov. 17, 2003. |
| Xiao, et al., “Internet QoS: A Big Picture”, Michigan State University, IEEE Network, pp. 8-18, Mar./Apr. 1999. |
| Jo et al., “Internet Traffic Load Balancing using Dynamic Hashing with Flow Volume”, Conference Title: Internet Performance and Control of Network Systems III, Boston, MA pp. 1-12, Jul. 30, 2002. |
| Schueler et al., “TCP-Splitter: A TCP/IP Flow Monitor in Reconfigurable Hardware”, Appl. Res. Lab., Washington Univ. pp. 54-59, Feb. 19, 2003. |
| Yemini et al., “Towards Programmable Networks”; Dept. of Computer Science Columbia University, pp. 1-11, Apr. 15, 1996. |
| Soule, et al., “Traffic Matrices: Balancing Measurements, Interference and Modeling”, vol. 33, Issue: 1, Publisher: ACM, pp. 362-373, Year 2005. |
| De-Leon, “Flow Control for Gigabit”, Digital Equipment Corporation (Digital), IEEE 802.3z Task Force, Jul. 9, 1996. |
| Manral, et al., “Rbridges: Bidirectional Forwarding Detection (BFD) support for TRILL draft-manral-trill-bfd-encaps-01”, pp. 1-10, TRILL Working Group Internet-Draft, Mar. 13, 2011. |
| Perlman, et al., “Rbridges: Base Protocol Specification”, pp. 1-117, TRILL Working Group Internet-Draft, Mar. 3, 2010. |
| D.E. Eastlake, “Rbridges and the IETF TRILL Protocol”, pp. 1-39, TRILL Protocol, Dec. 2009. |
| Leu, Dar-Ren, “dLAG-DMLT over TRILL”, BLADE Network Technologies, pp. 1-20, Copyright 2009. |
| Posted by Mike Fratto, “Cisco's FabricPath and IETF TRILL: Cisco Can't Have Standards Both Ways”, Dec. 17, 2010; http://www.networkcomputing.com/data-networking-management/229500205. |
| Cisco Systems Inc., “Cisco FabricPath Overview”, pp. 1-20, Copyright 2009. |
| Brocade, “BCEFE in a Nutshell First Edition”, Global Education Services Rev. 0111, pp. 1-70, Copyright 2011, Brocade Communications Systems, Inc. |
| Pettit et al., Virtual Switching in an Era of Advanced Edges, pp. 1-7, Nicira Networks, Palo Alto, California. Version date Jul. 2010. |
| Pfaff et al., Extending Networking into the Virtualization Layer, pp. 1-6, Oct. 2009, Proceedings of the 8th ACM Workshop on Hot Topics in Networks (HotNets-VIII), New York City, New York. |
| Sherwood et al., FlowVisor: A Network Virtualization Layer, pp. 1-14, Oct. 14, 2009, Deutsche Telekom Inc. R&D Lab, Stanford University, Nicira Networks. |
| Yan et al., Tesseract: A 4D Network Control Plane, pp. 1-15, NSDI'07 Proceedings of the 4th USENIX conference on Networked systems design & implementation USENIX Association Berkeley, CA, USA 2007. |
| Hunter et al., BladeCenter, IBM Journal of Research and Development, vol. 49, No. 6, p. 905. Nov. 2005. |
| VMware, Inc., “VMware Virtual Networking Concepts”, pp. 1-12, Latest Revision: Jul. 29, 2007. |
| Perla, “Profiling User Activities on Guest OSes in a Virtual Machine Environment.” (2008). |
| Shi et al., Architectural Support for High Speed Protection of Memory Integrity and Confidentiality in Multiprocessor Systems, pp. 1-12, Proceedings of the 13th International Conference on Parallel Architecture and Compilation Techniques (2004). |
| Guha et al., ShutUp: End-to-End Containment of Unwanted Traffic, pp. 1-14, (2008). |
| Recio et al., Automated Ethernet Virtual Bridging, pp. 1-11, IBM 2009. |
| Sproull et al., “Control and Configuration Software for a Reconfigurable Networking Hardware Platform”, Applied Research Laboratory, Washington University, Saint Louis, MO 63130; pp. 1-10 (or 45-54)—Issue Date: 2002, Date of Current Version: Jan. 6, 2003. |
| Papadopoulos et al.,“NPACI Rocks: Tools and Techniques for Easily Deploying Manageable Linux Clusters”, The San Diego Supercomputer Center, University of California San Diego, La Jolla, CA 92093-0505—Issue Date: 2001, Date of Current Version: Aug. 7, 2002. |
| Ruth et al., Virtual Distributed Environments in a Shared Infrastructure, pp. 63-69, IEEE Computer Society, May 2005. |
| Rouiller, “Virtual LAN Security: weaknesses and countermeasures”, pp. 1-49, GIAC Security Essentials Practical Assignment Version 1.4b, Dec. 2003. |
| Walters et al., An Adaptive Heterogeneous Software DSM, pp. 1-8, Columbus, Ohio, Aug. 14-Aug. 18. |
| Skyrme et al., Exploring Lua for Concurrent Programming, pp. 3556-3572, Journal of Universal Computer Science, vol. 14, No. 21 (2008), submitted: Apr. 16, 2008, accepted: May 6, 2008, appeared: Jan. 12, 2008. |
| Dobre, Multi-Architecture Operating Systems, pp. 1-82, Oct. 4, 2004. |
| Int'l Searching Authority; Int. Appln. PCT/IB2012/051803; Int'l Search Report dated Sep. 13, 2012 (7 pg.). |
| U.S. Appl. No. 13/107893, Notice of Allowance Dated Jul. 10, 2013. |
| U.S. Appl. No. 13/107893, Non-Final Office Action Dated Apr. 1, 2013. |
| U.S. Appl. No. 13/472964, Notice of Allowance Dated Jul. 12, 2013. |
| U.S. Appl. No. 13/472964, Non-Final Office Action Dated Mar. 29, 2013. |
| U.S. Appl. No. 13/107903, Notice of Allowance Dated Sep. 11, 2013. |
| U.S. Appl. No. 13/107903, Final Office Action Dated Jul. 19, 2013. |
| U.S. Appl. No. 13/107903, Non-Final Office Action Dated Feb. 22, 2013. |
| U.S. Appl. No. 13/585446, Notice of Allowance Dated Sep. 12, 2013. |
| U.S. Appl. No. 13/585446, Final Office Action Dated Jul. 19, 2013. |
| U.S. Appl. No. 13/585446, Non-Final Office Action Dated Feb. 16, 2013. |
| U.S. Appl. No. 13/107554, Notice of Allowance Dated Oct. 18, 2013. |
| U.S. Appl. No. 13/267459, Final Office Action Dated Oct. 23, 2013. |
| U.S. Appl. No. 13/107894, Final Office Action Dated Nov. 1, 2013. |
| U.S. Appl. No. 13/594993, Non-Final Office Action Dated Oct. 25, 2013. |
| Number | Date | Country | |
|---|---|---|---|
| 20130088959 A1 | Apr 2013 | US |