The present disclosure relates to networking and more particularly to techniques for communicating messages between processing entities on a network device.
A network device may have multiple processing entities within the device. In a distributed software model, each processing entity may execute one or more applications running on an operating system and network system. The network system may comprise a network stack, such as an OSI network stack of networking layer protocols. Different instantiations of an application may run on multiple processing entities within the network device, and application messages may be communicated between the instantiations using messaging schemes supported by the networking layer protocols.
The multiple processing entities may provide redundancy to the network device to avoid traffic disruption upon a failure event, wherein a failover should occur to switch processing to a redundant or standby processing entity. In some network devices, there is a need for high failover capability in order to provide high availability (HA) or continuous availability messaging to ensure hitless failover. Typically, applications that support HA messaging need to ensure redundancy for all permutations of failures at the processing entities of the network device. To avoid losing critical messages during a failover, an application needs to guarantee that messages can be delivered regardless of which end (i.e., the source or the destination) is failing over. This typically requires an application to include additional software to handle the various failover permutations. Thus, multiple applications running on a network device may each need to implement its own software to support HA messaging.
Certain embodiments of the present invention enable application message delivery to be automatically guaranteed for all failover scenarios through use of a novel infrastructure layer that supports HA messaging. The High Availability Application Messaging Layer (HAML) can guarantee delivery of application messages whether a failover occurs at one or both of the source and the intended destination of the message. The HAML may be used to transmit messages to one or more intended destinations. Accordingly, the HAML may be used for unicast messaging or for multicast messaging. In some embodiments, the HAML may be HA aware, which refers to the awareness of the HAML of the redundancy for all processing entities within a network device to ensure hitless failover at the network device. By moving support for HA messaging from individual applications to the HAML, as a common infrastructure layer across the processing entities, the individual applications do not need to implement additional software to explicitly support HA messaging.
In one embodiment, a network device comprises a first processing entity, a second processing entity, a third processing entity, and a fourth processing entity. The first processing entity is configurable to operate in a first role and to transmit a message for an intended destination, where the first processing entity is the source of the message. The second processing entity is configurable to operate in a second role, to receive the message, and to store the message at the second processing entity, where the second processing entity is a peer to the source of the message. The third processing entity is configurable to operate in the first role and to receive the message, where the third processing entity is the intended destination of the message. The fourth processing entity is configurable to operate in the second role, to receive the message, and to store the message at the fourth processing entity, where the fourth processing entity is a peer to the intended destination of the message.
In certain embodiments, the first role is an active role, wherein a processing entity operating in the first role is further configurable to perform a set of transport-related functions in the active role; and the second role is a standby role, wherein a processing entity operating in the second role is further configurable to not perform the set of transport-related functions in the standby role. In certain embodiments, the first processing entity is further configurable to receive an acknowledgement indicating that the message was received at the third processing entity and at the fourth processing entity, and in response to receiving the acknowledgement, to transmit a notification to the second processing entity to remove the message stored at the second processing entity; and the second processing entity is further configurable to receive the notification, and in response to receiving the notification, to remove the message stored at the second processing entity. The fourth processing entity may be further configurable to switch to operation in the first role from the second role when the third processing entity is no longer operating in the first role, to read the message, and to process the message.
In certain embodiments, the third processing entity is further configurable to read the message, to process the message, and after processing the message, to transmit a notification to the fourth processing entity to remove the message stored at the fourth processing entity; and the fourth processing entity is further configurable to receive the notification, and in response to receiving the notification, to remove the message stored at the fourth processing entity. In certain embodiments, the first processing entity is further configurable to block control, to receive an acknowledgement indicating that the message was received at the second processing entity, and in response to receiving the acknowledgement, to unblock control. The second processing entity may be further configurable to switch to operation in the first role from the second role when the first processing entity is no longer operating in the first role, and to transmit the message for the intended destination.
In certain embodiments, the first processing entity is further configured to receive an error notification indicating that the message was not received at the third processing entity. In certain embodiments, the message is for multiple intended destinations; and the first processing entity is further configurable to transmit the message to each intended destination of the multiple intended destinations, and to transmit the message to each peer to each intended destination of the multiple intended destinations.
In one embodiment, a method comprises transmitting a message for an intended destination from a first processing entity operating in a first role, where the first processing entity is the source of the message; receiving the message at a second processing entity operating in a second role, where the message is stored at the second processing entity, and the second processing entity is a peer to the source of the message; receiving the message at a third processing entity operating in the first role, where the third processing entity is the intended destination of the message; and receiving the message at a fourth processing entity operating in the second role, where the message is stored at the fourth processing entity, and the fourth processing entity is a peer to the intended destination of the message.
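The four-entity flow described in the method above can be sketched in Python. This is a simplified illustrative model only; the `Entity` class, the `haml_send` function, and the field names are assumptions for the sketch, not part of any claimed implementation:

```python
from collections import deque

ACTIVE, STANDBY = "active", "standby"

class Entity:
    """Simplified model of one processing entity (illustrative only)."""
    def __init__(self, name, role):
        self.name, self.role = name, role
        self.pending = deque()   # copies held for retransmission or standby processing
        self.inbox = deque()     # messages delivered for the local application

def haml_send(msg, source_peer, dest, dest_peer):
    """Replicate to the source peer, then multicast to the intended
    destination and the destination peer, mirroring the receive steps
    of the method above."""
    source_peer.pending.append(msg)   # copy survives a source failover
    dest.inbox.append(msg)            # intended destination receives the message
    dest_peer.pending.append(msg)     # copy survives a destination failover
```

In this model, an application on the source calls `haml_send` once; the replication to both peers happens inside the layer, which is the point of moving HA support out of individual applications.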
In one embodiment, a network device comprises a first processing entity and a second processing entity. The first processing entity is configurable to operate in a first role and to transmit a message for an intended destination. The second processing entity is configurable to operate in a second role and to receive the message. Upon occurrence of a failure event at the first processing entity, the second processing entity is configurable to switch to operating in the first role to determine that the second processing entity is a source of the message based on the second processing entity operating in the first role, and to transmit the message to the intended destination.
In one embodiment, a network device comprises a first processing entity and a second processing entity. The first processing entity is configurable to operate in a first role, where the first processing entity is an intended destination of a message. The second processing entity is configurable to operate in a second role and to receive the message. Upon occurrence of a failure event at the first processing entity, the second processing entity is configurable to switch to operating in the first role to determine that the second processing entity is the intended destination based on the second processing entity operating in the first role, and to process the message as the intended destination.
The foregoing, together with other features and embodiments will become more apparent upon referring to the following specification, claims, and accompanying drawings.
Attached as the Appendix are example application programming interfaces (APIs) for a High Availability Application Messaging Layer (HAML) that may be implemented in accordance with embodiments of the present invention.
It should be understood that the specific embodiments described in the Appendix are not limiting examples of the invention and that some aspects of the invention might use the teachings of the Appendix while others might not. It should also be understood that limiting statements in the Appendix may be limiting as to requirements of specific embodiments and such limiting statements might or might not pertain to the claimed inventions and, therefore, the claim language need not be limited by such limiting statements.
In the following description, for the purposes of explanation, specific details are set forth in order to provide a thorough understanding of certain embodiments of the invention. However, it will be apparent that various embodiments may be practiced without these specific details. The figures and description are not intended to be restrictive.
Certain embodiments of the present invention enable application message delivery to be automatically guaranteed for all failover scenarios through use of a novel infrastructure layer that supports HA messaging. The HAML can guarantee delivery of application messages whether a failover occurs at one or both of the source and the intended destination of the message. The HAML may be used to transmit messages to one or more intended destinations. Accordingly, the HAML may be used for unicast messaging or for multicast messaging. The HAML is fully reentrant and HA aware, which refers to the awareness of the HAML of the redundancy for all processing entities within a network device to ensure hitless failover at the network device. By moving support for HA messaging from individual applications to the HAML, as a common infrastructure layer across the processing entities, the individual applications no longer need to implement additional software to explicitly support HA messaging.
The HAML guarantees delivery of an application message in a source failover scenario by automatically transmitting the message to, and storing the message at, a peer for the source of the message. The HAML transmits the message to the source peer automatically without the application needing to explicitly transmit the message to the source peer directly. If a failure event then occurs at the source, the source peer can transmit the message to the destination, ensuring delivery. Further explanations are provided below for a source, a destination, and a peer.
Similarly, the HAML guarantees delivery of an application message in a destination failover scenario by automatically transmitting the message to, and storing the message at, a peer for each of one or more intended destinations (e.g., the one or more destinations designated or specified in the message). The HAML automatically multicasts (i.e., transmits at the same time) the message to each intended destination and each destination peer without the application needing to explicitly transmit the message to the destination peers directly. If a failure event then occurs at an intended destination, the respective destination peer can process the message in lieu of processing by the affected intended destination.
In certain embodiments, the HAML may be implemented as a library interface, which may be linked to by user space applications running on a network device. In certain embodiments, messages are delivered to each destination in the same order that the messages were sent. In some embodiments, application messages sent using the HAML may be idempotent (i.e., the messages produce the same result if processed one or more times), as duplicate messages may be received by an application in the event of a failover. However, it is expected that the application would discard the duplicate messages. In other embodiments, the HAML may ensure duplicate messages are not delivered to the application. In some embodiments, errors may be reported asynchronously, for example, if message synchronization between peers is lost, or a destination is no longer able to accept messages.
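Because a failover can cause the same message to be delivered twice, an application using such a layer can discard duplicates itself. The sketch below assumes each message carries a sequence ID; that scheme is an assumption for illustration, not something the HAML specifies:

```python
class DedupReceiver:
    """Application-side duplicate filter, assuming each message carries a
    monotonically assigned sequence ID (an assumption for this sketch)."""
    def __init__(self):
        self.seen = set()
        self.processed = []

    def on_message(self, seq, payload):
        if seq in self.seen:
            return False              # duplicate after a failover: discard
        self.seen.add(seq)
        self.processed.append(payload)
        return True
```

Filtering by sequence ID also makes non-idempotent handlers safe, since each payload is processed at most once.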
In some embodiments, the source 110, the source peer 115, the destination 120, and the destination peer 125 are each a processing entity of a plurality of processing entities of network device 100. Processing entities may include, but are not limited to, physical processing units, logical processing units, or virtual processing entities. In one implementation, processing entities may include a group of one or more processing units, control circuits, and associated memory. For instance, a processing entity may be a management card or a line card of a network device. Alternatively, a processing entity may be one of multiple processing entities of a management card or a line card of a network device. In another implementation, a processing entity may include a processing unit, such as an AIM, Intel, AMD, ARM, TI, or Freescale Semiconductor, Inc. single-core or multicore processor, or an application-specific integrated circuit (ASIC) or a field programmable gate array (FPGA) running on a management card or a line card. In yet another implementation, the processing entity may include a logical processing unit within a physical processing unit. In yet another implementation, the processing entity may be a virtual processing entity or a software partitioning, such as a virtual machine, hypervisor, software process or an application running on a processing unit, such as a processor.
Each of the source 110, the source peer 115, the destination 120, and the destination peer 125 depicted in
In certain embodiments, each processing entity of the network device 100 operates in one of multiple roles. An individual processing entity may be configured or configurable to operate in one or more of those multiple roles. In some embodiments, a processing entity may be configured or configurable to retain hardware awareness, which may refer to the awareness of the role in which the processing entity is currently operating. In some embodiments, hardware awareness is supported by the message transport used by the HAML, such as a Messaging Interface (MI) layer as described in Chin.
In one embodiment, the roles of the processing entities may include an active role and a standby role of the active-standby model used to enhance the availability of the network device. According to the active-standby model, a network device may comprise two processing entities where one of the processing entities is configured or configurable to operate in an “active” mode and the other is configured or configurable to operate in a “passive” (or standby) mode. The processing entity operating in the active mode (referred to as the active processing entity) is generally configured or configurable to perform a full set of networking functions, while the processing entity operating in passive or standby mode (referred to as the passive or standby processing entity) is configured or configurable to not perform the full set of networking functions or to perform only a small subset of the functions performed by the active processing entity. Upon an event that causes the active processing entity to reboot or fail (referred to as a switchover or failover event), which may occur, for example, due to an error in the active processing entity, the passive processing entity starts to operate in active mode and starts to perform functions that were previously performed by the previous active processing entity. The previous active processing entity may start to operate in standby mode. Processing entities that are operating in active mode may thus be operating in the active role and processing entities operating in the passive or standby mode may thus be operating in the passive or standby role.
In some embodiments, the application 130 uses the HAML 140 by calling APIs implemented to perform the HAML functions. The Appendix provides example APIs for the HAML that may be implemented in accordance with an embodiment of the present invention. Example APIs are included for opening an HAML endpoint, sending messages to destination and destination peer endpoints, receiving messages, notifying the HAML of completed processing of a message, and closing of an HAML endpoint. Specific embodiments described in the Appendix are not limiting examples of the invention.
At 202, at the source 110, the application 130 generates a message and sends the message to the HAML 140, which transmits the message to the source peer 115 and blocks the application 130 running on the source 110. For example, the HAML 140 can transmit the message down the local OSI network stack of the source 110, through a bus interconnecting the processing entities of the network device 100, and up the OSI network stack of source peer 115. In some embodiments, the HAML 140 transmits the message down the local OSI network stack using an MI layer protocol as described in Chin. The application 130 may cause the HAML 140 to transmit the message, for example, by calling the haml_sendmsg( ) API of the Appendix. In some embodiments, the source 110 is operating in a first role of multiple roles. For example, the source 110 may be operating in an active role. In some embodiments, the message includes information indicative of a role or state or function performed by the destination 120.
At 204, at the source peer 115, the HAML 140 receives the message and stores the message. In some embodiments, the message is stored in a pending queue of the source peer 115. The message is stored at the source peer 115 to ensure that a copy of the message exists for transmission in the event that a failure event occurs at the source 110 before the source 110 can transmit the message to the destination 120. In some embodiments, the source peer 115 is operating in a second role of multiple roles. For example, the source peer 115 may be operating in a passive or standby role, wherein the source peer 115 can switch to an active role upon a failure event occurring at its peer, the source 110.
In some embodiments, messages pending in the HAML 140 running on the source 110 may be synchronized to the HAML 140 running on the source peer 115 when the source peer 115 first comes online, e.g., after a reboot. In some embodiments, the source peer 115 will not process any messages until this reconciliation with the source 110 is completed in order to avoid transmitting messages out of order. If messages pending in the HAML 140 running on the source 110 cannot be synchronized to the HAML 140 running on the source peer 115, sync may be declared lost. When this occurs, sync may be restored, for example, by rebooting the source peer 115.
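The reconciliation behavior above can be sketched as follows. This is a minimal illustrative model; the dictionary fields and the `can_sync` flag are assumptions for the sketch:

```python
def reconcile(source_pending, peer, can_sync=True):
    """Sync the source's pending messages to a peer that just came online.
    The peer must not process anything until reconciliation completes, so
    messages are not handled out of order. If syncing fails, sync is
    declared lost and may be restored by rebooting the peer."""
    if not can_sync:
        peer["in_sync"] = False       # lost sync: reboot the peer to recover
        return peer
    peer["pending"] = list(source_pending)
    peer["in_sync"] = True            # peer may now process messages
    return peer
```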
At 206, the source peer 115 transmits an acknowledgment to the source 110 indicating that the message was received at the source peer 115. In some embodiments, the acknowledgement is sent by the HAML 140 running on the source peer 115. In other embodiments, the acknowledgment is sent by a different networking layer, e.g., an MI layer as described in Chin.
At 208, at the source 110, the HAML 140 receives the acknowledgment transmitted at 206, and in response, unblocks (i.e., returns control to) the application 130. In some embodiments, this is an asynchronous send of the message, in that control can be returned to the application 130 running on the source 110 without waiting for the destination 120 to acknowledge receiving the message. Alternatively, if the application 130 needs to know that the destination 120 received the message, the send may be synchronous, wherein the HAML 140 will not unblock (i.e., return control to) the application 130 until the HAML 140 receives an acknowledgement that the destination 120 received the message.
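The difference between the asynchronous and synchronous sends at 208 can be modeled as follows. Acknowledgments are simulated here; in the actual layer they arrive over the network stack, and the function name and parameters are assumptions for this sketch:

```python
def haml_send_blocking(msg, source_peer_queue, dest_acks, synchronous=False):
    """Models when control returns to the sending application.

    Asynchronous send: unblock as soon as the source peer has stored the
    message (steps 202-208). Synchronous send: additionally wait for the
    destination's acknowledgment (step 212 of the flow)."""
    source_peer_queue.append(msg)             # replicate to the source peer
    peer_acked = msg in source_peer_queue     # simulated ack from the peer
    if not synchronous:
        return peer_acked                     # unblock without waiting on dest
    return peer_acked and msg in dest_acks    # sync send waits for dest ack
```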
In some embodiments, the application 130 running on the source 110 can batch messages. All messages except for the final message of the batch can be sent as non-blocking. Following transmission of each message except for the final message, control will be returned to the application 130 without waiting for any acknowledgements, including acknowledgment that the source peer 115 received the message. Only the final message of the batch needs to receive the acknowledgement transmitted at 206 indicating that the message was received at the source peer 115. Since messages are guaranteed to be delivered in order, acknowledgment received for the final message implies that all other messages of the batch have been received. This provides the benefit of reducing overall latencies at the source 110 and allowing the source 110 to synchronize at key points.
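The batching rule can be sketched as follows. The function name and the list-based queue are assumptions for illustration; the key property is that in-order delivery lets one acknowledgment stand in for the whole batch:

```python
def send_batch(msgs, source_peer_queue):
    """All but the last message are fire-and-forget; only the final send
    waits for the source peer's acknowledgment. Because delivery is
    in-order, that single ack implies the entire batch reached the peer."""
    for m in msgs[:-1]:
        source_peer_queue.append(m)        # non-blocking sends
    source_peer_queue.append(msgs[-1])     # blocking send of the final message
    final_acked = source_peer_queue[-1] == msgs[-1]   # simulated ack
    return final_acked                     # implies all msgs are at the peer
```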
At 210, at the source 110, the HAML 140 multicasts (i.e., transmits at the same time) the message to both the destination 120 and the destination peer 125; and the destination 120 and the destination peer 125 receive the message. The destination peer 125 stores the message (e.g., in a pending queue of the destination peer 125) to ensure that a copy of the message exists for processing in the event that a failure event occurs at the destination 120 before the destination 120 can process the message. In some embodiments, the HAML 140 multicasts the message using an MI layer as described in Chin. In some embodiments, the HAML 140 transmits the message to the source peer 115, the destination 120, and the destination peer 125 simultaneously.
In some embodiments, the message includes information indicative of the role in which the intended (e.g., designated) destination of the message is operating. For example, the application 130 may specify that the message is to be transmitted to both the active destination (e.g., destination 120 operating in a first role, the active role) and the passive or standby destination (e.g., the peer destination 125 operating in a second role, the passive or standby role). Alternatively, the application 130 may specify that the message is only to be transmitted to the active destination (e.g., destination 120). In some embodiments, the application 130 running on the source 110 intends the message to be sent to multiple destinations, wherein at 210, the HAML 140 multicasts the message to the multiple intended (e.g., designated) destinations (e.g., multiple destinations 120 not shown in
At 212, the destination 120 and the destination peer 125 transmit acknowledgments to the source 110 indicating that the message was received at the destination 120 and the destination peer 125, respectively. In some embodiments, the acknowledgements are transmitted by the HAML 140 running on the destination 120 and the destination peer 125. In other embodiments, the acknowledgments are transmitted by a different networking layer, e.g., the MI layer described in Chin. In some embodiments, a single acknowledgment is transmitted to the source 110 to indicate that the message was received at both the destination 120 and the destination peer 125.
In some embodiments, messages that are not yet processed by the application 130 running on the destination 120 may be synchronized to the HAML 140 running on the destination peer 125 when the destination peer 125 first comes online, e.g., after a reboot. In some embodiments, the destination peer 125 will not process any messages until this reconciliation with the destination 120 is completed in order to avoid receiving messages out of order. If messages that are not yet processed by the application 130 running on the destination 120 cannot be synchronized to the HAML 140 running on the destination peer 125, sync may be declared lost. When this occurs, sync may be restored, for example, by rebooting the destination peer 125.
In some embodiments, if the destination 120 and the destination peer 125 do not receive the message multicast at 210 and/or do not transmit acknowledgments to the source 110 indicating that the message was received, the HAML 140 running on the source 110 may transmit an error notification to the application 130 indicating that an error occurred. The error notification may be transmitted when the message cannot be delivered to any of one or more destinations or any of the peers to the one or more destinations. An error may occur, for example, when the receive queue of a destination is full or the destination is experiencing congestion. A slow receiver can cause this error to occur. In some embodiments, the HAML 140 receives backpressure notification (e.g., from an MI layer described in Chin) if a destination is experiencing congestion. Failure events may also have occurred at both the destination 120 (e.g., the active processing entity) and the destination peer 125 (e.g., the standby processing entity). An error may also occur if an intended (e.g., designated) destination of the message does not exist. The error notification may include information identifying the destination at which the message was not received and information identifying the type of error. The error notification may be transmitted asynchronously with respect to the transmission of the original message.
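The error cases above can be sketched as a multicast that reports per-destination failures. The representation of destinations and the queue limit are assumptions for this illustration:

```python
def multicast_with_errors(msg, destinations, queue_limit=4):
    """Attempt delivery to each destination; return (delivered, errors),
    where each error identifies the destination and the failure type, as
    the asynchronous error notification described above does."""
    delivered, errors = [], []
    for name, queue in destinations.items():
        if queue is None:
            errors.append((name, "destination does not exist"))
        elif len(queue) >= queue_limit:
            errors.append((name, "receive queue full"))   # slow receiver
        else:
            queue.append(msg)
            delivered.append(name)
    return delivered, errors
```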
At 214, at the source 110, the HAML 140 receives the acknowledgments transmitted at 212, and in response, transmits a notification to the source peer 115 to remove the message at the source peer 115; and at the source peer 115, the HAML 140 receives the notification to remove the message. Once the acknowledgments are received indicating that the message was safely delivered, the message no longer needs to be stored for possible retransmission by the source peer 115. With a synchronous send, the HAML 140 running on the source 110 unblocks the application 130 when it receives the acknowledgments transmitted at 212.
At 216, at the source peer 115, the HAML 140, in response to receiving the notification, removes the message stored at the source peer 115. The sending of the message is complete at this point, and the message will not be resent if a source failover occurs. In some embodiments, if the source peer 115 is also an intended destination of the message, the HAML 140 will send the message to the application 130 to be read and processed. In some embodiments, the application 130 running on the source peer 115 can receive, read, and process the message at any time after the message is received by the HAML 140 at 204.
At 218, at the destination 120, the HAML 140 sends the message to the application 130, where the message is read and processed. After the application 130 has completed processing the message, the application 130 notifies the HAML 140 that processing is complete. In some embodiments, any operations to synchronize the destination peer 125 with the destination 120 that may be triggered by the message need to be completed by the application 130 before the HAML 140 is notified that message processing is complete. The application 130 may notify the HAML 140 that processing is complete, for example, by calling the haml_msgdone( ) API of the Appendix.
At 220, in response to being notified that message processing is complete, the HAML 140 running on the destination 120 transmits a notification to the destination peer 125 to remove the message stored at the destination peer 125; and at the destination peer 125, the HAML 140 receives the notification to remove the message. Once processing of the message is completed at the destination 120, the message no longer needs to be stored for possible processing by the destination peer 125. In some embodiments, messages can be marked as not needing the application 130 running on the destination 120 to notify the HAML 140 that message processing is complete. For example, notification that the HAML 140 has completed message processing may not be needed in full destination HA messaging mode, which is described further below. In this mode, the destination 120 and the destination peer 125 are both intended destinations of the message, and each will process the message independently of the other.
At 222, at the destination peer 125, the HAML 140, in response to receiving the notification, removes the message stored at the destination peer 125. In some embodiments, if the destination peer 125 is also an intended destination of the message, the HAML 140 may send the message to the application 130 to be read and processed. In some embodiments, the application 130 running on the destination peer 125 can receive, read, and process the message once the HAML 140 running on the destination peer 125 receives the message, and does not need to wait for notification of completed message processing by the destination 120. This may occur, for example, when operating in full destination HA messaging mode, where the destination 120 and the destination peer 125 process the message independently of each other.
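Steps 218 through 222 can be condensed into one sketch. The structure of the peer and the `full_destination_ha` flag are assumptions for illustration; the `haml_msgdone( )`-style completion notification is named in the Appendix:

```python
def deliver_and_complete(msg, dest_app, dest_peer, full_destination_ha=False):
    """The destination's application processes the message; once processing
    is complete (a haml_msgdone-style notification), the destination peer
    drops its stored copy. In full destination HA mode the peer also
    processes the message independently."""
    dest_peer["pending"].append(msg)      # stored in case of dest failover
    dest_app.append(msg)                  # destination reads and processes
    if full_destination_ha:
        dest_peer["app"].append(msg)      # peer processes independently
    dest_peer["pending"].remove(msg)      # processing complete: drop the copy
    return dest_peer
```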
Although a failover at the source 110 or the destination 120 is not depicted in
An example is now provided in which a failure event occurs at the source 110.
At 402, at the source 110, the application 130 generates a message and sends the message to the HAML 140, which transmits the message to the source peer 115 and blocks the application 130.
At 404, at the source peer 115, the HAML 140 receives the message and stores the message. The message is stored at the source peer 115 to ensure that a copy of the message exists for transmission in the event that a failure event occurs at the source 110 before the source 110 can transmit the message to the destination 120. If a failure occurs at the source 110 before the message has been synced (i.e., received and stored by the source peer 115), the message is lost, and the application 130 should consider the message as not being transmitted. However, the application 130 should not assume that the destination 120 did not receive the message. If a source failover has not yet occurred, and the HAML 140 stores the message at the source peer 115 (e.g., in a pending queue), delivery of the message is guaranteed from this point onwards.
At 406, the source peer 115 transmits an acknowledgment to the source 110 indicating that the message was received at the source peer 115. A failure event at the source 110 may occur before the source peer 115 transmits this acknowledgment at 406. Thus, because this step may not occur before the source failover, the step is depicted in
At 408, at the source 110, the HAML 140 receives the acknowledgment transmitted at 406, and in response, unblocks the application 130. Like 406, a failure event at the source 110 may occur before this step is performed. Thus, because this step may not occur before the source failover, the step is depicted in
At 410, at the source 110, the HAML 140 multicasts (i.e., transmits at the same time) the message to both the destination 120 and the destination peer 125; and the destination 120 and the destination peer 125 receive the message. The destination peer 125 stores the message. Like 406 and 408, a failure event at the source 110 may occur before this step is performed, and thus, the step is depicted in
At 412, the source 110 has a failure event. When this occurs, the source 110, which may have previously operated in a first role (e.g., an active role), may no longer operate in that first role. In some embodiments, the source 110 then switches to a second role (e.g., a passive or standby role).
At 414, the source peer 115 switches role to act as the new source for the message. For example, the source peer 115 may have previously operated in a second role (e.g., the passive or standby role), but upon the failure event occurring at the source 110, the source peer 115 switches to operate in the first role (e.g., the active role), as the new source.
At 416, at the source peer 115 now acting as the new source, the HAML 140 multicasts (i.e., transmits at the same time) the message to both the destination 120 and the destination peer 125; and the destination 120 and the destination peer 125 receive the message. In some embodiments, the application 130 is idempotent and can properly handle duplicate messages if they are received, for example, if the failover occurs after 410 but before step 212 of
At 418, the destination 120 and the destination peer 125 transmit acknowledgments to the source peer 115, as the new source, indicating that the message was received at the destination 120 and the destination peer 125, respectively. The destination peer 125 stores the message to ensure that a copy of the message exists for processing in the event that a failure event occurs at the destination 120 before the destination 120 can process the message.
From this point, the process flow can continue on from step 218 through step 222 of
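The source-side sequence above (blocking send, sync to the source peer, multicast, and re-multicast by the peer after a source failover) can be sketched in Python. The class and method names here (HamlSource, Node, pending, and so on) are illustrative assumptions for exposition, not identifiers from the HAML itself.

```python
# Illustrative sketch of the source-side HA send sequence described above.
# All names here are hypothetical; reference numerals track the steps in the text.

class Node:
    def __init__(self):
        self.pending = []   # pending queue of stored (synced) messages
        self.inbox = []     # messages delivered for processing

    def store(self, msg):
        # Steps 404/418: keep a copy so delivery/processing survives a failover.
        self.pending.append(msg)
        return True         # acknowledgment back to the sender (step 406)

    def deliver(self, msg):
        self.inbox.append(msg)

    def resend_as_new_source(self, destinations):
        # Steps 414/416: on source failover, the source peer becomes the new
        # source and re-multicasts every pending message it has stored.
        for dest, dest_peer in destinations:
            for msg in self.pending:
                dest.deliver(msg)
                dest_peer.store(msg)


class HamlSource:
    def __init__(self, peer, destinations):
        self.peer = peer                  # standby source peer (e.g., 115)
        self.destinations = destinations  # list of (destination, destination peer)
        self.blocked = False

    def send(self, msg):
        # Step 402: the application hands the message to the HAML and is blocked.
        self.blocked = True
        # Steps 404/406: sync the message to the source peer; wait for its ack.
        acked = self.peer.store(msg)
        # Step 408: unblock the application once the peer holds a copy.
        if acked:
            self.blocked = False
        # Step 410: multicast to each destination and its destination peer.
        for dest, dest_peer in self.destinations:
            dest.deliver(msg)
            dest_peer.store(msg)
```

Note that if the source fails after the peer's acknowledgment, the peer's pending queue is what guarantees delivery; the duplicate delivery produced by the peer's re-multicast is the case the text addresses by requiring an idempotent application.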
Not only can the message source fail over; the message destination can also fail over. The HAML handles destination failover by automatically multicasting messages to both the intended destination (e.g., the active destination) and the destination peer (e.g., the passive or standby destination). Thus, the HAML keeps the message queue of the destination peer synchronized with the message queue of the destination. When a destination failover occurs, the receive queue of the destination peer is fully synchronized, and the applications on the destination peer, now the new destination, can begin processing messages without needing to take any other actions, such as requesting retransmission of any messages. If the message is intended for multiple destinations, the message may be multicast to each of those intended destinations (e.g., the active destinations) and to each peer of those intended destinations (e.g., the passive or standby destinations).
An example is now provided in which a failure event occurs at the destination 120.
At 602, at the source 110, the application 130 generates a message and sends the message to the HAML 140, which transmits the message to the source peer 115 and blocks the application 130. At 604, at the source peer 115, the HAML 140 receives and stores the message. At 606, the source peer 115 transmits an acknowledgment to the source 110 indicating that the message was received at the source peer 115. At 608, at the source 110, the HAML 140 receives the acknowledgment transmitted at 606, and in response, unblocks the application 130.
At 610, at the source 110, the HAML 140 multicasts (i.e., transmits at the same time) the message to both the destination 120 and the destination peer 125; and the destination 120 and the destination peer 125 receive the message. The destination peer 125 stores the message to ensure that a copy of the message exists for processing in the event that a failure event occurs at the destination 120 before the destination 120 can process the message. If a destination failover has not yet occurred, and the HAML 140 stores the message at the destination peer 125 (e.g., in a pending queue), processing of the message is guaranteed from this point onwards.
At 612, the destination 120 and the destination peer 125 transmit acknowledgments to the source 110 indicating that the message was received at the destination 120 and the destination peer 125, respectively. At 614, at the source 110, the HAML 140 receives the acknowledgments transmitted at 612, and in response, transmits a notification to the source peer 115 to remove the stored message; and at the source peer 115, the HAML 140 receives the notification to remove the message. At 616, at the source peer 115, the HAML 140, in response to receiving the notification, removes the stored message. In some scenarios, the destination failure event may occur before one or more of steps 612, 614, and 616. Thus, steps 612, 614, and 616 are depicted in
At 618, the destination 120 has a failure event. When this occurs, the destination 120, which may have previously operated in a first role (e.g., an active role), may no longer operate in that first role. In some embodiments, the destination 120 then switches to a second role (e.g., a passive or standby role).
At 620, the destination peer 125 switches role to act as the new destination for the message. For example, the destination peer 125 may have previously operated in a second role (e.g., the passive or standby role), but upon the failure event occurring at the destination 120, the destination peer 125 switches to operate in the first role (e.g., the active role), as the new destination.
At 622, at the destination peer 125 now acting as the new destination, the HAML 140 sends the message to the application 130, where the message is read and processed. After the application 130 has completed processing the message, the application 130 may notify the HAML 140 that processing is complete.
In some embodiments, the application 130 is idempotent and can properly handle duplicate messages if they are received. For example, the synchronization message from the destination 120, now the old destination, may not have been received before the failover occurred. In some embodiments, the HAML 140 may prevent duplicate messages from being delivered to the application 130.
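One common way to realize the duplicate suppression mentioned above is for the receiver to tag each message with a (source, sequence number) pair and discard repeats. This sketch is an assumption about how such deduplication could look, not the HAML's documented mechanism; the DedupReceiver name and its fields are hypothetical.

```python
# Hypothetical receiver-side deduplication: drop any message whose
# (source, sequence) pair has already been processed, so the application
# never sees the duplicate produced by a post-failover re-multicast.

class DedupReceiver:
    def __init__(self):
        self.seen = set()       # (source_id, seq) pairs already accepted
        self.processed = []     # payloads handed to the application

    def on_message(self, source_id, seq, payload):
        key = (source_id, seq)
        if key in self.seen:
            return False        # duplicate; suppressed before the application
        self.seen.add(key)
        self.processed.append(payload)
        return True
```

With such suppression in the messaging layer, the application need not itself be idempotent for this failover case.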
In some embodiments, the HAML may provide multiple message delivery modes to facilitate different messaging requirements of applications running on processing entities of a network device. Modes may be provided for different levels of HA messaging support in the sending of messages, and different levels of HA messaging support in the delivering of messages.
A first mode, which may be described as providing source HA messaging with passive destination HA messaging, is generally described in the embodiments above. In this mode, an application message is delivered to the source peer before the source is unblocked. The message is multicast to one or more destinations (e.g., active destinations) and the peers of the one or more destinations (e.g., passive or standby destinations). Only the one or more destinations process the message. That is, the one or more destination peers do not process the message unless a destination failover occurs. When the HAML is notified that the processing of the message is completed on a destination, the stored message will be removed from the respective destination peer. It is expected that a destination will perform any needed HA messaging synchronization with its destination peer.
A second mode may be described as providing source HA messaging with full destination HA messaging. In this mode, messages are processed at the one or more destinations and the peers of the one or more destinations. As with the first mode, an application message is delivered to the source peer before the source is unblocked, and the message is multicast to all the destinations and their peers. The destination and its destination peer will process the message independently of each other. In this mode, the HAML does not need to be notified that the processing of the message is completed, because the message is not stored at the destination peer.
A third mode may be described as providing source HA messaging without destination HA messaging. In this mode, a message is transmitted only to one or more destinations (e.g., active destinations) but not to any peers of those one or more destinations (e.g., passive or standby destinations). As with the first mode, an application message is delivered to the source peer before the source is unblocked. However, the message is received at one or more destinations, while the one or more destination peers will not receive the message. In this mode, the HAML does not need to be notified that the processing of the message is completed, because the message is not stored at any destination peers.
A fourth mode may be described as not providing source HA messaging while providing passive destination HA messaging. In this mode, an application message is not delivered to the source peer. The message is multicast to one or more destinations (e.g., active destinations) and the peers of the one or more destinations (e.g., passive or standby destinations). The source is unblocked after the message is transmitted to the destinations. Only the one or more destinations process the message; the one or more destination peers do not process the message unless a destination failover occurs. When the HAML is notified that the processing of the message is completed on a destination, the stored message will be removed from the respective destination peer. It is expected that a destination will perform any needed HA messaging synchronization with its destination peer.
A fifth mode may be described as not providing source HA messaging while providing full destination HA messaging. In this mode, an application message is not delivered to the source peer. The message is multicast to one or more destinations (e.g., active destinations) and the peer(s) of the one or more destinations (e.g., passive or standby destinations). The source is unblocked after the message is transmitted to the destinations. The destination and its destination peer will process the message independently of each other. In this mode, the HAML does not need to be notified that the processing of the message is completed, because the message is not stored at the destination peer.
A sixth mode may be described as disabling both source HA messaging and destination HA messaging. In this mode, an application message is not delivered to the source peer or to any destination peers (e.g., passive or standby destinations). Applications may use this mode to transmit non-critical messages to one or more destinations. The source is unblocked after the message is transmitted to the one or more destinations. Only the one or more destinations receive and process the message. In this mode, the HAML does not need to be notified that the processing of the message is completed, because the message is not stored at any destination peers.
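The six modes above can be summarized as combinations of two independent settings: whether the message is synced to the source peer before the source is unblocked, and whether destination peers receive the message passively, process it fully, or do not receive it at all. The enums and mode table below are an illustrative encoding of that summary; the constant names are not drawn from the HAML.

```python
# Hypothetical encoding of the six delivery modes as two orthogonal settings.
from enum import Enum

class SourceHA(Enum):
    ON = "message synced to source peer before the source is unblocked"
    OFF = "source unblocked after transmit; no source-peer copy"

class DestHA(Enum):
    PASSIVE = "destination peers store the message; process only on failover"
    FULL = "destination peers process the message independently"
    NONE = "destination peers never receive the message"

# Mode numbering follows the text above.
MODES = {
    1: (SourceHA.ON,  DestHA.PASSIVE),
    2: (SourceHA.ON,  DestHA.FULL),
    3: (SourceHA.ON,  DestHA.NONE),
    4: (SourceHA.OFF, DestHA.PASSIVE),
    5: (SourceHA.OFF, DestHA.FULL),
    6: (SourceHA.OFF, DestHA.NONE),
}

def needs_completion_notice(mode):
    # Only passive destination HA stores the message at the destination peer,
    # so only those modes require notifying the HAML when processing completes.
    return MODES[mode][1] is DestHA.PASSIVE
```

This view makes the pattern in the text explicit: modes 1 and 4 are the only ones in which the HAML must be told that processing is complete, because they are the only ones that leave a stored copy at a destination peer.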
In the embodiment depicted in
The slots on the chassis of network device 700 may have identifiers. For example, the slots occupied by the line cards of network device 700 are identified as LC slot 1, LC slot 2, and LC slot 3. In one implementation, each card of the network device 700 is associated with a unique slot identifier. For example, line card 706 is associated with a unique slot identifier LC slot 1. Line card 706 may have multiple processing entities, such as a first processing entity 712 and a second processing entity 714 depicted in
Network device 700 is configured or configurable to receive and forward data using ports. Upon receiving a data packet via an input port, network device 700 is configured or configurable to determine an output port to be used for transmitting the data packet from the network device 700 to facilitate communication of the packet to another network device or network. Within network device 700, the packet is forwarded from the input port to the determined output port and transmitted from network device 700 using the output port. In one embodiment, forwarding of packets from an input port to an output port is performed by one or more line cards. Line cards represent the data forwarding plane of network device 700. Each line card may comprise one or more processing entities that are each configured or configurable to perform forwarding of data packets. A processing entity on a line card may also be referred to as a line card processing entity. Each line card processing entity may have an associated packet processor (e.g., a processor or a core) and associated memories or portions of memories to facilitate the packet forwarding process. Since processing performed by a packet processor needs to be performed at a high packet rate in a deterministic manner, the packet processor is generally a dedicated hardware device configured to perform the processing. In one embodiment, the packet processor is a programmable logic device such as an FPGA. The packet processor may also be an ASIC.
The management cards 702 and 704 are configured or configurable to perform management and control functions for network device 700 and thus represent the management plane for network device 700. In one embodiment, management cards 702 and 704 are communicatively coupled to line cards via bus 724 and include software and hardware for controlling various operations performed by the line cards. In one embodiment, more than one management card (e.g., management cards 702 and 704) may be used, with each management card controlling one or more line cards. In alternative embodiments, a single management card may be used for all the line cards in a network device.
The management cards 702 and 704 may each comprise one or more processing entities that are each configured or configurable to perform functions performed by the management card and associated memory. Each processing entity of a management card may have an associated processor (also referred to as a management processor) and associated memories or portions of memories to perform management and control functions. In one embodiment, a management processor is a general purpose single-core or multicore microprocessor such as ones provided by AIM, Intel, AMD, ARM, TI, Freescale Semiconductor, Inc., and the like, that operates under the control of software stored in associated memory or portions of memory.
In the embodiment depicted in
The volatile memory 804 of
One or more of the management cards 702 and 704 and/or line cards 706, 708, and 710 of network device 700 of
Embodiments of the invention enable reliable communication between the various processing entities within the network device 700 using the HAML protocol. In one exemplary configuration of network device 700, the network device 700 has an active management card 702 and a passive or standby management card 704. As shown in
During normal operation of the network device 700, one of the two management cards 702 and 704 operates in an active role while the other management card operates in a passive or standby role. When operating in active mode, a management card is referred to as the active management card and is responsible for performing the control and forwarding functions for network device 700. The processing entity of the active management card operates as the active processing entity. When operating in standby mode, a management card is referred to as the standby management card and does not perform, or performs just a subset of, the control and forwarding functions performed by the active management card. The processing entity of the standby management card operates as the standby processing entity. In the embodiment depicted in
In other embodiments, the management cards 702 and 704 each comprise two processing entities, wherein one processing entity at each of the management cards 702 and 704 operates in active mode, while the other processing entity at each of the management cards 702 and 704 operates in passive or standby mode. A failover or switchover occurring in one of the two management cards 702 or 704 would cause the standby processing entity of the affected management card to become the active processing entity, and cause the active processing entity of the affected management card to become the standby processing entity.
Each of the line cards 706, 708, and 710 of the network device 700 has two processing entities, although line cards may have fewer or more processing entities in other embodiments. When operating in active mode, a processing entity of a line card, referred to herein as an active processing entity, is responsible for providing packet forwarding services for network device 700. When operating in passive or standby mode, a processing entity of the line card, referred to herein as a passive or standby processing entity, does not perform, or performs just a subset of, the packet forwarding services performed by the active processing entity of the line card. During normal operation of the network device 700, each of the line cards 706, 708, and 710 has an active processing entity and a standby processing entity. In the embodiment depicted in
In other embodiments, the line cards of network device 700 each comprise only one processing entity, wherein the one processing entity at each line card operates in either the active mode or the standby mode. The line card would operate as an active line card or a standby line card, respectively. For full redundancy, each line card would need a dedicated peer line card to handle failover or switchover. A failover or switchover occurring in an active line card would cause the peer line card to become the active line card, and cause the previously active line card to become the new standby line card. In some embodiments, both a line card and its peer line card may be associated with a common slot identifier, e.g., LC slot 1. This allows the HAML to multicast messages to both the line card and its peer line card using the common slot identifier.
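Addressing a line card and its peer through a common slot identifier, as described above, amounts to a one-to-many lookup at send time. The table and card names below are a hypothetical sketch of that fan-out, not the device's actual addressing scheme.

```python
# Hypothetical slot-identifier fan-out: one common slot id resolves to both
# the active line card and its standby peer, so a single multicast reaches both.

SLOT_MEMBERS = {
    "LC slot 1": ["lc1-active", "lc1-standby"],
    "LC slot 2": ["lc2-active", "lc2-standby"],
    "LC slot 3": ["lc3-active", "lc3-standby"],
}

def multicast(slot_id, msg, send):
    # Fan a single application message out to every card behind the slot id.
    for card in SLOT_MEMBERS[slot_id]:
        send(card, msg)
```

Because the sender addresses only the slot identifier, it needs no knowledge of which member card is currently active, which is what lets a failover remain transparent to message sources.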
During normal operations, the active processing entities of the network device 700 are configured or configurable to manage the hardware resources of network device 700 and perform a set of networking functions. During this time, the standby processing entities may be passive and may not perform the set of functions performed by the active processing entities. When a failover or switchover occurs at an active processing entity, the standby processing entity for that active processing entity becomes the active processing entity and takes over management of the hardware resources and performance of the set of functions previously handled by the formerly active processing entity; as a result, the set of functions continues to be performed. The previously active processing entity may then become the standby processing entity and be ready for a subsequent failover or switchover of the new active processing entity. For example, for the embodiment depicted in
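The role exchange just described amounts to swapping the active and standby labels between a processing entity and its peer. A minimal sketch, with hypothetical class and function names:

```python
# Hypothetical role-swap on failover: the standby peer takes over the active
# role, and the formerly active entity becomes the standby for the new active.

ACTIVE, STANDBY = "active", "standby"

class ProcessingEntity:
    def __init__(self, name, role):
        self.name = name
        self.role = role
        self.peer = None

def pair(a, b):
    # Associate an active entity with its standby peer (and vice versa).
    a.peer, b.peer = b, a

def failover(failed):
    # The standby peer becomes active; the failed entity, once recovered,
    # stands by for a subsequent failover of the new active entity.
    failed.peer.role = ACTIVE
    failed.role = STANDBY
```

The same swap serves both involuntary failovers and administrator-initiated switchovers; only the triggering event differs.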
A switchover may be caused by various events, including anticipated or voluntary events. An anticipated event is typically a voluntary, user-initiated event intended to cause the active processing entity to yield control to the standby processing entity. An instance of such an event is a command received from a network administrator to perform a switchover. There are various situations in which a network administrator may deliberately cause a switchover, such as when software on the management card and line card processing entities is to be upgraded to a newer version. As another example, a switchover may be initiated by the system administrator upon noticing performance degradation on the active processing entity or upon noticing that software executed by the processor of the active processing entity is malfunctioning. In these cases, the network administrator may issue a command that causes a switchover, with the expectation that problems associated with the current active processing entity will be remedied when the standby processing entity becomes the new active processing entity. A command to cause a switchover may also be initiated as part of scheduled maintenance. Various interfaces, including a command line interface (CLI), may be provided for initiating a voluntary switchover.
A failover may be caused by various different events, including unanticipated or involuntary events. For example, a failover may occur due to some critical failure in the active processing entity, such as a problem with the software executed by the processor of the active processing entity, failure in the operating system loaded by the active processing entity, hardware-related errors on the active processing entity or other router component, and the like.
In one embodiment, network device 700 is able to perform a failover or switchover without interrupting the networking services offered by network device 700. Network device 700 is able to continue providing networking services at line rates without impact (e.g., without experiencing any packet loss) as a result of, or while performing, a failover or switchover.
The network device 700 of
Certain embodiments of the invention may implement a novel transport layer protocol, referred to as the HAML 918 protocol in this disclosure, and depicted in
Out of these layers from the OSI network stack 900, the transport layer 908 provides the functional and procedural means of end-to-end communication services for applications. One well-known transport layer protocol from the OSI network stack 900 is the Transmission Control Protocol (TCP). TCP is a reliable connection-oriented transport service that provides end-to-end reliability, re-sequencing, and flow control.
Embodiments of the invention describe the HAML protocol, an alternate implementation of the transport layer protocol. As shown in
Various embodiments described above can be realized using any combination of dedicated components and/or programmable processors and/or other programmable devices. The various embodiments may be implemented only in hardware, or only in software, or using combinations thereof. For example, the software may be in the form of instructions, programs, etc. stored in a computer-readable memory and may be executed by a processing unit, where the processing unit is a processor, a collection of processors, a core of a processor, a set of cores, etc. In certain embodiments, the various processing described above, including the processing depicted in the flowcharts in
The various processes described herein can be implemented on the same processor or different processors in any combination, with each processor having one or more cores. Accordingly, where components or modules are described as being adapted to, configured to, or configurable to perform a certain operation, such configuration can be accomplished, e.g., by designing electronic circuits to perform the operation, by programming programmable electronic circuits (such as microprocessors) to perform the operation, by providing software or code instructions that are executable by the component or module (e.g., one or more processors) to perform the operation, or any combination thereof. Processes can communicate using a variety of techniques including but not limited to conventional techniques for interprocess communication, and different pairs of processes may use different techniques, or the same pair of processes may use different techniques at different times. Further, while the embodiments described above may make reference to specific hardware and software components, those skilled in the art will appreciate that different combinations of hardware and/or software components may also be used and that particular operations described as being implemented in hardware might also be implemented in software or vice versa.
The various embodiments are not restricted to operation within certain specific data processing environments, but are free to operate within a plurality of data processing environments. Additionally, although embodiments have been described using a particular series of transactions, this is not intended to be limiting.
Thus, although specific invention embodiments have been described, these are not intended to be limiting. Various modifications and equivalents are within the scope of the following claims.
The present application is a non-provisional of and claims the benefit and priority under 35 U.S.C. 119(e) of U.S. Provisional Application No. 61/704,281 filed Sep. 21, 2012, entitled HA APPLICATION MESSAGING LAYER, the entire contents of which are incorporated herein by reference for all purposes. The present application is related to U.S. patent application Ser. No. 13/827,641, filed on Mar. 14, 2013, and entitled ROLE BASED MULTICAST MESSAGING INFRASTRUCTURE, naming Chin et al. (hereinafter “Chin”), the entirety of which is herein incorporated by reference for all purposes.
20070169084 | Frank et al. | Jul 2007 | A1 |
20070174309 | Pettovello | Jul 2007 | A1 |
20070189213 | Karino et al. | Aug 2007 | A1 |
20080022410 | Diehl | Jan 2008 | A1 |
20080068986 | Maranhao et al. | Mar 2008 | A1 |
20080082810 | Cepulis et al. | Apr 2008 | A1 |
20080089238 | Fahmy | Apr 2008 | A1 |
20080120518 | Ritz et al. | May 2008 | A1 |
20080137528 | O'Toole et al. | Jun 2008 | A1 |
20080159325 | Chen et al. | Jul 2008 | A1 |
20080165750 | Kim | Jul 2008 | A1 |
20080189468 | Schmidt et al. | Aug 2008 | A1 |
20080201603 | Ritz et al. | Aug 2008 | A1 |
20080212584 | Breslau et al. | Sep 2008 | A1 |
20080222633 | Kami | Sep 2008 | A1 |
20080225859 | Mitchem | Sep 2008 | A1 |
20080225874 | Lee | Sep 2008 | A1 |
20080243773 | Patel et al. | Oct 2008 | A1 |
20080244222 | Supalov et al. | Oct 2008 | A1 |
20080250266 | Desai et al. | Oct 2008 | A1 |
20080282112 | Bailey et al. | Nov 2008 | A1 |
20090028044 | Windisch et al. | Jan 2009 | A1 |
20090031166 | Kathail et al. | Jan 2009 | A1 |
20090036152 | Janneteau et al. | Feb 2009 | A1 |
20090037585 | Miloushev et al. | Feb 2009 | A1 |
20090040989 | da Costa et al. | Feb 2009 | A1 |
20090049537 | Chen et al. | Feb 2009 | A1 |
20090051492 | Diaz et al. | Feb 2009 | A1 |
20090052412 | Kumar et al. | Feb 2009 | A1 |
20090054045 | Zakrzewski et al. | Feb 2009 | A1 |
20090055831 | Bauman et al. | Feb 2009 | A1 |
20090059888 | Nelson et al. | Mar 2009 | A1 |
20090080428 | Witkowski et al. | Mar 2009 | A1 |
20090086622 | Ng | Apr 2009 | A1 |
20090086748 | Wang et al. | Apr 2009 | A1 |
20090092135 | Simmons et al. | Apr 2009 | A1 |
20090094481 | Vera et al. | Apr 2009 | A1 |
20090106409 | Murata | Apr 2009 | A1 |
20090185506 | Watson et al. | Jul 2009 | A1 |
20090186494 | Bell et al. | Jul 2009 | A1 |
20090193280 | Brooks et al. | Jul 2009 | A1 |
20090198766 | Chen et al. | Aug 2009 | A1 |
20090216863 | Gebhart et al. | Aug 2009 | A1 |
20090245248 | Arberg et al. | Oct 2009 | A1 |
20090219807 | Wang | Nov 2009 | A1 |
20090316573 | Lai | Dec 2009 | A1 |
20100017643 | Baba et al. | Jan 2010 | A1 |
20100039932 | Wen et al. | Feb 2010 | A1 |
20100042715 | Tham et al. | Feb 2010 | A1 |
20100058342 | Machida | Mar 2010 | A1 |
20100064293 | Kang et al. | Mar 2010 | A1 |
20100107162 | Edwards et al. | Apr 2010 | A1 |
20100138208 | Hattori et al. | Jun 2010 | A1 |
20100138830 | Astete et al. | Jun 2010 | A1 |
20100161787 | Jones | Jun 2010 | A1 |
20100169253 | Tan | Jul 2010 | A1 |
20100235662 | Nishtala | Sep 2010 | A1 |
20100257269 | Clark | Oct 2010 | A1 |
20100278091 | Sung et al. | Nov 2010 | A1 |
20100287548 | Zhou et al. | Nov 2010 | A1 |
20100325261 | Radhakrishnan et al. | Dec 2010 | A1 |
20100325381 | Heim | Dec 2010 | A1 |
20100325485 | Kamath et al. | Dec 2010 | A1 |
20110023028 | Nandagopal et al. | Jan 2011 | A1 |
20110029969 | Venkataraja et al. | Feb 2011 | A1 |
20110072327 | Schoppmeier et al. | Mar 2011 | A1 |
20110125894 | Anderson et al. | May 2011 | A1 |
20110125949 | Mudigonda et al. | May 2011 | A1 |
20110126196 | Cheung et al. | May 2011 | A1 |
20110154331 | Ciano et al. | Jun 2011 | A1 |
20110173334 | Shah | Jul 2011 | A1 |
20110228770 | Dholakia et al. | Sep 2011 | A1 |
20110228771 | Dholakia et al. | Sep 2011 | A1 |
20110228772 | Dholakia et al. | Sep 2011 | A1 |
20110228773 | Dholakia et al. | Sep 2011 | A1 |
20110231578 | Nagappan et al. | Sep 2011 | A1 |
20110238792 | Phillips et al. | Sep 2011 | A1 |
20120023309 | Abraham et al. | Jan 2012 | A1 |
20120023319 | Chin et al. | Jan 2012 | A1 |
20120030237 | Tanaka | Feb 2012 | A1 |
20120158995 | McNamee | Jun 2012 | A1 |
20120166764 | Henry et al. | Jun 2012 | A1 |
20120170585 | Mehra | Jul 2012 | A1 |
20120174097 | Levin | Jul 2012 | A1 |
20120230240 | Nebat et al. | Sep 2012 | A1 |
20120264437 | Mukherjee | Oct 2012 | A1 |
20120290869 | Heitz | Nov 2012 | A1 |
20120297236 | Ziskind et al. | Nov 2012 | A1 |
20130003708 | Ko et al. | Jan 2013 | A1 |
20130013905 | Held et al. | Jan 2013 | A1 |
20130067413 | Boss | Mar 2013 | A1 |
20130070766 | Pudiyapura | Mar 2013 | A1 |
20130191532 | Baum et al. | Jul 2013 | A1 |
20130211552 | Gomez et al. | Aug 2013 | A1 |
20130259039 | Dholakia et al. | Oct 2013 | A1 |
20130263117 | Konik et al. | Oct 2013 | A1 |
20130316694 | Yeh et al. | Nov 2013 | A1 |
20140007097 | Chin et al. | Jan 2014 | A1 |
20140029613 | Dholakia et al. | Jan 2014 | A1 |
20140036915 | Dholakia et al. | Feb 2014 | A1 |
20140068103 | Gyambavantha | Mar 2014 | A1 |
20140089484 | Chin et al. | Mar 2014 | A1 |
20140095927 | Abraham et al. | Apr 2014 | A1 |
20140143591 | Chiang et al. | May 2014 | A1 |
20140219095 | Lim et al. | Aug 2014 | A1 |
20150039932 | Kaufmann et al. | Feb 2015 | A1 |
20160092324 | Young et al. | Mar 2016 | A1 |
20160105390 | Bernstein et al. | Apr 2016 | A1 |
20160182241 | Chin et al. | Jun 2016 | A1 |
Number | Date | Country |
---|---|---|
101120317 | Feb 2008 | CN |
0887731 | Dec 1998 | EP |
0926859 | Jun 1999 | EP |
1107511 | Jun 2001 | EP |
1939742 | Feb 2008 | EP |
2084605 | Aug 2009 | EP |
WO 2008054997 | May 2008 | WO |
WO 2014004312 | Jan 2014 | WO |
Entry |
---|
Final Office Action for U.S. Appl. No. 13/796,136, dated Jun. 17, 2016, 18 pages. |
Non-Final Office Action for U.S. Appl. No. 14/514,253, dated Jul. 13, 2016, 28 pages. |
U.S. Appl. No. 14/923,327, filed Oct. 26, 2015, by Bill Ying Chin (Unpublished.). |
Final Office Action for U.S. Appl. No. 14/050,263 dated Oct. 7, 2015, 9 pages. |
Notice of Allowance for U.S. Appl. No. 12/626,432, dated Oct. 27, 2015, 29 pages. |
Notice of Allowance for U.S. Appl. No. 13/770,751, dated Nov. 16, 2015, 29 pages. |
Notice of Allowance for U.S. Appl. No. 12/626,432, dated Nov. 20, 2015, 4 pages. |
Non-Final Office Action for U.S. Appl. No. 13/796,136, dated Dec. 9, 2015, 16 pages. |
U.S. Appl. No. 14/514,253, filed Oct. 14, 2014, by Zhou et al. (Unpublished.). |
“GIGAswitch FDDI System—Manager's Guide,” Part No. EK-GGMGA-MG.B01, Jun. 1993 first printing, Apr. 1995 second printing, Copyright 1995, 113 pages, Digital Equipment Corporation, Maynard, MA. |
“GIGAswitch System—Manager's Guide,” Part No. EK-GGMGA-MG.A01, Jun. 1993, Copyright 1993, 237 pages, Digital Equipment Corporation, Maynard, MA. |
“Brocade ServerIron ADX 1000, 4000, and 8000 Series Frequently Asked Questions,” 10 pages, Copyright 2009, Brocade Communications Systems, Inc. |
Braden et al., “Integrated Services in the Internet Architecture: an Overview,” Jul. 1994, RFC 1633, Network Working Group, pp. 1-28. |
Burke, “Vmware Counters Oracle, Microsoft With Free Update”, Nov. 13, 2007, 2 pages. |
Chen, “New Paradigm in Application Delivery Networking: Advanced Core Operating System (ACOS) and Multi-CPU Architecture—The Key to Achieving Availability, Scalability and Performance,” White Paper, May 2009, 5 pages, A10 Networks. |
Cisco IP Routing Handbook, Copyright 2000, 24 pages, M&T Books. |
Cisco Systems, Inc., “BGP Support for Nonstop Routing (NSR) with Stateful Switchover (SSO),” Mar. 20, 2006, 18 pages. |
Cisco Systems, Inc., “Graceful Restart, Non Stop Routing and IGP routing protocol timer Manipulation,” Copyright 2008, 4 pages. |
Cisco Systems, Inc., “Intermediate System-to-Intermediate System (IS-IS) Support for Graceful Restart (GR) and Non-Stop Routing (NSR),” Copyright 2008, pp. 1-3. |
Cisco Systems, Inc., “Internet Protocol Multicast,” Internetworking Technologies Handbook, 3rd Edition, Published 2000, Chapter 43, 16 pages. |
Cisco Systems, Inc., “Multicast Quick-Start Configuration Guide,” Document ID:9356, Copyright 2008-2009, 15 pages. |
Cisco Systems, Inc., “Warm Reload,” Cisco IOS Releases 12.3(2)T, 12.2(18)S, and 12.2(27)SBC, Copyright 2003, 14 pages. |
Demers et al., “Analysis and Simulation of a Fair Queueing Algorithm,” Xerox PARC, Copyright 1989, 12 pages, ACM. |
European Search Report for Application No. EP 02254403, dated Mar. 18, 2003, 3 pages. |
European Search Report for Application No. EP 02256444, dated Feb. 23, 2005, 3 pages. |
Extreme v. Enterasys WI Legal Transcript of Stephen R. Haddock, May 7, 2008, vol. 2, 2 pages. |
Fenner, et al., “Protocol Independent Multicast-Sparse Mode (PIM-SM): Protocol Specification (Revised),” Network Working Group, RFC 4601, Aug. 2006, pp. 1-151. |
Floyd et al., “Link-sharing and Resource Management Models for Packet Networks,” IEEE/ACM Transactions on Networking, Aug. 1995, vol. 3, No. 4, Copyright 1995, IEEE, 22 pages. |
Freescale Semiconductor, Inc., “Freescale's Embedded Hypervisor for QorIQ™ P4 Series Communications Platform,” White Paper, Oct. 2008, Copyright 2008, 8 pages, Document No. EMHYPQIQTP4CPWP, Rev. 1. |
Freescale Semiconductor, Inc., “Embedded Multicore: An Introduction,” Jul. 2009, Copyright 2009, 73 pages, Document No. EMBMCRM, Rev. 0. |
Hardwick, “IP Multicast Explained,” Metaswitch Networks, Jun. 2004, 71 pages. |
Hemminger, “Delivering Advanced Application Acceleration & Security,” Application Delivery Challenge, Jul. 2007, 3 pages. |
Intel® Virtualization Technology, Product Brief, “Virtualization 2.0—Moving Beyond Consolidation”, 2008, 4 pages. |
IP Infusion Brochure, “ZebOS® Network Platform: Transporting You to Next Generation Networks,” ip infusion™ An ACCESS Company, Jun. 2008, 6 pages. |
Kaashoek et al., “An Efficient Reliable Broadcast Protocol,” Operating System Review, Oct. 4, 1989, 15 pages. |
Kakadia, et al., “Enterprise Network Design Patterns: High Availability” Sun Microsystems, Inc., Sun BluePrints™ Online, Revision A, Nov. 26, 2003, 37 pages, at URL: http://www.sun.com/blueprints. |
Kaplan, “Part 3 in the Reliability Series: NSR™ Non-Stop Routing Technology,” White Paper, Avici Systems, Copyright 2002, 8 pages. |
Keshav, “An Engineering Approach to Computer Networking: ATM Networks, the Internet, and the Telephone Network,” Addison-Wesley Professional Computing Series, part 1 of 5, May 15, 1997, Copyright 1997, 148 pages, by AT&T, Addison-Wesley Publishing Company. |
Keshav, “An Engineering Approach to Computer Networking: ATM Networks, the Internet, and the Telephone Network,” Addison-Wesley Professional Computing Series, part 2 of 5, May 15, 1997, Copyright 1997, 131 pages, by AT&T, Addison-Wesley Publishing Company. |
Keshav, “An Engineering Approach to Computer Networking: ATM Networks, the Internet, and the Telephone Network,” Addison-Wesley Professional Computing Series, part 3 of 5, May 15, 1997, Copyright 1997, 129 pages, by AT&T, Addison-Wesley Publishing Company. |
Keshav, “An Engineering Approach to Computer Networking: ATM Networks, the Internet, and the Telephone Network,” Addison-Wesley Professional Computing Series, part 4 of 5, May 15, 1997, Copyright 1997, 130 pages, by AT&T, Addison-Wesley Publishing Company. |
Keshav, “An Engineering Approach to Computer Networking: ATM Networks, the Internet, and the Telephone Network,” Addison-Wesley Professional Computing Series, part 5 of 5, May 15, 1997, Copyright 1997, 142 pages, by AT&T, Addison-Wesley Publishing Company. |
Khan, “IP Routing Use Cases,” Cisco Press, Sep. 22, 2009, pp. 1-16, at URL: http://www.ciscopress.com/articles/printerfriendly.asp?p=1395746. |
Lee, et al., “Open Shortest Path First (OSPF) Conformance and Performance Testing,” White Papers, Ixia—Leader in Convergence IP Testing, Copyright 1998-2004, pp. 1-17. |
Manolov, et al., “An Investigation into Multicasting, Proceedings of the 14th Annual Workshop on Architecture and System Design,” (ProRISC2003), Veldhoven, The Netherlands, Nov. 2003, 6 pages. |
May, et al., “An Experimental Implementation of Traffic Control for IP Networks,” 1993, 11 pages, Sophia-Antipolis Cedex, France. |
Moy, “OSPF Version 2,” Network Working Group, RFC 2328, Apr. 1998, 204 pages. |
Order Granting/Denying Request for Ex Parte Reexamination for U.S. Appl. No. 90/010,432, dated May 21, 2009, 18 pages. |
Order Granting/Denying Request for Ex Parte Reexamination for U.S. Appl. No. 90/010,433, dated May 22, 2009, 15 pages. |
Order Granting/Denying Request for Ex Parte Reexamination for U.S. Appl. No. 90/010,434, dated May 22, 2009, 20 pages. |
Pangal, “Core Based Virtualization—Secure, Elastic and Deterministic Computing is Here . . . ,” Blog Posting, May 26, 2009, 1 page, printed on Jul. 13, 2009, at URL: http://community.brocade.com/home/community/brocadeblogs/wingspan/blog/tags/serveri . . . . |
Partridge, “A Proposed Flow Specification,” RFC 1363, Sep. 1992, pp. 1-20, Network Working Group. |
Pepelnjak, et al., “Using Multicast Domains,” informIT, Jun. 27, 2003, pp. 1-29, at URL: http://www.informit.com/articles/printerfriendly.aspx?p=32100. |
Product Category Brochure, “J Series, M Series and MX Series Routers—Juniper Networks Enterprise Routers—New Levels of Performance, Availability, Advanced Routing Features, and Operations Agility for Today's High-Performance Businesses,” Juniper Networks, Nov. 2009, 11 pages. |
Quickspecs, “HP Online VM Migration (for HP Integrity Virtual Machines)”, Worldwide—Version 4, Sep. 27, 2010, 4 pages. |
Riggsbee, “From ADC to Web Security, Serving the Online Community,” Blog Posting, Jul. 8, 2009, 2 pages, printed on Dec. 22, 2009, at URL: http://community.brocade.com/home/community/brocadeblogs/wingspan/blog/2009/07/0 . . . . |
Riggsbee, “You've Been Warned, the Revolution Will Not Be Televised,” Blog Posting, Jul. 9, 2009, 2 pages, printed on Dec. 22, 2009, at URL: http://community.brocade.com/home/community/brocadeblogs/wingspan/blog/2009/07/0 . . . . |
Rodbell, “Protocol Independent Multicast-Sparse Mode,” CommsDesign, Dec. 19, 2009, pp. 1-5, at URL: http://www.commsdesign.com/main/9811/9811standards.htm. |
Schlansker, et al., “High-Performance Ethernet-Based Communications for Future Multi-Core Processors,” SC07 Nov. 10-16, 2007, Copyright 2007, 12 pages, ACM. |
TCP/IP Illustrated, vol. 2: The Implementation, Gary R. Wright and W. Richard Stevens, Addison-Wesley 1995, 23 pages. |
VMware, “Dynamic Balancing and Allocation of Resources for Virtual Machines”, Product Datasheet, Copyright © 1998-2006, 2 pages. |
VMware, “Live Migration for Virtual Machines Without Service Interruption”, Product Datasheet, Copyright © 2009 VMware, Inc., 4 pages. |
VMware, “Resource Management with VMware DRS”, VMware Infrastructure, Copyright © 1998-2006, 24 pages. |
VMware, “Automating High Availability (HA) Services With VMware HA”, VMware Infrastructure, Copyright © 1998-2006, 15 pages. |
Wolf, et al., “Design Issues for High-Performance Active Routers,” IEEE Journal on Selected Areas in Communications, IEEE, Inc. New York, USA, Mar. 2001, vol. 19, No. 3, Copyright 2001, IEEE, 6 pages. |
Notification of Transmittal of the International Search Report and the Written Opinion of the International Searching Authority, or the Declaration; International Search Report and Written Opinion of the International Searching Authority for International Application No. PCT/US2013/047105 dated Oct. 29, 2013, 8 pages. |
Non-Final Office Action for U.S. Appl. No. 09/953,714, dated Dec. 21, 2004, 16 pages. |
Final Office Action for U.S. Appl. No. 09/953,714, dated Jun. 28, 2005, 17 pages. |
Non-Final Office Action for U.S. Appl. No. 09/896,228, dated Jul. 29, 2005, 17 pages. |
Non-Final Office Action for U.S. Appl. No. 09/953,714, dated Jan. 26, 2006, 15 pages. |
Final Office Action for U.S. Appl. No. 09/953,714, dated Aug. 17, 2006, 17 pages. |
Non-Final Office Action for U.S. Appl. No. 09/896,228, dated Mar. 5, 2007, 14 pages. |
Final Office Action for U.S. Appl. No. 09/896,228, dated Aug. 21, 2007, 15 pages. |
Non-Final Office Action for U.S. Appl. No. 09/896,228, dated Sep. 7, 2006, 17 pages. |
Notice of Allowance for U.S. Appl. No. 09/896,228, dated Jun. 17, 2008, 20 pages. |
Non-Final Office Action for U.S. Appl. No. 12/210,957, dated Sep. 2, 2009, 16 pages. |
Notice of Allowance for U.S. Appl. No. 09/953,714, dated Sep. 14, 2009, 6 pages. |
Notice of Allowance for U.S. Appl. No. 12/210,957, dated Feb. 4, 2010, 10 pages. |
Non-Final Office Action for U.S. Appl. No. 12/333,029, dated May 27, 2010, 29 pages. |
Non-Final Office Action for U.S. Appl. No. 12/333,029, dated Mar. 30, 2012, 14 pages. |
Non-Final Office Action for U.S. Appl. No. 12/626,432 dated Jul. 12, 2012, 13 pages. |
Non-Final Office Action for U.S. Appl. No. 12/913,572 dated Aug. 3, 2012, 6 pages. |
Non-Final Office Action for U.S. Appl. No. 12/823,073 dated Aug. 6, 2012, 21 pages. |
Notice of Allowance for U.S. Appl. No. 12/333,029 dated Aug. 17, 2012, 5 pages. |
Non-Final Office Action for U.S. Appl. No. 12/913,598 dated Sep. 6, 2012, 10 pages. |
Non-Final Office Action for U.S. Appl. No. 12/913,612 dated Sep. 19, 2012, 11 pages. |
Non-Final Office Action for U.S. Appl. No. 12/913,650 dated Oct. 2, 2012, 9 pages. |
Notice of Allowance for U.S. Appl. No. 12/913,572 dated Nov. 21, 2012, 7 pages. |
Non-Final Office Action for U.S. Appl. No. 12/842,936 dated Nov. 28, 2012, 12 pages. |
Final Office Action for U.S. Appl. No. 12/823,073 dated Jan. 23, 2013, 23 pages. |
Notice of Allowance for U.S. Appl. No. 12/913,598 dated Mar. 12, 2013, 5 pages. |
Notice of Allowance for U.S. Appl. No. 12/913,650 dated Mar. 25, 2013, 6 pages. |
Notice of Allowance for U.S. Appl. No. 12/842,936 dated Apr. 8, 2013, 6 pages. |
Final Office Action for U.S. Appl. No. 12/626,432 dated Apr. 12, 2013, 14 pages. |
Non-Final Office Action for U.S. Appl. No. 12/842,945 dated Jun. 20, 2013, 14 pages. |
Notice of Allowance for U.S. Appl. No. 12/913,598 dated Jul. 9, 2013, 6 pages. |
Advisory Action for U.S. Appl. No. 12/626,432 dated Sep. 25, 2013, 4 pages. |
Non-Final Office Action for U.S. Appl. No. 12/626,432 dated Nov. 21, 2013, 9 pages. |
Notice of Allowance for U.S. Appl. No. 12/823,073 dated Feb. 19, 2014, 8 pages. |
Final Office Action for U.S. Appl. No. 12/842,945 dated Mar. 7, 2014, 13 pages. |
Final Office Action for U.S. Appl. No. 12/626,432 dated Jul. 3, 2014, 12 pages. |
Non-Final Office Action for U.S. Appl. No. 13/925,696 dated Aug. 27, 2014, 8 pages. |
Non-Final Office Action for U.S. Appl. No. 13/796,136 dated Sep. 8, 2014, 19 pages. |
Non-Final Office Action for U.S. Appl. No. 12/842,945 dated Sep. 17, 2014, 7 pages. |
Notice of Allowance for U.S. Appl. No. 13/925,696 dated Jan. 7, 2015, 6 pages. |
Non-Final Office Action for U.S. Appl. No. 12/626,432 dated Jan. 15, 2015, 13 pages. |
Non-Final Office Action for U.S. Appl. No. 13/827,641 dated Feb. 18, 2015, 7 pages. |
Non-Final Office Action for U.S. Appl. No. 13/770,751 dated Feb. 24, 2015, 10 pages. |
Notice of Allowance for U.S. Appl. No. 13/925,723 dated Mar. 17, 2015, 5 pages. |
Non-Final Office Action for U.S. Appl. No. 13/840,540 dated Mar. 23, 2015, 14 pages. |
Final Office Action for U.S. Appl. No. 13/796,136 dated Mar. 27, 2015, 17 pages. |
Non-Final Office Action for U.S. Appl. No. 13/621,138 dated Aug. 22, 2014, 6 pages. |
Notice of Allowance for U.S. Appl. No. 13/621,138 dated May 11, 2015, 5 pages. |
Non-Final Office Action for U.S. Appl. No. 14/050,263, dated Apr. 23, 2015, 5 pages. |
Notice of Allowance for U.S. Appl. No. 12/842,945, dated Apr. 8, 2015, 9 pages. |
Notice of Allowance for U.S. Appl. No. 13/621,138, dated Jul. 17, 2015, 5 pages. |
Notice of Allowance and Fees dated Dec. 2, 2016 for U.S. Appl. No. 14/514,253, 9 pages. |
Non-Final Office Action dated Jun. 29, 2017 for U.S. Appl. No. 14/923,327, 10 pages. |
First Office Action dated Jul. 3, 2017 for Chinese Application No. 201380039803.1, 13 pages. |
Examination Report dated Jul. 6, 2017 for European Application No. 13737020.1, 8 pages. |
Final Office Action dated Jul. 28, 2017 for U.S. Appl. No. 13/796,136, 22 pages. |
Final Office Action for U.S. Appl. No. 14/923,327, dated Nov. 2, 2017, 10 pages. |
Notice of Allowance for U.S. Appl. No. 14/923,327, dated Jan. 5, 2018, 8 pages. |
Non-Final Office Action for U.S. Appl. No. 13/796,136, dated Jan. 11, 2018, 21 pages. |
Number | Date | Country | |
---|---|---|---|
20140089425 A1 | Mar 2014 | US |
Number | Date | Country | |
---|---|---|---|
61704281 | Sep 2012 | US |