This specification relates to implementations of the Stream Control Transmission Protocol.
Stream Control Transmission Protocol (SCTP) is a transport-layer protocol serving a role similar to that of the Transmission Control Protocol (TCP) and the User Datagram Protocol (UDP). SCTP provides some of the same service features as both: it is message-oriented like UDP, and it ensures reliable, in-sequence transport of messages with congestion control like TCP. It is possible to tunnel SCTP over UDP, as well as to map TCP API (application programming interface) calls to SCTP calls. SCTP is specified in RFC 4960: Stewart, R., Ed., “Stream Control Transmission Protocol”, RFC 4960, DOI 10.17487/RFC4960, September 2007 (http://www.rfc-editor.org/info/rfc4960).
SCTP is layered over the Internet Protocol (IP) and allows for multiple unidirectional data streams between connected endpoints. The individual streams can go in either direction, effectively providing bi-directional communication. The endpoints themselves may use multiple IP addresses in support of multiple data paths for the same logical SCTP connection. Data on any particular stream is delivered to the application layer in units referred to as messages, which are numbered by a stream sequence number. “Chunks” in SCTP packets carry the messages; the chunks are numbered sequentially using a transmission sequence number (TSN) that increases independently of which stream a chunk carries data for. An SCTP packet will generally carry multiple chunks of different kinds. The possible chunk types include DATA chunks, which carry payload data. Chunks are a protocol concept not seen by applications, which read messages from and write messages to the SCTP stack. As in TCP, acknowledgments are sent to indicate data chunk reception; these are called selective acknowledgments, or SACKs, and data chunks deemed to be lost are retransmitted. A few of the key parameters that capture the protocol state for data flow are the TSN, stream ID, stream sequence number, and various SACK fields.
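The relationship between the per-association TSN and the per-stream sequence numbers described above can be illustrated with a minimal sketch. This is not taken from any actual SCTP implementation; the class and field names are illustrative, and each message is assumed for simplicity to fit in a single DATA chunk.

```python
from dataclasses import dataclass
from itertools import count

@dataclass
class DataChunk:
    tsn: int          # transmission sequence number, global per association
    stream_id: int    # which unidirectional stream the payload belongs to
    ssn: int          # stream sequence number, a per-stream message counter
    payload: bytes

class Association:
    """Toy model of SCTP numbering: TSNs increase across all streams,
    while each stream keeps its own stream sequence number."""
    def __init__(self, initial_tsn=0):
        self._tsn = count(initial_tsn)
        self._ssn = {}  # stream_id -> next stream sequence number

    def send_message(self, stream_id, payload):
        ssn = self._ssn.get(stream_id, 0)
        self._ssn[stream_id] = ssn + 1
        # A real stack may fragment one message into several chunks;
        # here each message maps to a single DATA chunk for clarity.
        return DataChunk(next(self._tsn), stream_id, ssn, payload)
```

Sending on two different streams shows the TSN advancing for every chunk while each stream's sequence number advances only for that stream's messages.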
SCTP additionally defines control messages and state machines both to establish and to cleanly tear down connections.
This specification describes technologies for implementing a system that includes data processing nodes that communicate using SCTP and possibly other protocols. A node is a physical computing device, e.g., a computer, or a virtual computing device running on a physical computing device, with one or more processors that can execute computer program instructions and memory for storing such instructions and data.
One use case, which will be the basis of most of the description in this specification, is a resilient implementation in an LTE Home eNodeB Gateway (HeNB-GW or HGW). The underlying context for this use case is the network architecture of a Long Term Evolution (LTE) system. The LTE architecture and its components and operation are described, for example, in the ETSI TS 136 300 v12.6.0 Release 12 (2015-07) Technical Specification, ©European Telecommunications Standards Institute (ETSI) 2015 (“ETSI LTE”), the disclosure of which is incorporated herein by reference.
The resilient HGW is resilient in the sense that if an active HGW instance suddenly ceases operation, for whatever reason, a new HGW instance can replace it without requiring the reset or reconnection of key control connections that had been established between external entities and the original HGW. It is important to ensure that established connections are resilient because of the much greater cost incurred in resetting a connection on which data is already flowing, or an aggregation of connections coming through an SCTP channel, as opposed to restarting a failed connection attempt. For this reason, resiliency specifically for connected SCTP endpoints is important.
This resiliency is achieved by an implementation of an SCTP stack that includes checkpoints, which will be referred to as a resilient SCTP stack. A resilient SCTP stack checkpoints key protocol state between a master and a slave at specific points, as chunks and messages flow through the network stack. To avoid the overhead of maintaining the slave with state identical to that of the master at every instant, the resilient SCTP stack strategically checkpoints state such that, using the checkpointed state as a starting point, a replacement stack can be constructed at the slave which, although not identical to the master, can continue without interruption from any failover point in a protocol-compliant manner. While the exact exchange of packets from a particular failover time will likely differ from the packets the original master stack would have generated, the protocol is capable of naturally adapting to these differences. For example, a newly promoted slave endpoint may perform additional retransmissions, but these would be within the scope of the retransmissions the SCTP protocol is designed to produce when data chunks are lost.
The innovative aspects of the subject matter described in this specification can be embodied in methods, computer programs on non-transitory media, and computer systems of one or more computers in one or more locations that are programmed with instructions that, when executed by the one or more computers, cause them to perform operations described in this specification. Programs and systems may be described in this specification as being “configured” to perform certain actions or processes. For a system of one or more computers to be configured to perform particular operations or actions means that the system has installed on it software, firmware, hardware, or a combination of them that in operation cause the system to perform the operations or actions. For one or more computer programs to be configured to perform particular operations or actions means that the one or more programs include instructions that, when executed by data processing apparatus, cause the apparatus to perform the operations or actions.
The subject matter described in this specification can be implemented in particular embodiments so as to realize one or more of the following advantages. With a resilient SCTP stack as described in this specification, failover does not result in message loss, nor does failover result in duplicate message delivery to the application. With such an implementation of SCTP, checkpointing that results in data payload copying is minimized; for example, data moving within the stack from queue-to-queue is not checkpointed at every transition. In addition, the implementations of a resilient SCTP stack described in this specification are interoperable with existing SCTP implementations; the protocol specification is not violated, and it can be implemented so as not to deviate from timing assumptions made by industry standard implementations. Platforms interconnected with resilient implementations of SCTP control protocols can be grown across many generations of hardware with predictable scaling and near 100% availability. When critical network functionality is implemented on commodity servers, the resiliency designed into the SCTP protocol is insufficient. In contrast, the resilient implementations described in this specification provide resilient, non-disruptive failover of network functionality from one server or one rack to another. The SCTP protocol was designed for resiliency in use cases where failover is limited to a single appliance providing network functionality, and the failover is due to a single component failure such as a network adaptor. In contrast, the resilient implementations described in this specification apply to a data-center model of providing network functionality.
The details of one or more embodiments of the subject matter of this specification are set forth in the accompanying drawings and the description below. Other features, aspects, and advantages of the subject matter will become apparent from the description, the drawings, and the claims.
Like reference numbers and designations in the various drawings indicate like elements.
All the checkpointing on the transmission path is to a slave for the transmitting, resilient SCTP stack 106 of the application node, which is the master. The slave is a standby node which is configured with an implementation of the resilient SCTP stack and which may be further configured to receive and archive checkpoint data from the master. As an alternative to storing the checkpoint data in the slave stack, the checkpoint data may be stored on storage local to the slave node. The slave node will generally be on a different server and advantageously in a different rack in a datacenter than the master node. The different rack will advantageously provide the slave node with one or more of a power supply, a source of power, or a network connection that is different from that used by the master node. The checkpointing operations archive the checkpointed data in case the data needs to be retransmitted.
The actual message payload is first checkpointed by the application, or by a wrapper on the SCTP stack send operation, and then pushed 104 to the SCTP protocol engine. The data chunks composing the message are built 114 and sent 116, and following this the stream ID, stream sequence number, and TSN associated with the message chunks are checkpointed 118. When a SACK for a chunk is received from the peer, the application's SCTP stack 106 deletes its local copy of the chunk and checkpoints the deletion 122.
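The ordering of the transmit-path operations described above can be sketched as follows. This is an illustrative model only: `CheckpointLog`, `TinyEngine`, and the function names are invented stand-ins, and a real commit would block until the slave acknowledges the update.

```python
class CheckpointLog:
    """Stand-in for the master-to-slave checkpoint channel; a real
    commit would block until the slave acknowledges the update."""
    def __init__(self):
        self.log = []
    def commit(self, update):
        self.log.append(update)

class TinyEngine:
    """Minimal stand-in for the SCTP protocol engine."""
    def __init__(self):
        self.chunks = {}      # TSN -> payload, kept until SACKed
        self.next_tsn = 0
    def build_and_send(self, payload):
        tsn = self.next_tsn
        self.next_tsn += 1
        self.chunks[tsn] = payload
        return {"stream_id": 0, "ssn": tsn, "tsn": tsn}
    def delete_chunk(self, tsn):
        del self.chunks[tsn]

def send_message(ckpt, engine, msg_id, payload):
    # 1. Checkpoint the payload BEFORE pushing it to the protocol engine.
    ckpt.commit(("payload", msg_id, payload))
    # 2. Build and send the DATA chunks.
    meta = engine.build_and_send(payload)
    # 3. Checkpoint only the metadata (stream ID, SSN, TSN); the payload
    #    was already checkpointed and is not copied again.
    ckpt.commit(("sent", msg_id, meta))

def on_sack(ckpt, engine, tsn):
    # 4. On SACK, delete the local chunk copy and checkpoint the deletion.
    engine.delete_chunk(tsn)
    ckpt.commit(("deleted", tsn))
```

The essential design point is that the payload crosses the checkpoint channel exactly once, before sending, while later checkpoints carry only small metadata records.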
The application node's resilient SCTP stack 106 is receiving a message for the application. The message payload, stream ID, stream sequence number and TSN of each DATA chunk of the message are checkpointed 204 by the stack 106 to its slave after the DATA chunk is received 202 and before the stack 106 delivers 206 the entire message to the receiving application. After checkpointing 204 the receipt of a chunk 202, the stack 106 sends 208 a SACK to the peer indicating that the chunk has been received, since the slave now also has the received data. Finally, the stack 106 delivers 206 the message to the application 210 when all DATA chunks of the message have been received. The stack 106 checkpoints 212 the delivery of the message, deletes its local copies of the associated DATA chunks, and checkpoints 212 the deletion of the associated DATA chunks.
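The receive-path ordering can likewise be sketched. This is a simplified, hypothetical model: names are ours, and each message is assumed to fit in a single DATA chunk, whereas the real stack reassembles all chunks of a message before delivery.

```python
from dataclasses import dataclass

@dataclass
class RecvChunk:
    tsn: int
    stream_id: int
    ssn: int
    payload: bytes

def receive_chunk(ckpt_log, chunk, send_sack, deliver):
    # 1. Checkpoint the chunk (payload, stream ID, SSN, TSN) BEFORE
    #    SACKing: once SACKed, the peer will never resend it, so the
    #    slave must already hold the data.
    ckpt_log.append(("recv", chunk.tsn, chunk.stream_id,
                     chunk.ssn, chunk.payload))
    # 2. Only now acknowledge receipt to the peer.
    send_sack(chunk.tsn)
    # 3. Deliver the complete message to the application (single-chunk
    #    messages here; real SCTP reassembles fragments first).
    deliver(chunk.payload)
    # 4. Checkpoint delivery and the deletion of the local chunk copy.
    ckpt_log.append(("delivered", chunk.tsn))
```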
The resilient SCTP stack implementation is preferably done in user space, because performing the checkpointing operations in kernel space would be more difficult; in addition, working in user space provides greater freedom in coupling the SCTP stack to critical applications.
Illustrated is a single Mobility Management Entity (MME) 302 in the Evolved Packet Core (EPC) 300 of an LTE implementation. The EPC will have other elements, including, generally, multiple MMEs. An MME is responsible for keeping track of all user equipment, in particular, handsets. The breaking of a conventional SCTP connection to the MME would mean all of the services through the connection would have to reattach. The resilient failover provided by the resilient SCTP stack prevents this.
Outside of the EPC is a gateway cluster infrastructure 310, which may be implemented on datacenter equipment on which are deployed, among other things, multiple LTE Home eNodeB Gateways (HeNB-GWs or HGWs) 312. For each HGW that is designated as a master, another HGW is designated as its slave 314. Which is the master and which is the slave is determined by a distributed configuration service 316, which may be implemented using Apache ZooKeeper, a software project of the Apache Software Foundation. Apache ZooKeeper, ZooKeeper, and Apache are trademarks of The Apache Software Foundation.
The distributed configuration service 316 is used to assign a lock between two nodes that designates one of them as the master. The service also synchronizes actions between cooperating nodes. The service is preferably implemented using an ensemble of ZooKeeper servers, which appear to the HGWs as one service. When a currently-designated master HGW 312 fails, the slave HGW 314 learns from the service that it, the slave, has been promoted and is now the master. The newly promoted master or some other entity creates a new instance of HGW or designates an existing instance to be the new slave.
In some implementations, this election of a master and creation of a new instance are done as follows. A scheduler process is configured, e.g., by a configuration file, to have a predetermined number, e.g., three or five, of HGWs running at a time. When an HGW instance terminates, the scheduler process launches another instance. The HGW instances coordinate with each other using ZooKeeper, which provides a name space of data registers called znodes. The instances use the znodes to store their configuration information, including the configuration information specifying where message payloads should be checkpointed. This information is available to the application. The instances also use a ZooKeeper recipe for leader election, e.g., as described in http://zookeeper.apache.org/doc/current/recipes.html.
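The semantics of the ZooKeeper leader-election recipe cited above can be sketched in process: each candidate creates an ephemeral, sequential znode, and the candidate holding the lowest sequence number leads; when that candidate's session dies, leadership passes automatically. The class below is a toy in-memory simulation of those semantics, not a ZooKeeper client; a real deployment would use a client library against the ensemble.

```python
class ToyElection:
    """In-process simulation of the ZooKeeper leader-election recipe.
    Illustrative only; no networking or real znodes are involved."""
    def __init__(self):
        self._seq = 0
        self._nodes = {}   # znode sequence number -> candidate name

    def volunteer(self, name):
        # Each candidate creates an ephemeral, sequential znode.
        self._seq += 1
        self._nodes[self._seq] = name
        return self._seq

    def leader(self):
        # The candidate holding the lowest sequence number leads.
        return self._nodes[min(self._nodes)] if self._nodes else None

    def session_expired(self, seq):
        # An ephemeral znode vanishes when its owner's session dies;
        # leadership passes to the next lowest sequence number.
        self._nodes.pop(seq, None)
```

In the HGW setting, the master is simply the instance whose znode currently has the lowest sequence number; when the master fails, the slave observes the znode's disappearance and learns it has been promoted.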
The MME 302 communicates with the HGW 312; in particular, it sees only whichever one of the master-slave pair is currently the master. It communicates with the HGW 312 over an S1-MME control plane interface. The S1-MME interface stack includes an SCTP layer and the MME 302 communicates with the HGW 312 through a separate SCTP connection 318 to the resilient SCTP stack 316 in the HGW 312.
Similarly, each of multiple HeNBs 320a, 320b, . . . 320n communicates with the HGW 312 through its own separate connection to the resilient SCTP stack 316. Each HeNB is a Home evolved Node B, described in the ETSI LTE standard, cited earlier. HeNBs are small cells deployed outside the datacenter; they are part of an LTE radio access network (RAN) 350 and communicate directly with mobile handsets.
The MME 302 and the HeNBs 320a . . . 320n implement a conventional SCTP stack.
The infrastructure advantageously includes an IP forwarder (IPFW) 322 between the master and slave HGW, on the one hand, and the HeNBs attached to the master HGW 312, on the other hand. The IPFW 322 makes the connections to the HGW 312 or the HGW 314 look the same whether the connection is to the master or slave, by maintaining a consistent IP address. The IPFW 322 thus makes a failover from master 312 to slave 314 appear transparent to the HeNBs. Advantageously, an IPFW 324 also sits between the MME 302 and master/slave HGW 312/314 for the same purpose. The IPFWs learn of the failover from master to slave HGW from the distributed configuration service 316. With this architecture, on failover of a master HGW to a slave, the handover of HeNBs from former master to former slave HGW can be accomplished without involving the EPC.
In some implementations, the IPFW implements a “distributed IP” address (DIP). A virtual MAC address is used on an externally facing interface on the IPFW, and Address Resolution Protocol (ARP) requests to the DIP are responded to by the IPFW. Each IPFW maintains a database of backend servers, and in particular a record of which servers are acting as master, utilizing a distributed storage infrastructure designed for this purpose, e.g., the deployment of Apache ZooKeeper. Incoming packets first arrive at the IPFW and are forwarded by the IPFW to the machine with the resilient SCTP stack. For the return path, the one carrying responses from the machine with the SCTP stack, packets go directly to the originator, bypassing the IPFW, and have the DIP as the source address. This same process is also used when the backend is the originator.
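The asymmetric forwarding just described can be sketched with two small functions. All names and addresses are illustrative; a real IPFW would operate on raw frames and learn the master mapping from the distributed configuration service.

```python
def forward_inbound(masters, packet):
    # Inbound: the IPFW answers ARP for the DIP and forwards packets
    # addressed to the DIP to whichever backend is currently the master.
    # `masters` maps DIP -> current master backend address.
    return {**packet, "dst": masters[packet["dst"]]}

def build_response(dip, peer, payload):
    # Return path: the backend replies to the peer directly, bypassing
    # the IPFW, but uses the DIP as the source address, so the peer
    # always sees one stable endpoint across failovers.
    return {"src": dip, "dst": peer, "payload": payload}
```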
The SCTP master and slave, e.g., the HeNB-GW master 312 and slave 314, are a pair of such backend servers. Alternatively, the master and slave can simply perform the same virtual MAC operations themselves and do not necessarily require a forwarder in the path; the forwarder can, however, additionally provide other valuable services, for example, load-balancing.
High-level checkpointing facilities 420 provide for connectivity between the master 422 and slave 424. The master has operational checkpointed objects 408. The creation and destruction of checkpointed objects is recorded by the high-level checkpointing facilities as checkpointed state changes at the master. In addition, as checkpointed state is modified due to stack operation at the master, the checkpointing facilities of the checkpointed objects record the changes. At particular instances, the high-level checkpointing facilities 420 of the master explicitly commit updates containing these changes by sending the updates to the slave. To guarantee consistency, the master, or at least the thread performing the checkpoint update, pauses until the checkpoint update operation completes.
At the slave 424, as checkpoint updates are received, objects in use by the master come and go at the slave as they are created and deleted at the master: each object is created and held in a list at the slave until its later deletion. Throughout this process, the slave representation of each object contains only the checkpointed state. It is during the process of promoting a slave to master that the non-checkpointed state, i.e., the full state, is created. This promotion process will now be described.
In the second phase, the process causes the non-checkpointed state to be set to reasonable values, given the values of the checkpointed state, in a way that takes into account cross-references between checkpointed objects 510. For every one of the checkpointed objects whose custom recovery function was called in the first phase, a second custom recovery function is called. Unlike the generic implementation of the first recovery function, the second custom recovery function is specific to each object type, and it may assume that all checkpointed objects it references have had the first recovery function called. The second recovery function is coded to operate like an object constructor called with enough arguments to construct the various objects it manages; however, rather than obtaining input parameters and state through arguments, that state is obtained from the checkpointed data already constructed on the object and on the other checkpointed objects it references. For example, an object that manages the data-sending path may contain both checkpointed and non-checkpointed queues. At this stage, the non-checkpointed queues and the non-checkpointed data held within the object can be synthesized based on data in various cross-referenced checkpointed objects.
After the second phase, all the checkpointed objects that were operational at the master at the time of failover are present at the slave.
In the third phase, the application on the node being promoted calls additional functions that use the set of recovered, checkpointed objects to create the additional state required to enable the objects to work together as part of an application 520. These functions are called by the application as it prepares to become the master. These additional functions are part of the generic SCTP implementation, and the application using the stack calls these functions as part of the process of being promoted. This additional state in large part requires creating operating-system state. For example, any required threads are created at this point 522, and also any required network facilities, e.g., sockets used to connect network peers, are created 524.
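The second and third promotion phases can be sketched together. This is a toy model under invented names: `SendPath` stands in for an object that manages the data-sending path, with a checkpointed pending list and a non-checkpointed retransmission queue, and `promote` stands in for the generic promotion functions the application calls.

```python
class SendPath:
    """Toy checkpointed object: the list of unacknowledged TSNs is
    checkpointed; the retransmission queue is not, and must be
    synthesized at promotion time (phase two)."""
    def __init__(self, pending_tsns):
        self.pending_tsns = pending_tsns     # checkpointed state
        self.retransmit_queue = None         # non-checkpointed state

    def recover(self):
        # Conservatively schedule every unacknowledged chunk for
        # retransmission; SCTP tolerates the extra resends a newly
        # promoted endpoint may produce.
        self.retransmit_queue = list(self.pending_tsns)

def promote(objects, start_threads, open_sockets):
    # Phase 2: synthesize non-checkpointed state for every object.
    for obj in objects:
        obj.recover()
    # Phase 3: create operating-system state - threads and network
    # facilities such as sockets - so the recovered objects can work
    # together as a running stack.
    start_threads()
    open_sockets()
```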
At the end of the promotion process, the slave has a fully functional and running SCTP stack. While its state may not be completely identical to that of the previous master, it is capable of continuing the SCTP connections without any appearance of interruption.
To begin, the App Binding is the application entry point to the SCTP stack. The application may have more than one thread on which data send requests are made, which may be referred to as application threads, and an arrow emanating from this box represents each thread. Along each arrow, i.e., for each thread, a checkpointed StreamMsg object is created to capture the application data send request. This object contains the actual data to be sent, the association to send it on, and the SCTP stream number on which the data will be delivered. The association to send it on is also a checkpointed object; it is not shown in the diagram.
The StreamMsg is pushed onto a checkpointed FIFO queue that provides a bridge between the application thread or threads and the SCTP stack send thread. Before the “App Binding” function call that pushes the StreamMsg, i.e., that calls the FIFO's Push function, returns, a checkpoint commit sends the fact that the push operation occurred, as well as the actual data in the StreamMsg, to the slave. This occurs prior to further processing to ensure that if the master fails after the push function returns, the data is not lost, i.e., the slave can be promoted and take over sending the data. This is the only time that the actual message data is checkpointed to the slave.
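The commit-before-return semantics of the checkpointed FIFO can be sketched as follows. The class and the `slave_log` stand-in are illustrative; in the real stack the commit blocks until the slave acknowledges it.

```python
class CheckpointedFifo:
    """Sketch of the checkpointed FIFO bridging the application and
    send threads. `slave_log` stands in for the commit channel."""
    def __init__(self, slave_log):
        self._items = []
        self._slave_log = slave_log

    def push(self, item):
        self._items.append(item)
        # Commit the push - and the message data it carries - before
        # returning to the caller; a real implementation blocks here
        # until the slave acknowledges, so a crash after push() returns
        # cannot lose the message.
        self._slave_log.append(("push", item))

    def pop(self):
        if not self._items:
            return None
        item = self._items.pop(0)
        # The pop operation is checkpointed on the FIFO as well.
        self._slave_log.append(("pop", item))
        return item
```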
The processing within the box labeled “loop” represents the SCTP stack's single send thread. To begin the loop, all SACK chunks received from the SCTP peer are processed. The SACK chunks themselves arrive from a receiving thread.
The next processing to occur is that timers for the protocol are processed in the Process Timers block. Timer events are stored on a Timer Event Queue; neither the timer events nor the queue is checkpointed. Timer events include events such as data resends and heartbeat messages. The timers do not need to be checkpointed because they can be reset to reasonable values when a slave is promoted to master without causing data or connection loss.
Next, a StreamMsg is popped from the FIFO. The pop operation itself is checkpointed on the FIFO, and if there are no messages to pop, the loop returns to start over at “A” in the figure. After the StreamMsg has been popped, it is used by the Build Message block to build a checkpointed ChunkMsg. Ownership of the StreamMsg data is transferred to the ChunkMsg to avoid duplicate data checkpointing, and the ChunkMsg now contains SCTP parameters relating to sending the message as chunks, such as the TSN. The ChunkMsg is placed on the Pending ChunkMsg Queue, where it will be held until it is acknowledged by the SCTP peer, at which time it may be deleted. SCTP message fragmentation on the send path is realized by having the StreamMsg result in a sequence of ChunkMsgs, if need be.
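The fragmentation step described above, in which one StreamMsg yields a sequence of ChunkMsgs with consecutive TSNs, can be sketched as a simple helper. This is an illustrative simplification: real fragmentation depends on path MTU discovery and carries B/E (beginning/end) flags rather than a single `last` flag.

```python
def build_chunks(payload, mtu, next_tsn):
    # Split one StreamMsg payload into ChunkMsg-like records with
    # consecutive TSNs; ownership of the data moves to the chunks, so
    # the payload is never checkpointed a second time.
    chunks = []
    for off in range(0, len(payload), mtu):
        chunks.append({"tsn": next_tsn,
                       "data": payload[off:off + mtu],
                       "last": off + mtu >= len(payload)})
        next_tsn += 1
    return chunks, next_tsn
```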
Finally, the Send operation prepares a non-checkpointed SCTP packet with the chunk for sending on the Network Transport, which sends it to the SCTP peer. At the end of the Send operation, once the network transport has been initiated, all checkpointed state that has changed during this pass through the loop is committed to the slave.
The receiving thread loop begins by waiting in the Network Transport for the arrival of an SCTP packet that contains DATA chunks. Once DATA chunks are available for processing, a checkpointed ChunkMsg is created by the Data Chunk Parser to hold the chunks. This is the only point at which the actual data checkpointing occurs. The resulting ChunkMsg is pushed to the checkpointed Pending ChunkMsg Queue.
Processing continues in a Build Message process, which analyzes the Pending ChunkMsg Queue to determine whether any chunks are ready to be delivered to the application, i.e., whether the SCTP message with the next stream sequence number can be formed. This queue allows for handling out-of-order reception and fragmentation. All chunks forming an SCTP message are popped, and ownership of their data is transferred to the output StreamMsg, which will be used to deliver the SCTP message to the application.
After Build Message pushes the StreamMsg to the FIFO, which bridges the receive and application threads, the receiving thread spawns a checkpoint commit operation. The thread then waits for this checkpoint to complete before releasing the StreamMsg to the application and generating the SACK. The release of the StreamMsg signals to the application thread that data is available to pop. In some implementations, the pop call of the application thread will block, assuming nothing on the FIFO has been released already, until the receiving thread makes a release call on the FIFO. It is important to wait for the commit to complete because otherwise: (i) the thread could end up delivering the same SCTP message multiple times if failover occurs at an inopportune time; and (ii) the thread could SACK the chunk, which implies it will never be resent, and if failover then occurred without the data having been checkpointed, the chunk would be lost forever.
After generating and sending the SACK chunk, the receiving thread again awaits the next SCTP packet containing data chunks to arrive.
Alternatively, the receive side can be implemented with multiple receiving threads that each push messages to the FIFO. In such implementations, the FIFO operates the same way as has been described for the send side, where multiple application threads push messages to the FIFO to be sent.
In both
A FIFO object, illustrated in
The checkpointed FIFO in the above description has thread B's writer, because it was instantiated with thread B's writer. Every checkpointed object is assigned a writer when the object is instantiated. In addition, thread B's pop operation on the FIFO can be initiated before object O is released by the release operation, in which case thread B will wait for a predetermined amount of time on the release operation being executed by thread A. If this amount of time elapses before a release operation is executed, the pop operation returns without having retrieved any objects placed in the FIFO by thread A. Upon a slave being promoted to master, there is an implicit release call for all objects held by the FIFO at the slave.
The importance of the checkpointed FIFO object for passing objects between threads can be seen in the following example sequence of events in the case of only using a single writer between the application and data sending threads, without using the checkpointed FIFO semantics. Object O is pushed to a regular, non-checkpointed FIFO by thread A, and a commit operation is performed for O using thread A. Thread B performs a pop, and at some point after the pop, thread B initiates a commit. The scheduling of threads A and B happens to result in thread B actually committing to the slave before thread A does, and the master happens to crash before the commit of A ever reaches the slave.
In that scenario, on the slave being promoted, due to the missing checkpoint of the commit of thread A, the application side of the FIFO has no record of ever sending the message, so will send the message again. However, the message send has been recorded by thread B; so on the slave being promoted, the message is in the queue and will be resent. The end result is that the message will be sent in duplicate, i.e., the same SCTP message data will be sent in multiple SCTP DATA chunks, each with a different TSN, which is a violation of the SCTP protocol stack API.
Similarly for the data receive path, a message could be delivered in duplicate to the application. In addition to these issues, it is also possible to have sent different messages using the same DATA chunk TSN, which would effectively cause message loss on the send path.
The design and the use of threading depicted in
Optionally, even more threads could be used in the implementation, especially for the data receive path; however, this would lead to a much more complicated design that would be very difficult to thoroughly validate and test. Further, the design described above for data send and receive is the most straightforward when the checkpointed FIFOs are used in a manner where their release call is not made until confirmation from the slave that the checkpoint has completed. This has a relatively small impact on performance for the common case when the network used for writing checkpoints, which is usually intra-cluster, provides much higher performance than that of the network connecting the SCTP peers. Using a single thread for receive, and having the application side of the FIFO also use a single thread for the data send path, enables further optimization that allows the pushing thread not to wait for a commit acknowledgement from the slave before calling the FIFO release function.
Embodiments of the subject matter and the functional operations described in this specification can be implemented in digital electronic circuitry, in tangibly-embodied computer software or firmware, in computer hardware, including the structures disclosed in this specification and their structural equivalents, or in combinations of one or more of them. Embodiments of the subject matter described in this specification can be implemented as one or more computer programs, i.e., one or more modules of computer program instructions encoded on a tangible non-transitory storage medium for execution by, or to control the operation of, data processing apparatus. The computer storage medium can be a machine-readable storage device, a machine-readable storage substrate, a random or serial access memory device, or a combination of one or more of them. Alternatively or in addition, the program instructions can be encoded on an artificially-generated propagated signal, e.g., a machine-generated electrical, optical, or electromagnetic signal that is generated to encode information for transmission to suitable receiver apparatus for execution by a data processing apparatus.
The term “data processing apparatus” refers to data processing hardware and encompasses all kinds of apparatus, devices, and machines for processing data, including by way of example a programmable processor, a computer, or multiple processors or computers. The apparatus can also be, or further include, special purpose logic circuitry, e.g., an FPGA (field programmable gate array) or an ASIC (application-specific integrated circuit). The apparatus can optionally include, in addition to hardware, code that creates an execution environment for computer programs, e.g., code that constitutes processor firmware, a protocol stack, a database management system, an operating system, or a combination of one or more of them.
A computer program, which may also be referred to or described as a program, software, a software application, an app, a module, a software module, a script, or code, can be written in any form of programming language, including compiled or interpreted languages, or declarative or procedural languages; and it can be deployed in any form, including as a stand-alone program or as a module, component, subroutine, or other unit suitable for use in a computing environment. A program may, but need not, correspond to a file in a file system. A program can be stored in a portion of a file that holds other programs or data, e.g., one or more scripts stored in a markup language document, in a single file dedicated to the program in question, or in multiple coordinated files, e.g., files that store one or more modules, sub-programs, or portions of code. A computer program can be deployed to be executed on one computer or on multiple computers that are located at one site or distributed across multiple sites and interconnected by a data communication network.
The processes and logic flows described in this specification can be performed by one or more programmable computers executing one or more computer programs to perform functions by operating on input data and generating output. The processes and logic flows can also be performed by special purpose logic circuitry, e.g., an FPGA or an ASIC, or by a combination of special purpose logic circuitry and one or more programmed computers.
Computers suitable for the execution of a computer program can be based on general or special purpose microprocessors or both, or any other kind of central processing unit. Generally, a central processing unit will receive instructions and data from a read-only memory or a random access memory or both. The essential elements of a computer are a central processing unit for performing or executing instructions and one or more memory devices for storing instructions and data. The central processing unit and the memory can be supplemented by, or incorporated in, special purpose logic circuitry. Generally, a computer will also include, or be operatively coupled to receive data from or transfer data to, or both, one or more mass storage devices for storing data, e.g., magnetic, magneto-optical disks, or optical disks. However, a computer need not have such devices. Moreover, a computer can be embedded in another device, e.g., a mobile telephone, a personal digital assistant (PDA), a mobile audio or video player, a game console, a Global Positioning System (GPS) receiver, or a portable storage device, e.g., a universal serial bus (USB) flash drive, to name just a few.
Computer-readable media suitable for storing computer program instructions and data include all forms of non-volatile memory, media and memory devices, including by way of example semiconductor memory devices, e.g., EPROM, EEPROM, and flash memory devices; magnetic disks, e.g., internal hard disks or removable disks; magneto-optical disks; and CD-ROM and DVD-ROM disks.
To provide for interaction with a user, embodiments of the subject matter described in this specification can be implemented on a computer having a display device, e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor, for displaying information to the user and a keyboard and a pointing device, e.g., a mouse or a trackball, by which the user can provide input to the computer. Other kinds of devices can be used to provide for interaction with a user as well; for example, feedback provided to the user can be any form of sensory feedback, e.g., visual feedback, auditory feedback, or tactile feedback; and input from the user can be received in any form, including acoustic, speech, or tactile input. In addition, a computer can interact with a user by sending documents to and receiving documents from a device that is used by the user; for example, by sending web pages to a web browser on a user's device in response to requests received from the web browser.
While this specification contains many specific implementation details, these should not be construed as limitations on the scope of any invention or on the scope of what may be claimed, but rather as descriptions of features that may be specific to particular embodiments of particular inventions. Certain features that are described in this specification in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable subcombination. Moreover, although features may be described above as acting in certain combinations and even initially be claimed as such, one or more features from a claimed combination can in some cases be excised from the combination, and the claimed combination may be directed to a subcombination or variation of a subcombination.
Similarly, while operations are depicted in the drawings in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order, or that all illustrated operations be performed, to achieve desirable results. In certain circumstances, multitasking and parallel processing may be advantageous. Moreover, the separation of various system modules and components in the embodiments described above should not be understood as requiring such separation in all embodiments, and it should be understood that the described program components and systems can generally be integrated together in a single software product or packaged into multiple software products.
Particular embodiments of the subject matter have been described. Other embodiments are within the scope of the following claims. For example, the actions recited in the claims can be performed in a different order and still achieve desirable results. As one example, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some cases, multitasking and parallel processing may be advantageous. Although described in the context of an LTE implementation, the resilient SCTP technology is much more widely applicable and would be a key component of any data-center application requiring a resilient SCTP network stack.
This application claims the benefit under 35 U.S.C. §119(e) of the filing date of U.S. Patent Application No. 62/296,519, for Resilient Implementation Of Stream Control Transmission Protocol, which was filed on Feb. 17, 2016, and which is incorporated herein by reference.
Number | Date | Country
---|---|---
62296519 | Feb 2016 | US