This invention relates to signaling congestion control in connection-oriented networks and more particularly to controlling network congestion due to a high number of network-attached devices simultaneously demanding access to a common network-attached resource.
Modern data terminal equipments (DTEs) attached to connection-oriented networks have a very high processing capacity and are consequently very demanding in terms of connections to the network. In order to establish a connection, a data terminal equipment must exchange a signaling protocol message, commonly called a “call setup message”, with the network. Network access switching nodes can support a large number of attached devices and may therefore process hundreds of call setup messages simultaneously. In large networks, nodes may even have to process thousands of such call setup messages within a very short time.
In some situations, a burst of simultaneous call setup messages may flow into a network and cause congestion. For example, when a common resource such as a file server goes down, all attached devices will attempt to reconnect simultaneously. This congestion may take place either at a network input access node that supports numerous devices requesting the common resource, or at the output access node where the common resource attaches to the network.
Such a situation arises in a LAN (local area network) environment when the LAN is based on a protocol such as ATM (asynchronous transfer mode). Most existing ATM LANs rely on the emulation of well-known higher layer LAN protocols such as Ethernet, Token-Ring or the Internet Protocol (IP), thereby creating a virtual LAN over the ATM layer. This LAN emulation is enabled by dedicated protocols, the most widespread being the so-called “Classical IP over ATM” protocol and the so-called “LAN Emulation over ATM” protocol. In each case, a protocol server is required to manage the virtual LAN over the ATM layer, and consequently any terminal device that wants to enter the virtual LAN must connect to this protocol server before proceeding with any other activity such as data transmission. Thus, signaling congestion may occur when too many data terminal equipments (DTEs) try to connect simultaneously to the protocol server.
Several approaches can be used to address this type of signaling congestion problem. One approach consists in “just doing nothing”, that is, letting the network recover from congestion by itself. When bursts of call setup messages are received, many of them are rejected. The rejected devices will retry to connect, and hopefully the connection requests will desynchronize over time so as not to create a congestion state again. Unfortunately, there is no guarantee that the connection requests will desynchronize, and the time interval required for the requests to desynchronize cannot be determined. Furthermore, this approach is not scalable: if more devices share the same resource, the congestion will worsen. Therefore, this approach is not satisfactory in the context of a high speed/performance network.
Another approach is to increase the processing power of the switching nodes. This would be acceptable if networks were static and not continuing to increase in size and utilization. However, networks grow faster than the processing power of the switching nodes, which makes this approach a short-term solution at best and therefore unsatisfactory.
Still another approach is to implement random timers in the terminal devices to manage the retry procedure. The random timers can desynchronize the source devices requesting connections and therefore naturally pace the call setup messages. This approach is similar to the so-called Ethernet backoff timer method, which is commonly implemented to solve access collision problems. However, it has the disadvantage of depending on changes to the devices that connect to the network, which makes implementation difficult given the multi-vendor, multi-product nature of most devices attaching to a network. Furthermore, no standard appears to exist that requires the terminal devices to implement such mechanisms. Finally, it is not desirable to rely on the behavior of unknown devices to protect a switching node and the network from call setup congestion.
Yet a further approach consists in limiting the number of call setup messages in the switching node in order to protect it against an overflow of such call setup messages. This solution is not very efficient since it induces a random discarding of the pending call setup requests. This can be prejudicial when, for example, a group of connections must be established in such a way that, if a single connection from the group fails, the whole group is torn down and must be reestablished; this is the case, for instance, for the control connections in LAN Emulation. Furthermore, this technique is not fair, as all users are penalized while only a few of them may have caused the congestion.
Therefore, there is a need for a solution to the above problems of the prior art that provides efficient protection to network devices from call setup overflow, while assuring the scalability of the networks. Such a solution is provided by the present invention as described hereinafter.
An object of the present invention is to provide a method for keeping control of concurrent connection setup requests in a switching node of a connection-oriented network, so as to provide efficient protection against signaling congestion.
Another object of the invention is to provide a switching node of a connection-oriented network with a system that protects it efficiently against call setup message overflow.
In accordance with the appended set of claims, these objects are achieved by providing a method and a system to prevent signaling congestion in a connection-oriented network node in situations where a plurality of network-attached data terminal equipments (source DTEs or source devices) concurrently request a connection to at least one network-attached data terminal equipment (destination DTE or destination device), each of the source DTEs sending call setup messages (CSMs) through the network node to the at least one destination DTE. The CSMs are processed by the network to establish the requested connections. The method comprises the steps of: predefining a threshold number (Max) as the maximum allowed number of CSMs from the source DTEs that may be actually being processed by the network at a given instant, and predefining a time frame (Window) within which no more than Max CSMs are accepted by the network for processing; detecting each new incoming CSM in the network node; rejecting each new incoming CSM if a number of CSMs equal to Max are already being processed by the network, or if fewer than Max CSMs are actually being processed by the network but Max CSMs have already been accepted for processing during the current Window; and accepting each new incoming CSM otherwise. The step of detecting each new incoming CSM is optionally followed by a further step of filtering each new incoming CSM to determine whether or not it satisfies at least one predefined filtering criterion, accepting the incoming CSM if it satisfies none of said at least one predefined criterion, and proceeding with the following steps of the method otherwise.
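By way of a non-limiting illustration only, the acceptance rule defined by these steps may be sketched as follows; the function and parameter names are hypothetical and chosen purely for clarity, not taken from the claims.

```python
def accept_csm(in_progress: int, accepted_this_window: int, max_csm: int) -> bool:
    """Decide whether a new incoming call setup message (CSM) may be accepted.

    in_progress          -- CSMs already accepted and still being processed
                            by the network (not yet acknowledged)
    accepted_this_window -- CSMs accepted since the start of the current Window
    max_csm              -- the predefined threshold Max
    """
    if in_progress >= max_csm:
        return False   # Max CSMs are already being processed by the network
    if accepted_this_window >= max_csm:
        return False   # Max CSMs have already been accepted during the current Window
    return True        # otherwise the CSM is accepted


# Example: Max = 10, 4 connections still pending, 10 already accepted in this Window
print(accept_csm(in_progress=4, accepted_this_window=10, max_csm=10))  # False
```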
An embodiment of the invention will now be described, by way of example only, with reference to the accompanying drawings, wherein:
The call pacing system of the present invention is embodied in an ATM high speed network and, more particularly, it is implemented at network node level within the Call Admission Control (CAC) module, that is, the module that determines whether a call can be accepted. Accordingly, the call pacing system of the present invention can be considered a CAC extension. Furthermore, the call pacing system may be implemented in the access nodes of the network where the data terminal equipments or devices attach, or in every node of the network (access or intermediate node). In the preferred embodiment, the call pacing system is implemented in the access nodes in the form of a software system.
The preferred embodiment of the present invention contains one or more software systems, software components or functions. In this context, a software system is a collection of one or more executable software programs and one or more storage areas (for example, RAM, ROM, cache, disk, flash memory, PCMCIA, CD-ROM, server memory, FTP-accessible memory, etc.). In general terms, a software system should be understood to comprise a fully functional software embodiment of a function or collection of functions, which can be added to an existing processing system to provide new functionality to that processing system. A software system is thus understood to be a software implementation of a function which can be carried out in a processor system to provide new functionality. It should be understood in the context of the present invention that the delineations between software systems are representative of the preferred implementation; however, the present invention may be implemented using any combination or separation of software or hardware systems. Software systems may be distributed on a computer usable medium such as floppy disks, diskettes, CD-ROM, PCMCIA cards, flash memory cards and/or any other computer or processor usable medium. Note that a software system may also be downloaded to a processor via a communications network or from an Internet node accessible via a communications adapter.
Referring to
Call pacing of the present invention makes use of a “thresholding” technique and a “windowing” technique, as explained hereafter. A counter CNT is dedicated to counting the call setup messages (in packet or cell format) which arrive in the switching node. A predefined number Max, called the “call setup threshold”, defines the maximum allowed number of concurrent connection requests (i.e., call setup messages) that are actually being processed by the network at a given time. Furthermore, a time frame called “Window” is defined, within which no more than Max call setup messages are allowed to be accepted by call pacing for processing, even if fewer than Max concurrent connections are actually being set up. Indeed, when a connection is set up after the corresponding call setup message has been processed by the network, the destination data terminal equipment (DTE) sends an acknowledgment message (in the form of packets or cells) through the switching node to the source DTE which requested the connection. These acknowledgment messages are used to decrement counter CNT.
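A minimal sketch of this thresholding and windowing bookkeeping is given below, assuming a simple software representation; the class and method names are illustrative, and a real switching node would typically drive Window expiry from its own timer services rather than from wall-clock reads.

```python
import time


class CallPacingCounters:
    """Illustrative bookkeeping for the thresholding and windowing techniques."""

    def __init__(self, max_csm: int, window_seconds: float):
        self.max_csm = max_csm                 # call setup threshold Max
        self.window_seconds = window_seconds   # duration of one Window
        self.cnt = 0                           # counter CNT: accepted CSMs not yet acknowledged
        self.window_count = 0                  # CSMs accepted since the current Window opened
        self.window_start = time.monotonic()

    def _maybe_reset_window(self) -> None:
        # Open a new Window once the current one has elapsed.
        if time.monotonic() - self.window_start >= self.window_seconds:
            self.window_start = time.monotonic()
            self.window_count = 0

    def on_call_setup(self) -> bool:
        """Return True if an incoming call setup message is accepted, False if rejected."""
        self._maybe_reset_window()
        if self.cnt >= self.max_csm or self.window_count >= self.max_csm:
            return False
        self.cnt += 1
        self.window_count += 1
        return True

    def on_acknowledgment(self) -> None:
        """Called when the destination DTE's acknowledgment message flows back."""
        if self.cnt > 0:
            self.cnt -= 1


# Example: Max = 100 concurrent call setup messages, Window = 2 seconds
pacer = CallPacingCounters(max_csm=100, window_seconds=2.0)
print(pacer.on_call_setup())  # True: the first CSM of the Window is accepted
```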
Additionally, the present invention may use a “call pacing filter” to select the call setup messages to be paced. The filtering performed depends on the particular implementation of the present invention and on the nature of the network. For example, call setup messages may be filtered according to the destination DTE address, as in one preferred embodiment, or according to characteristics of the requested connection, such as the type of traffic, e.g., CBR (constant bit rate) or VBR (variable bit rate), or the associated QoS (quality of service). If there is more than one call pacing point in the switching node, the type of filter may be different for each call pacing point. It should be noted that, while a filter has been implemented in the preferred embodiment of the present invention, the invention may be practiced without any filter at all.
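The following sketch illustrates one possible form of such a call pacing filter; the message fields (dest_address, traffic_class) and the server address string are assumptions standing in for the actual signaling information elements.

```python
from dataclasses import dataclass


@dataclass
class CallSetupMessage:
    dest_address: str    # e.g. the ATM address of the destination DTE
    traffic_class: str   # e.g. "CBR" or "VBR"


def matches_pacing_filter(csm: CallSetupMessage,
                          paced_addresses: set,
                          paced_classes: set) -> bool:
    """Return True if the CSM matches at least one filtering criterion and must
    therefore go through call pacing; CSMs matching no criterion bypass pacing
    and are accepted directly."""
    return (csm.dest_address in paced_addresses
            or csm.traffic_class in paced_classes)


# Example: pace only the calls directed towards the (assumed) protocol server address
csm = CallSetupMessage(dest_address="atm-addr-of-protocol-server", traffic_class="VBR")
print(matches_pacing_filter(csm,
                            paced_addresses={"atm-addr-of-protocol-server"},
                            paced_classes=set()))  # True
```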
When a call setup message (CSM) is accepted by the call pacing of the present invention, the usual procedures are applied to the CSM and, under normal conditions, the CSM is forwarded to the next node on the path towards the destination DTE. Conversely, when a CSM is rejected by the call pacing of the present invention, a message is sent to the source DTE to notify it that the connection request is refused, and the CSM is discarded.
As previously said, each time a connection between a source data terminal equipment (DTE) and a destination DTE is set up, the destination DTE sends to the source DTE an acknowledgment message (in the form of packets or cells). When the switching node in which the invention is implemented receives such an acknowledgment message, it determines whether the corresponding call setup message has been processed by the call pacing system. This is done by reading a memory table containing identifiers or indicators of connections that were flagged after their call setup message was accepted and processed through the call pacing system (see
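This acknowledgment handling might be sketched as follows, under the assumption that connections are identified by a hypothetical connection identifier and that the counter object exposes an on_acknowledgment method as in the earlier sketch.

```python
class PacedConnectionTable:
    """Illustrative memory table of connections whose call setup message was
    accepted and processed through call pacing (flagged connections)."""

    def __init__(self):
        self._flagged = set()   # connection identifiers of paced connections

    def flag(self, connection_id: str) -> None:
        # Record a connection whose CSM has just been accepted by call pacing.
        self._flagged.add(connection_id)

    def on_acknowledgment(self, connection_id: str, counters) -> None:
        """On receipt of the destination DTE's acknowledgment, decrement the CNT
        counter only if the corresponding CSM had been handled by call pacing;
        `counters` is any object exposing on_acknowledgment(), such as the
        CallPacingCounters sketch given earlier."""
        if connection_id in self._flagged:
            self._flagged.discard(connection_id)
            counters.on_acknowledgment()
```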
In
Following is a pseudocode illustrating the implementation of the call pacing system of the invention as illustrated in
Referring to
CIP Server Failure Example
In the situation where all the N DTEs are connected simultaneously to file server 43 and exchange data through N data connections, suppose that for some reason CIP server 42 is brought down for a given time period and then back up. All of the N DTEs try to reconnect to the CIP server by concurrently establishing N control connections 44 and N user connections 45. As a consequence, there may be 2*N (where “*” denotes multiplication) connection requests within a very short time frame. If N is large (e.g., on the order of hundreds to thousands), there may be signaling congestion at access node 46, which attaches CIP server 42 to the network 40. In order to prevent such congestion, the call pacing system of the invention may be implemented at access node 46 at the Global Call Pacing point (see
File Server Failure Example
If the file server 43 is brought back up after a breakdown has occurred, the file server is no longer registered with the CIP server and must first register again. During the time when the file server 43 is not registered, all DTE requests to the CIP server 42 for the ATM address of file server 43 are answered negatively. Once the file server 43 is registered again, all DTEs are able to obtain the ATM address of the file server 43 from the CIP server 42. Then, all DTEs 41 try to connect to file server 43 at the same time, which leads to a flow of N connection requests to file server 43 through access node 47 as shown in
In a typical ATM network, values for G may range from tens to hundreds, and W1 and W2 may be on the order of a few seconds, depending on the characteristics of the network components (nodes and links) and on the attaching DTEs.
If both the CIP server and the file server attach to the network at the same access node, then, to prevent signaling congestion in the above situations, two independent call pacing procedures according to the invention may be run within the same node: one call pacing filtering on the CIP server 42 address and another filtering on the file server 43 address.
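Assuming a simple configuration structure (the names, addresses and values below are illustrative only, chosen within the typical ranges mentioned above), two independent call pacing points filtering on different destination addresses might be described as follows.

```python
from dataclasses import dataclass


@dataclass
class CallPacingPoint:
    name: str
    paced_dest_address: str   # filter: only CSMs towards this address are paced
    max_csm: int              # threshold Max (G in the examples above)
    window_seconds: float     # Window duration (W1 or W2 in the examples above)


# Hypothetical addresses and values; in practice these would be the registered
# ATM addresses of the CIP server and of the file server.
pacing_points = [
    CallPacingPoint("cip-server-pacing", "atm-addr-of-cip-server",
                    max_csm=100, window_seconds=2.0),
    CallPacingPoint("file-server-pacing", "atm-addr-of-file-server",
                    max_csm=100, window_seconds=3.0),
]


def pacing_point_for(dest_address: str):
    """Return the call pacing procedure that applies to a CSM, if any."""
    for point in pacing_points:
        if point.paced_dest_address == dest_address:
            return point
    return None   # no filter matched: the CSM is not subject to call pacing


print(pacing_point_for("atm-addr-of-file-server").name)  # file-server-pacing
```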
The call pacing system of the invention may also be used to ensure that a maximum number of users can connect to a server machine without any risk of signaling congestion. In that case, given a time K required by the server to process an incoming connection request and a maximum number P of potential users that can request a connection to the server in parallel (P may depend on the server processing power), the parameters of the call pacing system implemented in the switching node that attaches the server could be calculated as follows: threshold Max would be set to the number P, and Window would be set to a time value W such that W is greater than P*K (where “*” denotes multiplication).
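As a small worked illustration of this rule, the sketch below computes Max and Window from hypothetical figures; the values of P, K and the safety margin are assumptions chosen for the example, not values from the text. With P = 200 users and K = 0.05 seconds per request, any Window greater than P*K = 10 seconds satisfies the rule.

```python
def call_pacing_parameters(p_users: int, k_seconds: float, margin: float = 1.5):
    """Compute Max and Window for the node attaching the server, per the rule above.

    p_users   -- maximum number P of users that may request a connection in parallel
    k_seconds -- time K the server needs to process one connection request
    margin    -- illustrative safety factor so that Window is strictly greater than P*K
    """
    max_csm = p_users
    window_seconds = p_users * k_seconds * margin
    return max_csm, window_seconds


print(call_pacing_parameters(p_users=200, k_seconds=0.05))  # (200, 15.0)
```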
The present invention may be used with any other protocol, such as, for example, the LE/FC (LAN Emulation Forum Compliant) protocol in an ATM network. It may also apply to any connection-oriented network where a plurality of data terminal equipments of any type demand access through the network to a common resource.
While the invention has been described in terms of a preferred embodiment, those skilled in the art will recognize that the invention can be practiced with variations and modifications. Therefore, it is intended that the appended claims shall be construed to include both the preferred embodiment and all variations and modifications thereof that fall within the scope of the invention.
Foreign application priority data: 98480059, Aug. 1998, EP (regional).
This United States Patent Application is a Continuation of U.S. patent application Ser. No. 09/351,712 filed on Jul. 12, 1999, now issued as U.S. Pat. No. 6,633,539.
Publication: US 20040047288 A1, Mar. 2004.
Related U.S. application data: parent application Ser. No. 09/351,712, filed Jul. 1999 (US); child application Ser. No. 10/639,825 (US).