This invention relates to resource allocation within a computer or other network. More particularly, the present invention relates to the allocation of bandwidth in a Digital Communications Network.
When a communication (e.g., a voice communication) is to be transported within network 100, the signal first travels from the associated CPE equipment (e.g., a private branch exchange or PBX) to node 102 over communication link (e.g., digital T1 carrier) 108. Before being transmitted over such a T1 carrier, the signal is sampled and converted to a digital signal. A common sampling rate used with voice communications is 8000 samples per second, with each digital sample represented by 8 bits of data. Thus, the data rate of the new digital signal is: 8000 samples/sec×8 bits=64,000 bits/sec. This technique is known as Pulse Code Modulation (hereinafter “PCM”) and is used extensively throughout the backbone of modern telephone systems.
Although no single international standard has been adopted, the T1 carrier is one method of PCM transport used throughout North America and Japan. The T1 carrier comprises 24 channels of digital data multiplexed together. Digitally sampled data from each of the 24 channels are packaged into successive frames of 8 bits/channel×24 channels+1 framing bit=193 bits. Outside of North America and Japan a similar standard, known as E1, is commonly implemented. E1 operates in a manner similar to T1 except that it uses 32 8-bit data samples (i.e., 32 channels) instead of 24.
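The arithmetic above can be checked with a short sketch. The figures are the standard values quoted in the text (8000 samples/sec voice sampling, 8 bits per sample, 24 or 32 channels per frame); no other assumptions are made.

```python
# Illustrative arithmetic only: PCM and T1/E1 carrier rates as
# described in the text.

SAMPLE_RATE = 8000        # samples per second (voice PCM)
BITS_PER_SAMPLE = 8       # bits per digital sample

# Per-channel PCM data rate
pcm_rate = SAMPLE_RATE * BITS_PER_SAMPLE          # 64,000 bits/sec

# T1: 24 channels of 8 bits, plus one framing bit, per frame
t1_frame_bits = 24 * BITS_PER_SAMPLE + 1          # 193 bits
# One frame is sent per sampling interval, so the carrier rate is
t1_rate = t1_frame_bits * SAMPLE_RATE             # 1,544,000 bits/sec

# E1: 32 channels of 8 bits per frame (framing and signaling are
# carried in dedicated channels rather than a separate framing bit)
e1_frame_bits = 32 * BITS_PER_SAMPLE              # 256 bits
e1_rate = e1_frame_bits * SAMPLE_RATE             # 2,048,000 bits/sec

print(pcm_rate, t1_frame_bits, t1_rate, e1_rate)
```

The resulting 1.544 Mbps and 2.048 Mbps figures are the familiar line rates of the T1 and E1 carriers, respectively.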
After the signal has been sampled, converted to a digital signal and transmitted over a T1 carrier, as described above, it is then transferred to an outgoing communication link 106. Because the transport protocol across communication link 106 is different from that used on communication link 108, the digital data samples are packaged according to the protocol used across communication link 106 (e.g., ATM) before being transmitted to node 104. Additionally, although PCM by itself provides for a data transfer rate of 64,000 bits/second, it is often desirable to further compress the digital PCM data in order to save bandwidth within the network. This can be accomplished using Digital Signal Processing (hereinafter “DSP”) resources associated with network node 102. For example, if the 64,000 bits/second PCM signal is compressed by a DSP resource at a compression ratio of 16:1, the resulting digital signal will be transmitted at 4,000 bits/second. This represents a significant reduction in the bandwidth required across network 100 to transmit the same underlying signal. Such compression techniques are particularly useful in networks that are heavily loaded with network traffic. Examples of compression algorithms known in the art include the International Telecommunication Union (hereinafter “ITU”) standards G.711, G.726, G.729-A, G.729, and G.728. Such compression resources may be associated with the ATM interfaces 108 and may operate under the control of a node controller 116 in each of the nodes.
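The bandwidth saving from N:1 compression of a PCM channel, as in the 16:1 example above, is a simple division; a minimal sketch:

```python
# Bit rate of a PCM stream after N:1 compression. The 64,000 bits/sec
# input rate and the 16:1 ratio are the figures from the text.

def compressed_rate(pcm_rate_bps: int, ratio: int) -> int:
    """Return the bit rate of a PCM channel after N:1 compression."""
    return pcm_rate_bps // ratio

print(compressed_rate(64_000, 16))   # 4,000 bits/sec, as in the text
print(compressed_rate(64_000, 8))    # 8,000 bits/sec
```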
However, there is a tradeoff between bandwidth savings over network 100 and the cost of implementing DSP resources at the nodes. In general, the higher the compression ratio required by the compression algorithm, the more DSP resources are consumed in processing the compression request over a given period of time. Thus, while a single DSP resource may process up to, say, 16 channels of data if no compression is used (i.e., in baseline PCM mode), it may be limited to 5 channels if data is compressed at 2:1, and only 2 channels if the PCM signal is compressed at 8:1. One factor behind this limitation is the limited time available to the DSP resource to compress the data within a T1 frame before it must move on to the next frame of data. Thus, the chosen compression ratio will have a significant impact on DSP resource usage.
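The tradeoff can be made concrete with a small capacity calculation. The channels-per-DSP figures are the hypothetical examples from the paragraph above, not a property of any particular DSP device:

```python
# Hypothetical figures from the text: channels one DSP resource can
# serve at each compression ratio (1 means uncompressed PCM).

CHANNELS_PER_DSP = {1: 16, 2: 5, 8: 2}   # compression ratio -> channels

def dsps_needed(num_channels: int, ratio: int) -> int:
    """DSP resources required to serve num_channels at a given ratio."""
    capacity = CHANNELS_PER_DSP[ratio]
    return -(-num_channels // capacity)   # ceiling division

print(dsps_needed(24, 1))   # 2 DSPs for a full T1 with no compression
print(dsps_needed(24, 8))   # 12 DSPs if every channel is compressed 8:1
```

The same 24-channel T1 thus needs six times the DSP capacity when every channel is compressed at 8:1, which is the cost side of the bandwidth tradeoff.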
Following compression (if used), the data samples are delivered through network 100 to node 104, where the data may be decompressed and passed on to other CPEs or another node. The system is bi-directional to ensure 2-way communication between the nodes.
One problem with the communication scheme adopted in network 100 occurs when communication link 106 becomes congested, that is, when there is no available bandwidth to support new incoming calls from a CPE coupled to node 102. Consider, for example, a situation where multiple calls being transported between nodes 102 and 104 are using all or almost all of the available bandwidth on communication link 106. If a high priority call (e.g., a 911 or other emergency call) is now received at network node 102, either of two scenarios is possible. First, the high priority call may be rejected (dropped) in the face of no available bandwidth. Second, rather than dropping the high priority call (clearly the least acceptable solution), the nodes may be configured to drop lower priority calls in order to free up bandwidth to accommodate the high priority call. Although this solution may allow the high priority call to proceed, it is less than satisfactory inasmuch as several existing calls may be dropped to support the one new call. What is needed, therefore, is a more robust mechanism for handling such situations.
In one embodiment, a network node is configured to negotiate for connections for high priority calls (e.g., voice calls) received at the node in the face of otherwise congested outbound communication links. The negotiation is conducted in a fashion that will preserve connections for existing calls associated with the node. For example, the negotiation may be conducted so as to cause one or more of the existing calls to consume less bandwidth over the outbound communication links than was consumed at a time prior to reception of the high priority calls. Such negotiations may be initiated depending on the availability of codec resources and/or compression schemes at the node.
Another embodiment provides a method of managing a communication link between nodes of a communication network so as to ensure connection availability for one or more high priority calls over the communication link through dynamic renegotiations of call parameters for existing calls (e.g., voice calls) transported over the communication link. The communication link may preferably support communications according to the Asynchronous Transfer Mode and the dynamic renegotiations may be negotiations of compression schemes for the calls. Such dynamic renegotiations may be supported according to codec availability (e.g., as determined according to profile information) at the nodes and may be accomplished through the exchange of OAM cells between the nodes. In such a scheme, the high priority calls may be determined as such according to database information regarding called numbers.
Still another embodiment provides a network having a number of nodes connected through one or more communication links and a resource manager configured to allocate bandwidth over the communication links to high priority calls received at one or more of the nodes without dropping existing calls within the network. The resource manager allocates bandwidth through dynamic renegotiations of existing bandwidth utilization within the network and may be a distributed resource among the nodes of the network. Preferably, the nodes each support multiple codec resources, which compress voice information transmitted over the communication link. The dynamic renegotiations are supported through the exchange of OAM cells between the nodes.
The present invention is illustrated by way of example and not limitation in the figures of the accompanying drawings, in which:
Described herein is a method for accommodating high priority calls on a congested communication link of a wide area or other communication network. In the following discussion, examples of specific embodiments of the present invention are set forth in order to provide the reader with a thorough understanding of the present invention. However, many of the details described below can be accomplished using equivalent methods or apparatus to those presented herein. Accordingly, the broader spirit and scope of the present invention is set forth in the claims that follow this detailed description and it is that broader spirit and scope (and not merely the specific examples set forth below) that should be recognized as defining the boundaries of the present invention.
The basic mechanism adopted in the present invention is easily understood with reference to
For example, in the network 100 illustrated in
To more fully appreciate the processes involved in the present scheme, it is helpful to understand how calls are handled in network 100 in accordance with the present invention. When a call is received from a CPE at node 102, it is mapped to an associated network address. For example, associated with controller 116 may be a database configured to provide appropriate mappings between dialed telephone numbers and network (e.g., ATM or Internet protocol (IP) addresses). An example of such a database 300 is shown in
For the case of ATM network 100, this network address is associated with a virtual circuit (e.g., a permanent virtual circuit or PVC) between network nodes 102 and 104, supported on communication link 106. Thus, when the incoming call is parsed at node 102, the dialed number may be extracted to determine the node for which the call is destined, according to the network address. In some cases, the dialed number and/or associated network address may be flagged as a high priority call. For example, the dialed number 911 may be flagged as a call of the highest possible priority (e.g., where more than two priority levels are available). In this way, incoming calls can be recognized as high priority or not.
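The mapping database described above can be sketched as a simple lookup table. The table contents, field names, and address format below are purely illustrative assumptions; the text specifies only that dialed numbers map to network addresses and that certain numbers carry a priority flag:

```python
# Hypothetical mapping of dialed numbers to network addresses and
# priority levels (2 = highest). Entries are illustrative only.

CALL_DB = {
    # dialed number: (network address, priority)
    "911":      ("atm://node104/pvc17", 2),
    "555-0100": ("atm://node104/pvc17", 0),
}

def route_call(dialed: str):
    """Return (network_address, is_high_priority) for an incoming call."""
    address, priority = CALL_DB[dialed]
    return address, priority > 0

print(route_call("911"))   # ('atm://node104/pvc17', True)
```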
As mentioned above, for voice calls a variety of compression schemes are available, depending upon the codec (coder-decoder) resources available at the end points of the communication link over which the call is transported. Each call (e.g., each inbound T1 channel at node 102) may negotiate for a particular compression scheme to be employed at the time a connection is established, or a default compression scheme may be used where no negotiation takes place. In accordance with the present invention, each voice port supported by the ATM interfaces 108 has a profile defining the available codec resources for that port. Incoming calls are mapped to these voice ports for communications across communication link 106 and in one embodiment, ATM interfaces 108 may each have up to 24 voice ports.
The profiles of the voice ports may be established by a network manager at the time PVCs are set up within network 100. Alternatively, or in addition, profiles may be exchanged between ports as part of a call set up process in the case of switched virtual circuits (SVCs). One mechanism for the exchange of such profiles is the ATM Adaptation Layer type 2 (AAL2) protocol. Recently, the ATM Forum has promulgated standards document af-vota-0113.000, entitled “ATM Trunk Networking Using AAL2 for Narrowband Services” (February 1999). In that document, which is incorporated herein by reference as if set forth in its entirety, a scheme for selecting and managing encoding algorithms at nodes of an ATM network according to prearranged agreements is described. This scheme calls for the exchange of profile information in a manner suitable for use in accordance with the present invention.
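Profile exchange can be sketched as each voice port advertising its codec list and the two ends selecting a common entry. The codec names are the ITU standards mentioned earlier; the selection policy (first common entry, in local preference order) is an assumption, not something the text or the AAL2 document prescribes here:

```python
# Hypothetical voice-port profiles: codecs each end supports, in
# order of local preference.

LOCAL_PROFILE  = ["G.729", "G.726", "G.711"]
REMOTE_PROFILE = ["G.728", "G.726", "G.711"]

def agree_codec(local, remote):
    """Pick the first locally preferred codec the remote also supports."""
    remote_set = set(remote)
    for codec in local:
        if codec in remote_set:
            return codec
    return None   # no common codec; fall back to a default scheme

print(agree_codec(LOCAL_PROFILE, REMOTE_PROFILE))   # G.726
```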
By exchanging profiles, each node at an end of a communication link is aware of the codec (e.g., DSP) resources available at each corresponding voice port. Thus, the nodes can reach an agreement on which codec (i.e., which compression scheme) to use for a particular call. As noted above, compression schemes that provide high compression ratios consume less bandwidth than those that provide low compression ratios. So, assuming that sufficient codec resources are available, when an incoming high priority call is recognized, if there is insufficient bandwidth available on communication link 106 to accommodate the new call, node 102 can instruct node 104 to adopt new compression schemes on one or more voice channels (i.e., calls) so as to free up bandwidth on communication link 106 to support the new call. This process can be repeated, as required, until no available codec resources remain.
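The renegotiation step just described can be sketched as follows. The data structures and the policy of moving calls to a single higher compression ratio are assumptions for illustration; the text specifies only the overall behavior of freeing bandwidth from existing calls until the new call fits:

```python
# Sketch: select existing calls to renegotiate to a higher compression
# ratio so that at least needed_bps of link bandwidth is freed.

def free_bandwidth(calls, needed_bps, better_ratio=8, pcm_rate=64_000):
    """Return call ids to renegotiate, or None if not enough can be freed.

    calls: list of (call_id, current_ratio) for existing calls.
    Calls already at better_ratio or higher cannot yield bandwidth.
    """
    freed, to_renegotiate = 0, []
    for call_id, ratio in calls:
        if freed >= needed_bps:
            break
        if ratio < better_ratio:                     # codec headroom left
            freed += pcm_rate // ratio - pcm_rate // better_ratio
            to_renegotiate.append(call_id)
    return to_renegotiate if freed >= needed_bps else None

# Three uncompressed calls; a new 8:1 call needs 8,000 bits/sec.
# Moving one existing call from 1:1 to 8:1 frees 56,000 bits/sec.
print(free_bandwidth([("a", 1), ("b", 1), ("c", 1)], 8_000))   # ['a']
```

Repeating this selection with progressively higher ratios (subject to codec availability) mirrors the "repeat as required, until no available codec resources remain" behavior described above.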
The renegotiation process described above may be implemented using OAM (operations, administration and maintenance) cells that are exchanged between nodes 102 and 104. OAM cells are often exchanged between nodes of an ATM network and are used to convey a variety of information. In accordance with the present invention, the OAM cells are configured with payloads (e.g., cell type, function type and/or function specific fields) that contain instructions for moving to different compression schemes, and acknowledgements thereto. The format and use of OAM cells are well known in the art and need not be further described herein. What is unique is the use of such cells (which may be transmitted from a high priority queue within a node so as to ensure rapid call handling) to negotiate compression schemes for a channel on the fly.
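One way to picture the OAM-carried instruction is as a tiny packed payload. The cell-type and function-type values and the field layout below are purely hypothetical; the text says only that OAM cell payloads (cell type, function type and/or function specific fields) carry the change-compression-scheme instructions and their acknowledgements:

```python
import struct

# Hypothetical codepoints for a renegotiation instruction carried in
# an OAM cell payload. Real OAM cell-type/function-type assignments
# are standardized elsewhere and are NOT reproduced here.
CELL_TYPE_RENEG = 0x8
FUNC_REQUEST, FUNC_ACK = 0x1, 0x2

def encode_reneg(channel: int, new_ratio: int, ack: bool = False) -> bytes:
    """Pack a renegotiation request/ack: header byte, channel, ratio."""
    func = FUNC_ACK if ack else FUNC_REQUEST
    return struct.pack("!BBB", (CELL_TYPE_RENEG << 4) | func,
                       channel, new_ratio)

def decode_reneg(payload: bytes):
    """Unpack to (cell_type, function_type, channel, ratio)."""
    hdr, channel, ratio = struct.unpack("!BBB", payload)
    return hdr >> 4, hdr & 0xF, channel, ratio

print(decode_reneg(encode_reneg(5, 8)))   # (8, 1, 5, 8)
```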
Thus, a scheme for bandwidth renegotiation for accommodating high priority calls has been described. Although discussed with respect to specific embodiments, the broader applicability of the present invention should not be limited thereby. For example, although discussed with respect to the negotiation of compression schemes on the fly, other call parameters or connection parameters could be so negotiated on the fly. Thus, this broader applicability of the present invention is recited in the claims that follow.