1. Technical Field
The present invention relates to bandwidth reservation in high speed packet networks and more particularly to a method and system for sharing reserved bandwidth between several virtual logical connections issuing from a same port attaching external devices.
2. Background Art
High Speed Packet Switching Networks
Data transmission is now evolving with a specific focus on applications and by integrating a fundamental shift in the customer traffic profile. Driven by the growth of workstations, local area network interconnection, distributed processing between workstations and supercomputers, new applications, and the integration of various and often conflicting structures (hierarchical versus peer-to-peer, wide versus local area networks, voice versus data), the data profile has become more bandwidth consuming, more bursty, and less deterministic, and requires more connectivity. Based on the above, there is a strong requirement for supporting distributed computing applications across high speed networks that can carry local area network communications, voice, video, and traffic among channel-attached hosts, business and engineering workstations, terminals, and small to intermediate file servers. This vision of a high speed multi-protocol network is the driver for the emergence of fast packet switching network architectures in which data, voice, and video information are digitally encoded, chopped into small packets, and transmitted through a common set of nodes and links. Efficient transport of mixed traffic streams on very high speed lines imposes on these new network architectures a set of requirements in terms of performance and resource consumption, which can be summarized in the paragraphs below, as follows:
(a) a very high throughput and a very short packet processing time,
(b) a very large flexibility to support a wide range of connectivity options,
(c) an efficient flow and congestion control; and
(d) dependent connections.
(a) Throughput and Processing Time:
One of the key requirements of high speed packet switching networks is to reduce the end-to-end delay in order to satisfy real-time delivery constraints and to achieve the necessary high nodal throughput for the transport of voice and video. Increases in link speeds have not been matched by proportionate increases in the processing speeds of communication nodes, and the fundamental challenge for high speed networks is to minimize the processing time and to take full advantage of the high speed/low error rate technologies. To that end, most of the transport and control functions provided by the new high bandwidth network architectures are performed on an end-to-end basis. The flow control, and particularly the path selection and bandwidth management processes, are managed by the access points of the network, which reduces both the awareness and the function of the intermediate nodes.
(b) Connectivity:
In high speed networks, the nodes must provide total connectivity. This includes attachment of the user's devices, regardless of vendor or protocol, and the ability for any end user to communicate with any other device. The network must support any type of traffic including data, voice, video, fax, graphics, or images. Nodes must be able to take advantage of all common carrier facilities and to be adaptable to a plurality of protocols. All needed conversions must be automatic and transparent to the end user.
(c) Congestion and Flow Control:
Communication networks have at their disposal limited resources to ensure efficient packet transmission. Efficient bandwidth management is essential to take full advantage of a high speed network. While transmission costs per byte continue to drop year after year, transmission costs are likely to continue to represent the major expense of operating future telecommunication networks as the demand for bandwidth increases. Thus considerable effort has been spent on designing flow and congestion control processes, bandwidth reservation mechanisms, and routing algorithms to manage the network bandwidth. An ideal network should be able to transmit useful traffic in direct proportion to the traffic offered to the network, up to the maximum transmission capacity. Beyond this limit, the network should operate at its maximum capacity whatever the demand.
(d) Dependent Connections:
Private Network (PN) and Value-Added Network (VAN) service providers usually build their networks upon carrier transmission facilities. The expense for carrier transmission facilities represents an important part (about 30% to 60%) of the PN's or VAN's total operating expense. As such, their profits are directly related to their ability to minimize monthly transmission expenses while continually meeting their customers' end-to-end communications needs. Nodal equipment that can utilize the transmission facilities more efficiently than traditional carrier systems is typically selected by PNs and VANs.
Today, the replacement of traditional Time Division Multiplex (TDM) equipment by high speed packet switching equipment has significantly reduced the amount of transmission facilities needed in a Private Network (PN) or in a Value-Added Network (VAN). But much like TDM networks, the packet switching approach falls short of reducing the amount of transmission facilities (transmission trunks) required in the backbone network. The reason for this is that most packet switching network architectures assume that all incoming traffic streams are “independent” with respect to each other. That is, any customer device attached to the network can provide incoming traffic to the network at any instant of time. This is not true for logical virtual connections, as in the case of Frame Relay (FR), Local Area Network (LAN), or Asynchronous Transfer Mode (ATM) traffic. In fact, the traffic sources of all logical virtual connections on a given physical port of a FR, LAN or ATM attached device must be considered as “dependent”. That is, given that one logical virtual connection is bursting a traffic stream, no other logical virtual connection can be bursting at the same time.
For example, a network architecture such as NBBS (refer to IBM's publication entitled “Networking Broadband Services (NBBS)—Architecture Tutorial” IBM ITSC, June 1995 GG24-4486-00) reserves for virtual Frame Relay (FR) connections more bandwidth than necessary on the backbone network. This occurs because NBBS considers each Data Link Connection Identifier (DLCI) (refer to Frame Relay core aspects ANSI T1.618-1991 and ITU-T Q.922 Annex A) as an independent traffic generator, and reserves bandwidth on the backbone network accordingly.
Consider, for example, Frame Relay connections each defined with the following parameters:
R (Access Rate)=2 Mbps,
CIR (Committed Information Rate)=300 kbps,
Bc (Committed Burst Size)=4 kbytes.
The bandwidth reserved on a trunk with a multiplexing buffer of 64 kbytes, in order to guarantee a packet loss probability of ε=5×10⁻⁸, is approximately 700 kbps for each connection.
The bandwidth reserved by the NBBS architecture on a trunk to support a connection is defined according to the following equation:
where:
This example shows that the total bandwidth that would be reserved for 3 such connections on Trunk 1 is about 2100 kbps (3 connections at 700 kbps). This value is higher than the access rate (R=2 Mbps) of the physical port (FR/ATM port A) supporting these connections. This situation is clearly not acceptable. In the simple case where a physical port is fully loaded with 7 connections (7×300 kbps≈2 Mbps) and where all the connections issuing from said port are transmitted over a single trunk (Trunk 1), the bandwidth requirement is about 4.9 Mbps (7×700 kbps), while it is clear that the 2 Mbps stream produced by said port can be transmitted over a 2 Mbps trunk. Again, this occurs because NBBS considers each connection as an independent traffic generator and supposes that all connections can be bursty at the same time. Considering this worst case, NBBS reserves 4.9 Mbps to be sure to be able to transmit 2 Mbps.
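These figures can be approximated with the classical equivalent-capacity formula published in the high speed networking literature, on which NBBS-style reservation is based. The sketch below is illustrative only: the burst duration is taken as Bc/R, the buffer and loss target are those of the example above, and the kilobyte convention (1 kbyte=1000 bytes for Bc) is an assumption rather than a value quoted from NBBS.

```python
from math import log, sqrt

def equivalent_capacity(R, m, b, X, eps):
    """Equivalent capacity of one (R, m, b) source over a buffer of X bits.

    R: peak rate (bps), m: mean rate (bps), b: mean burst duration (s),
    X: buffer size (bits), eps: target packet loss probability.
    Classical single-source formula from the literature; values are approximate.
    """
    rho = m / R                                   # source utilization
    y = log(1.0 / eps) * b * (1.0 - rho) * R
    return R * (y - X + sqrt((y - X) ** 2 + 4.0 * X * rho * y)) / (2.0 * y)

R = 2_000_000          # port access rate: 2 Mbps
m = 300_000            # CIR: 300 kbps
Bc = 4_000 * 8         # committed burst: 4 kbytes in bits (1 kbyte = 1000 bytes assumed)
X = 64 * 1024 * 8      # 64 kbyte multiplexing buffer, in bits
eps = 5e-8             # target packet loss probability

c_hat = equivalent_capacity(R, m, Bc / R, X, eps)
print(f"per connection : ~{c_hat / 1e3:.0f} kbps")        # about 700 kbps
print(f"3 connections  : ~{3 * c_hat / 1e3:.0f} kbps")    # about 2100 kbps, above the 2 Mbps port rate
print(f"7 connections  : ~{7 * c_hat / 1e3:.0f} kbps")    # about 4.9 Mbps for a 2 Mbps stream
```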
An object of the present invention is to exploit the property of dependent virtual logical connections for saving bandwidth.
More particularly, the present invention is directed to a system and method in a packet switching communication network comprising a plurality of nodes interconnected with transmission trunks, of sharing reserved bandwidth between several connections issuing from a same physical port in an access node. Said system and method are characterized in that, on each trunk, an aggregate bandwidth is reserved for all connections issuing from a same physical port, said aggregate bandwidth being less than the sum of the bandwidth reserved for each connection considered individually.
In a network where the bandwidth reserved for each individual connection is equal to the equivalent capacity of the connection, the aggregate bandwidth reserved for all dependent connections is a function of:
High Speed Communications:
As illustrated in
High Performance Packet Switching Networks:
The general view in
Switching Nodes:
Each network node (201 to 208) includes a Routing Point, described hereinafter, where the incoming data packets are selectively routed on the outgoing Trunks towards the neighboring Transit Nodes. Such routing decisions are made according to the information contained in the header of the data packets. In addition to the basic packet routing function, the network nodes provide ancillary services such as:
According to the present invention, these ancillary services include:
(1) the storage within the node of alternate paths, and
(2) the updating of these paths.
Each Port is connected to a plurality of user processing equipment, each user equipment comprising either a source of digital data to be transmitted to another user system, or a data sink for consuming digital data received from another user system, or, typically, both. The interpretation of the users' protocols, the translation of the users' data into packets formatted appropriately for their transmission on the packet network 200, and the generation of a header to route these packets are executed by an Access Agent running in the Port. This header is made of Control, Routing and Redundancy Check Fields.
Using information in the packet header, the adapters 304 and 301 determine which packets are to be routed by means of the Switch 302 towards a local user network 307 or towards a transmission link 303 leaving the node. The adapters 301 and 304 include queuing circuits for queuing packets prior to or subsequent to their launch on the Switch 302.
The Route Controller 305 calculates the optimum paths through the network 200 so as to satisfy a given set of quality-of-service requirements specified by the user and to minimize the amount of network resources used to complete the communication path. It then builds the header of the packets generated in the Routing Point. The optimization criteria include the characteristics of the connection request, the capabilities and utilization of the links (Trunks) in the path, and the number of intermediate nodes. The optimum route is stored in a Routing Database 308 for further reuse.
All the information necessary for routing, about the nodes and transmission links connected to the nodes, is contained in a Network Topology Database 306. Under steady state conditions, every Routing Point has the same view of the network. The network topology information is updated when new links are activated, when new nodes are added to the network, when links or nodes are dropped, or when link loads change significantly. Such information is exchanged by means of control messages with all other Route Controllers to provide the up-to-date topological information needed for path selection (such database updates are carried in packets very similar to the data packets exchanged between end users of the network). The fact that the network topology is kept current in every node through continuous updates allows dynamic network reconfigurations without disrupting end users' logical connections (sessions).
The incoming transmission links to the packet Routing Point may comprise links from external devices in the local user networks 210 or links (Trunks) from adjacent network nodes 209. In any case, the Routing Point operates in the same manner: it receives each data packet and forwards it to another Routing Point as dictated by the information in the packet header. The fast packet switching network operates to enable communication between any two end user applications without dedicating any transmission or node facilities to that communication path except for the duration of a single packet. In this way, the utilization of the communication facilities of the packet network is optimized to carry significantly more traffic than would be possible with dedicated transmission links for each communication path.
Network Management:
Network Control Functions
The Network Control Functions are those that control, allocate, and manage the resources of the physical network. Each Routing Point has a set of the foregoing functions in the Route Controller 305 and uses them to facilitate the establishment and maintenance of the connections between users' applications. The Network Control Functions include in particular:
Directory Services
Bandwidth Management
Path Selection
Control Spanning Tree
Topology Update
Congestion Control
The Topology Database contains information about nodes, links, their properties, and the bandwidth allocation. The topology information is replicated in each node of the network. An algorithm guarantees the correctness of each node's Topology Database when links and nodes are added or deleted or when their characteristics change. The database comprises:
The general organization of the Topology Database is shown in
503 the Link Physical Properties:
504 the Link State:
505 the Link Utilization:
Total Capacity (bps) Ck
The Topology Database contains, for each link, its Total Capacity. The value Ck represents the total bandwidth available on the link k between two nodes.
Reservable Fraction (%) rf
As might be expected, one of the critical characteristics of transmission links is the fraction of the link capacity effectively available. Links cannot be loaded up to a theoretical maximum load (bandwidth) for two reasons:
The reservable fraction rf of a link is the effective percentage of the Total Capacity Ck that can be reserved on the link k to maintain a reasonable quality of transmission. If Ck is the Total Capacity of the link k, then Rk=rf×Ck is the Reservable Capacity of this link (Ĉk≦Rk≦Ck).
Total Reserved Equivalent Capacity (bps) ĈR,k
For a connection i on a link k, the simplest way to provide low/no packet loss would be to reserve the entire bandwidth requested by the user. However, for bursty user traffic, this approach can waste a significant amount of bandwidth across the network. To save resources, the bandwidth amount actually reserved is equal to an “Equivalent Capacity” Ĉk,i, Equivalent Capacity being a function of the source characteristics and of the network status. The bandwidth reservation falls somewhere between the average bandwidth required by the user and the maximum capacity of the connection.
The value ĈR,k=Σĉk,i of the reserved Equivalent Capacities, the sum being taken over the N connections already established on link k, represents the total bandwidth reserved on the link k. If the difference between the Total Reservable Capacity of the link rf×Ck and this already reserved link Equivalent Capacity ĈR,k is less than the bandwidth requested by a new reserved connection, then the link cannot be selected. However, the link may be selected for a non-reserved connection where no explicit bandwidth reservation is needed.
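For illustration, the link eligibility test described above can be sketched as follows; names are hypothetical and the check is reduced to the reservable-bandwidth comparison.

```python
def link_eligible(c_hat_new, C_k, rf, C_reserved_k):
    """True if link k can accept a new reserved connection of equivalent capacity c_hat_new (bps).

    C_k: total link capacity (bps), rf: reservable fraction (0..1),
    C_reserved_k: equivalent capacity already reserved on the link (bps).
    """
    reservable = rf * C_k                       # R_k = rf x C_k
    return c_hat_new <= reservable - C_reserved_k

# Example with assumed values: a 2.048 Mbps trunk, 85% reservable, 700 kbps already reserved.
print(link_eligible(700e3, 2.048e6, 0.85, 700e3))   # True: 700 kbps still fits
```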
Total Bandwidth Used by Non-Reserved Traffic (bps) MNR,k
The value MNR,k represents the total load or bandwidth currently used by non-reserved traffic as measured on the link k.
Total Capacity Used (bps) ĈT,k
The Total Bandwidth Used ĈT,k on the link k is computed by adding the total reserved bandwidth ĈR,k and the measured bandwidth MNR,k used by non-reserved traffic.
Maximum Packet Size (Bytes) mpsk
mpsk is defined as the maximum packet size supported by the link k.
Bandwidth Management:
Users require different qualities of service. In order to provide the various service levels, different types of network connections are established. A connection is defined as a path in the network between the origin access node and the destination access node, representing respectively the source user and the target user. Network connections can be classified as reserved or non-reserved. Reserved network connections require bandwidth to be allocated in advance along the chosen path.
Most of the high speed connections are established on a reserved path to guarantee the quality of service and the bandwidth requested by the user. This path across the network is computed by the origin node using information in its Topology Database including current link utilization. The origin node then sends a reservation request along the chosen path, and intermediate nodes (if allowing the reservation) then add this additionally reserved capacity to their total. These changes are reflected in topology broadcast updates sent by the intermediate nodes. Intermediate nodes need not have an awareness of the status of each connection on their adjacent links. If an intermediate node does get too many packets, generally because of unanticipated burstiness, it simply discards them (the user can select a service that will recover from such discards).
Depending on the node type, the function of the Bandwidth Management is:
in the origin node,
in a transit node,
The connection set up and bandwidth reservation process, as shown in
The bandwidth reservation process is performed in the origin and destination nodes by Connection Agents (CA) and by Transit Connection Managers (TCMs) in the transit nodes along the chosen path.
Path Selection:
The purpose of the Path Selection process is to determine the best way to allocate network resources to connections both to guarantee that user quality of service requirements are satisfied and also to optimize the overall throughput of the network. The Path Selection process must supply to the requesting user a path over the network over which a point-to-point connection will be established, and some bandwidth will be reserved if needed. As shown in
The Path Selection process takes place entirely within the node wherein the connection is requested. It makes use of the Topology Database and selects the “best path” based on each of the following criteria in order of importance:
Quality-of-Service:
The connection's quality-of-service requirements are to be satisfied throughout the life of the connection. There are a large number of variables that determine the performance of a network. However, the quality-of-service can be defined as the set of measurable quantities that describe the user's perception of the service offered by the network. Some of the quality-of-service parameters are listed below:
Some of these quantities have an effect upon how paths are computed, for example the packet loss probability or the end-to-end transit delay: the sum of propagation delays along a computed path may not violate the end-to-end transit delay specifications.
Minimum Hop:
The path shall consist of as few links as feasible to support the connection's quality-of-service requirements, thus minimizing the amount of network resources as well as the processing costs to support the connection. The path computation is based on the link utilizations at the time the connection is requested.
Load Balancing:
Among minimum hop paths, a path with “lightly loaded” links is preferred over a path with “more heavily loaded” links, based on the network conditions at the time of path selection. The load of a link depends on the customer's criteria: it can be an increasing function of the total reserved bandwidth of the link, proportional to the amount of traffic actually measured on the link, etc. When the path load (sum of the loads of the links over the selected path) is the preponderant criterion of selection, the path of lesser load is chosen.
Satisfying the first requirement is the key factor in path selection and the other two functions are used to optimize traffic through the network.
Bandwidth Management According to Prior Art:
The bandwidth management of the NBBS (Networking BroadBand Services) architecture is described as an example of prior art. For simplicity, a single class of service is considered. However, it should be clear that the extension to multi-priorities is straightforward and is covered by NBBS (for more details about NBBS, refer to IBM publication entitled “Networking Broadband Services (NBBS) Architecture Tutorial”—IBM ITSC June 1995 GG24-4486-00).
Connection Metric:
Metrics are used to represent network connections with different characteristics. They are obtained from a model that captures the basic behavior of the data source associated with a connection. The behavior of any source can be modeled with a two state model: a source is either idle, generating no data, or active, transmitting data at its peak rate. The bit rate of a connection can therefore be represented by two states, namely: an idle state (transmitting at zero bit rate) and a burst state (transmitting at peak rate). A burst is defined to be a sequence of packets transmitted by the source into the network at its peak rate. The traffic of a network connection i is thus characterized by three parameters: its peak rate Ri, its mean rate mi, and its mean burst duration bi.
These three parameters are used to specify the bandwidth requirements for the network connection so that the appropriate path can be selected and sufficient bandwidth reserved. Additionally, these parameters are used by the Congestion Control function to monitor conformance of the network connection to its bandwidth reservation.
The variance of the bit rate is σi2=mi(Ri−mi).
The quantities mi and σi2 provide indications of the mean bandwidth requirement, in bits per second, of a network connection and of the magnitude of fluctuations around this mean value. The quantity bi gives an indication of the duration of the transmission bursts generated by the source. For the same utilization, a large bi indicates that the source alternates between long burst and idle periods, while a small bi indicates that data is generated in short alternating burst and idle periods. Two sources with identical mean and peak bit rates but different burst periods have different impacts on the network; for example, a long burst will have a major impact on queuing points in the network.
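As a numerical illustration, with the Frame Relay example of the Background section (mi=300 kbps, Ri=2 Mbps):

```latex
\sigma_i^2 = m_i\,(R_i - m_i) = 3\times 10^{5}\,(2\times 10^{6} - 3\times 10^{5})
           = 5.1\times 10^{11}\ \mathrm{(bits/s)^2},
\qquad \sigma_i \approx 714\ \mathrm{kbps}.
```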
Equivalent Capacity:
The Equivalent Capacity ĉi of a network connection ci=(Ri, mi, bi) is defined as the minimum bandwidth needed on a link to support the connection, assuming that no other connections are using the link.
where:
In the case of a continuous bit stream connection, ρi=1 (mi=Ri), bi=∞, and ĉi=Ri.
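The classical single-connection equivalent-capacity expression from the published literature underlying this family of architectures, given here for reference under the usual notation (ρi=mi/Ri, ε the target loss probability, X the buffer size at the queuing point) and consistent with the continuous bit stream limit above, is:

```latex
\hat{c}_i \;=\; R_i\,
\frac{\,y_i - X + \sqrt{(y_i - X)^2 + 4\,X\,\rho_i\,y_i\,}\,}{2\,y_i},
\qquad
y_i = \ln\!\Big(\frac{1}{\varepsilon}\Big)\, b_i\,(1-\rho_i)\,R_i,
\qquad
\rho_i = \frac{m_i}{R_i}.
```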
Link Bandwidth Management:
A Link Metric vector is a triplet representing the aggregation of all the connections i traversing a link k (in NBBS, several link metrics are defined, corresponding to the different delay priorities, but as mentioned, the simplifying assumption of a single class of service per trunk is used here). Link Metric vectors are distributed to other nodes via Topology Database (TDB) update messages. The Equivalent Capacity Ĉk associated with the aggregation of the Nk connections established on the link k combines two characteristics of the traffic of the network:
where the index i runs over the Nk connections already transported on the trunk.
Relation (2) can be written:
Lk={Mk, Sk2, Ĉk(Nk)}
where:
with:
Equation (4) provides a reasonably accurate estimate of the capacity required to support a given set of network connections.
The first function (Mk+αSk) relies on a Gaussian approximation to characterize the aggregate bit rate of all connections routed over a link k. This model captures the stationary behavior of the aggregate bit rate and provides a good estimate for cases where many connections have long burst periods and relatively low utilization. In such cases, individual network connections often require close to their peak rate, while the stationary behavior of their aggregation indicates that much less is in fact needed. The Gaussian assumption, however, implies that the model may be inaccurate when used with a small number of high peak rate network connections.
The second function (the sum of the individual equivalent capacities obtained from equation (1)) captures the impact of source characteristics, in particular the duration of the burst period, on the required bandwidth. This results in substantial capacity savings when the duration of the burst period is small.
From equation (4), it is seen that the equivalent capacity can be easily updated as new connections are added or removed, provided that the total mean and variance of the bit rate and the sum of all individual equivalent capacities are kept.
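Consistent with the description of the two functions above, the link-level expression of equation (4) has the form of a minimum between the Gaussian (stationary) estimate and the sum of the individual equivalent capacities; the reconstruction below uses the multiplier α associated with the loss target ε as given in the published equivalent-capacity model:

```latex
\hat{C}_k(N_k) \;=\; \min\Big\{\, M_k + \alpha\, S_k\;,\; \sum_{i=1}^{N_k}\hat{c}_{k,i} \Big\},
\qquad
M_k=\sum_{i=1}^{N_k} m_i,\quad
S_k^2=\sum_{i=1}^{N_k}\sigma_i^2,\quad
\alpha=\sqrt{-2\ln\varepsilon-\ln 2\pi}.
```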
Path Selection:
Bandwidth request messages for routing new connections (Ri, mi, bi) and updating accordingly the Link Metric vectors, contain a request vector defined by:
ri=(mi,σi2,ĉi)
A Path Selection algorithm—a variant of the Bellman-Ford algorithm in a preferred embodiment—is then executed. The algorithm screens the network links which are defined in the Topology Database (TDB). For each link examined as a candidate for being part of the path, the Path Selection:
where:
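A simplified, hypothetical sketch of such a search is given below; function and variable names are illustrative, and the per-link test is reduced to a reservable-bandwidth check, which is only one of the verifications performed by the actual Path Selection.

```python
def select_path(links, origin, dest, c_hat_new):
    """Bellman-Ford-style search: minimize hop count, break ties on total link load.

    links: list of (node_a, node_b, reserved_bps, reservable_bps, load) tuples.
    c_hat_new: equivalent capacity requested for the new connection (bps).
    Returns the list of nodes of the chosen path, or None if no eligible path exists.
    """
    best = {origin: (0, 0.0, None)}                 # node -> (hops, cumulative load, predecessor)
    nodes = {origin, dest} | {a for a, *_ in links} | {b for _, b, *_ in links}
    for _ in range(len(nodes) - 1):                 # classic Bellman-Ford relaxation rounds
        for a, b, reserved, reservable, load in links:
            if c_hat_new > reservable - reserved:
                continue                            # link screened out: not enough bandwidth left
            for u, v in ((a, b), (b, a)):
                if u not in best:
                    continue
                hops, tot_load, _ = best[u]
                cand = (hops + 1, tot_load + load, u)
                if v not in best or cand[:2] < best[v][:2]:
                    best[v] = cand                  # fewer hops first, then lighter load
    if dest not in best:
        return None
    path, n = [dest], dest
    while n != origin:                              # walk the predecessors back to the origin
        n = best[n][2]
        path.append(n)
    return list(reversed(path))
```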
Once a path has been selected in the network, the Connection Agent (CA) in origin node prepares a connection set-up message and sends it over the path, with a copy to every Transit Connection Manager (TCM) in transit nodes and to the destination Connection Agent (CA). Among other information, the connection set-up message includes:
Upon receiving the connection set-up message, the Transit Connection Manager (TCM) of link k executes several verifications, including bandwidth management. The TCM:
The object of the present invention is to establish new connections in the network while taking into account the dependence of these connections on other connections originating from the same port. For simplicity, a single class of service is considered. It should be clear that the extension to multi-priorities is straightforward. The Dependent Connection Bandwidth Management process according to the present invention is based on:
In the Route Controller of each node, a set of tables is defined, one for each port of the node. In particular, for each port p, a Dependent Connection Table DCTp is defined, as shown in
DCTp(k)={1k, Mk, Bk, Ek(Nk)}
where:
1. The Equivalent Capacity Ek(Nk) of the aggregation of the Nk connections issued from port p and transported on link k is always less than or equal to the sum of the equivalent capacities of these Nk connections considered individually.
The equality occurs when a single connection (Nk=1) is issued from port p on link k.
2. The variance Vk2 of the bit rate of the aggregation of the Nk connections issued from this port p and transported on link k, defined by:

Vk2=Mk(R−Mk)  (14)

is always less than the sum of the variances of the bit rates of the Nk connections issued from this port p and transported on link k.
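The table and the two properties above can be sketched as follows. The structure and field names are illustrative; in particular, computing Ek(Nk) by treating the Nk dependent connections as a single source transmitting at the port access rate R with the aggregate mean Mk is an assumption consistent with the dependent connection property (only one logical connection can burst at a time), not a quotation of the exact rule of the architecture.

```python
from math import log, sqrt

def equivalent_capacity(R, m, b, X, eps):
    """Classical single-source equivalent capacity (see the Background example)."""
    rho = m / R
    y = log(1.0 / eps) * b * (1.0 - rho) * R
    return R * (y - X + sqrt((y - X) ** 2 + 4.0 * X * rho * y)) / (2.0 * y)

class DependentConnectionTable:
    """One entry per link k carrying connections issued from port p (illustrative sketch)."""

    def __init__(self, port_rate, burst, buffer_bits, eps):
        self.R = port_rate          # access rate R of port p (bps)
        self.b = burst              # burst duration of the shared source (s) -- assumption
        self.X = buffer_bits        # buffer size at the queuing point (bits)
        self.eps = eps              # target packet loss probability
        self.entries = {}           # link k -> {"N": ..., "M": ..., "E": ...}

    def add_connection(self, k, m_i):
        e = self.entries.setdefault(k, {"N": 0, "M": 0.0, "E": 0.0})
        e["N"] += 1
        e["M"] += m_i               # aggregate mean bit rate M_k
        # E_k(N_k): the aggregation treated as one dependent source at the port rate R.
        e["E"] = equivalent_capacity(self.R, e["M"], self.b, self.X, self.eps)
        return e

    def variance(self, k):
        M = self.entries[k]["M"]
        return M * (self.R - M)     # V_k^2 = M_k (R - M_k), equation (14)
```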
Path Selection:
The Dependent Connection Bandwidth Management (DCBM) modifies the Path Selection process as explained hereunder. For each link k examined as a candidate for being part of the path, the computations and verification take into account the entries in the Dependent Connection Table DCTp(k), and a boolean Idcbm initialized to “1” at the beginning of the path search:
As the result of each iteration, a link k_select is selected, and the boolean Idcbm is updated:
Idcbm=Idcbm AND Ik
The algorithm is executed for each link k if Idcbm AND Ik=1. The Path Selection process comprises the steps of:
where:
M′k=Mk+mi

V′k2=M′k(R−M′k)
Notes:
t1=C0−N×m and t2=α2·N·ΔVk2 are computed, where:
The bandwidth which would be reserved on the link after the new connection has been accepted on this link is:
Ĉk2=min{M′k+α·V′k, E′k(Nk+1)}
The link is able to handle the new connection if:
Ĉk2≦Ck0 (21)
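A hypothetical sketch of the resulting admission test, using the updated aggregates M′k and V′k and the min expression above (an illustration under the stated assumptions, not the exact computation of the architecture):

```python
def dcbm_admit(M_k, m_i, E_k_new, alpha, R_port, C_k0):
    """Return (admit, reserved) for a new dependent connection of mean rate m_i (bps).

    M_k:     aggregate mean bit rate already carried on link k for this port (bps)
    E_k_new: aggregate equivalent capacity including the new connection (bps) -- assumption
    alpha:   Gaussian multiplier associated with the loss target
    R_port:  access rate of the originating port (bps)
    C_k0:    bandwidth still available for reservation on link k (bps) -- assumption
    """
    M_new = M_k + m_i                             # updated aggregate mean M'_k
    V_new = (M_new * (R_port - M_new)) ** 0.5     # updated standard deviation V'_k (equation (14) form)
    reserved = min(M_new + alpha * V_new, E_k_new)
    return reserved <= C_k0, reserved
```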
If the link k is eligible, the link ability to support the new connection is estimated by the load balancing weight of the link:
where:
Among other information, the connection set-up message includes
Upon receiving the connection message, the Transit Connection Manager (TCM) associated to link k first tests the Boolean Ik.
As previously mentioned, the Dependent Connection Bandwidth Management (DCBM) algorithms are enabled thanks to a Boolean Ik defined for each link k and stored in the Topology Database (TDB) within each network node. This Boolean has a local value, which means that, for a given link k, it can take different values in different nodes. In fact, the parameter Ik is used to address “coincident path situations”.
Disconnected Trees:
1. The input to iteration N is the set of nodes located at a distance of N hops from the origin node.
2. The output of iteration N is the set of nodes located at a distance of (N+1) hops from the origin node, and the set of links (one link per node of the output set) that connect the input set to the output set.
The algorithm starts with the origin node A and looks at all the nodes connected to the origin node. For each of these nodes, the algorithm selects the link with the largest rate that connects it to a node of the input set (at the first iteration, the origin node A itself). For example:
1. At iteration 1 (see
2. At iteration 2, the set of nodes located at 2 hops from A is examined (nodes E, F, G, H). For node E, link E-B is preferred to link E-C because of its higher bandwidth. Similarly, for nodes F, G, and H, links F-C, G-D, and H-D are selected.
Once the trees have been built, each link on each tree is marked with a link parameter Ik=1, and the remaining links in the network are marked with Ik=0. The link parameter is then used in the DCBM algorithm. The link marking is represented by bold lines in
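The tree construction can be sketched as a breadth-first sweep over hop distance that keeps, for each newly reached node, the highest-rate link back toward the previous set, and then marks tree links with Ik=1. The sketch below is illustrative; names are hypothetical.

```python
def build_tree_marking(links, origin):
    """links: dict {(node_a, node_b): rate_bps}, undirected.

    Returns {link: I_k} with I_k = 1 for links on the tree rooted at origin, 0 otherwise.
    """
    adj = {}
    for (a, b), rate in links.items():             # adjacency list keeping link identity and rate
        adj.setdefault(a, []).append((b, (a, b), rate))
        adj.setdefault(b, []).append((a, (a, b), rate))

    marking = {link: 0 for link in links}
    current = {origin}                              # input set of iteration N: nodes at N hops
    reached = {origin}
    while current:
        best_link = {}                              # new node -> (rate, link) with the largest rate
        for n in current:
            for neigh, link, rate in adj.get(n, []):
                if neigh in reached:
                    continue
                if neigh not in best_link or rate > best_link[neigh][0]:
                    best_link[neigh] = (rate, link)
        for rate, link in best_link.values():
            marking[link] = 1                       # one link per node of the output set
        reached |= set(best_link)
        current = set(best_link)                    # output set becomes the next input set
    return marking
```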
Bandwidth Saving:
Referring back to the example illustrated in
1. The bandwidth manager according to prior art (NBBS), and
2. The Dependent Connection Bandwidth Manager (DCBM) according to the present invention.
This simple example shows that the increase in reserved bandwidth for establishing several dependent connections having the same characteristics decreases with the number of connections. The asymptotic behavior can be seen in
1. The bandwidth management algorithms according to prior art (NBBS), and
2. The Dependent Connection Bandwidth Management (DCBM) algorithms according to the present invention.
With bandwidth management according to the prior art (NBBS), the amount of reserved bandwidth grows linearly, while with the Dependent Connection Bandwidth Management (DCBM), the bandwidth reservation is bounded by the port speed. More generally, the gain achieved by the present invention over the prior art (NBBS) is a function of the connection metric. Let us assume that a port with access rate R carries N identical connections with mean rate m=R/N. If Cnbbs denotes the amount of bandwidth reserved by NBBS to carry the N connections on a single trunk, and Cdcbm denotes the amount of bandwidth reserved by the Dependent Connection Bandwidth Management (DCBM) to do the same job, the gain is defined by the ratio G=Cnbbs/Cdcbm.
Ring Topology:
A case that perfectly fits with the trees proposal is the ring topology described in
It is clear that for a ring topology, the tree is the ring itself, and that no coincident path exists. Therefore, the gain in bandwidth saving is maximum on all trunks. For example, let us consider Trunk 1, which carries three connections from the Host to Digital Terminal Equipments 1, 2, and 3 (DTE 1, DTE 2, and DTE 3). Assuming that all connections are defined with the same bandwidth reservation, say 700 kbps, then only 1173 kbps needs to be reserved on Trunk 1. The prior art would have required reserving 3×700=2100 kbps on Trunk 1.
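Under the same assumptions as the earlier sketch (classical equivalent-capacity formula, burst duration Bc/R with Bc=4 kbytes, 64 kbyte buffer, ε=5×10⁻⁸), treating the three dependent connections carried by Trunk 1 as a single source at the port rate gives a reservation of the order of magnitude quoted above; this is an illustration of the dependent connection property, not the exact DCBM computation.

```python
from math import log, sqrt

def equivalent_capacity(R, m, b, X, eps):
    """Classical single-source equivalent capacity (see the Background example)."""
    rho = m / R
    y = log(1.0 / eps) * b * (1.0 - rho) * R
    return R * (y - X + sqrt((y - X) ** 2 + 4.0 * X * rho * y)) / (2.0 * y)

R, Bc, X, eps = 2_000_000, 4_000 * 8, 64 * 1024 * 8, 5e-8
per_connection = equivalent_capacity(R, 300_000, Bc / R, X, eps)
aggregate = equivalent_capacity(R, 3 * 300_000, Bc / R, X, eps)     # three dependent connections

print(f"prior art, 3 independent connections: ~{3 * per_connection / 1e3:.0f} kbps")  # about 2.1 Mbps
print(f"aggregate of 3 dependent connections: ~{aggregate / 1e3:.0f} kbps")           # about 1.17 Mbps
```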
The object of the present invention is to optimally share a reserved bandwidth on a trunk between several connections issued from the same port. The Dependent Connection Bandwidth Management (DCBM) exploits the dependent connection property of virtual logical connections, which demonstrates that it is not necessary to reserve more bandwidth than the port access rate.
Numerical examples on partial ring topologies, as they exist in networks under deployment, have demonstrated that the claimed method and system can achieve significant bandwidth savings.
The Dependent Connection Bandwidth Management (DCBM) reduces the bandwidth required in the backbone network, while still guaranteeing an end-to-end quality-of-service. Pure statistical multiplexing techniques can result in even less bandwidth in the backbone network; however, the quality-of-service is no longer guaranteed. Therefore, the Dependent Connection Bandwidth Management (DCBM) should be considered as a complementary extension to the bandwidth management according to the prior art (NBBS), one which reduces the bandwidth requirement close to that of a pure statistical multiplexing solution while still maintaining the quality-of-service.
The present invention is not limited to a specific protocol such as Frame Relay (FR) but can be used to improve any service offering that uses a shared access medium, like ATM.
Foreign Application Priority Data: 97480094, Dec. 1997, EP (regional).
This application is a continuation of U.S. patent application Ser. No. 09/097,131, filed on Jun. 12, 1998, now U.S. Pat. No. 6,647,008, issued on Nov. 11, 2003.
Related U.S. Application Data: parent application Ser. No. 09/097,131, filed Jun. 1998 (US); child application Ser. No. 10/348,301 (US).