Call Admission Control Method

Information

  • Publication Number
    20090245129
  • Date Filed
    May 12, 2006
  • Date Published
    October 01, 2009
Abstract
A method and arrangement for controlling call admissions to a path in a packet-switched network. A restriction factor is determined on the basis of a load condition on the path as defined by indicators such as jitter and packet loss. The restriction factor is adjusted based on a previous load condition and a current load condition. By determining the load primarily based on measured jitter, the arrangement reacts to congestion proactively before experiencing packet loss. The arrangement may be implemented entirely in an edge node of the network, and thus does not require additional mechanisms within the network such as signaling and resource reservation.
Description
FIELD OF INVENTION

The present invention relates to the admission control of voice and multimedia calls onto packet-switched internet protocol (IP) networks in order to maintain call quality.


BACKGROUND ART

When real-time traffic such as voice or multimedia traffic is carried over a packet-switched core network, such as an internet protocol (IP) based network or a pure L2 Ethernet, some provision must be made to provide the necessary quality of service. For transmission over packet-switched IP networks, voice or multimedia data must be broken up into discrete packets, which may travel over different network paths to the final destination before being reassembled in the correct sequence. The transmission speed between any two end points can vary enormously depending on the amount and type of traffic carried at any one time by the network, as well as the network design and capabilities. Previously, IP networks could offer only a “best effort” quality of service, where no differentiation was made between traffic types within a network element and packets were routinely dropped in the event of congestion.


More recently, IP networks have begun to offer some quality of service guarantees. The Differentiated Services scheme (DiffServ) proposed by the Internet Engineering Task Force (IETF) enables traffic to be separated according to quality of service, and by using a marker in the header of each IP packet allows network routers to apply differentiated grades of service to various traffic streams. However, this concept cannot guarantee that resources are available throughout the network. In particular, while edge nodes to the core network are able to control the number of calls admitted to the network, congestion can still occur on bottleneck links in the uncontrolled core of the network.


This last problem can be alleviated by using reservation based protocols such as the Resource Reservation Protocol (RSVP) proposed in conjunction with the IETF Internet Integrated Services framework (IntServ). RSVP enables connections or resources on the Internet to be reserved in the nodes or routers along the transmission path. Specifically, a resource reservation message is issued by an edge node on receipt of a call establishment message. This message travels through the core of the network, and each router along the path examines the request and reserves the necessary resources. If resource reservation is successful and the issuing edge node receives an acknowledgement back, the call establishment proceeds towards the remote edge node. However, it is clear that this mechanism requires per-flow states to be installed in each core network router and hence greatly increases the required complexity of all network nodes, not only at the edge of the network but also in the network core. In large networks, the overheads due to signalling and internal data processing can become unacceptable. Moreover, the time required for the resource reservation message to travel back and forth in the core network can significantly delay the call establishment message.


A third form of admission control utilises transmission quality information obtained for specific links within a packet switched internet protocol (IP) network to decide whether a call should proceed across this link.


EP-A-0 999 674 describes a mechanism wherein bandwidth is allocated to specific classes of traffic to ensure the required quality of service. The available bandwidth for a path is monitored, and for each new voice or other delay-sensitive call received, a signalling gateway determines whether the remaining bandwidth is sufficient to permit the call to proceed. While this method offers good quality of service for delay sensitive traffic, the ongoing bandwidth monitoring for specific paths and specific traffic classes involves a high level of data processing and signalling, which causes delays. Moreover, the allocation of bandwidth to specific traffic classes may also lead to the under-utilisation of a link.


WO 99/66682 describes an arrangement for controlling the connection of telephone calls over an IP network or over an alternative network. Quality of service statistics obtained from a network monitor are consulted using the destination of a call to be routed over the IP network. These statistics are obtained using test packets routed to the same destination. If these statistics match those desired for the call, the call is allowed to proceed over the IP network, otherwise it is routed over an alternative network. The transmission of a test packet delays call establishment because the test packet must travel through the network and back again before the decision to admit the call can be made. Moreover, since such test packets are typically marked by congested routers along the path, each core network router needs to be aware of the mechanism and be adapted to act accordingly.


EP-A-1 168 755 describes a call control mechanism that uses a monitoring mechanism to measure statistics for specific performance indicators over a link, such as packet loss and jitter. The monitoring mechanism proposed is the real time control protocol (RTCP), which is used in conjunction with the real time protocol (RTP) for carrying real time traffic over an IP network. The statistics are obtained for a number of ongoing calls and averaged. When an incoming call is received, the obtained statistics relating to one or a combination of performance indicators are compared with a threshold level, and the call is routed over the IP network only when the measured performance indicators are below the threshold level. This approach leads to a fast response time, but this comes at the expense of instability, as the utilisation of the controlled link will oscillate heavily. Specifically, when a link enters the blocked state all calls are transferred to alternative unrestricted links. When the link is no longer congested, all incoming calls are again accepted until the link quality becomes unacceptable.


In the light of the prior art systems described above it is an object of the present invention to provide a method and arrangement of call control that alleviates the above problems.


It is a further object of the present invention to provide a method and arrangement of call control that is capable of reacting rapidly to any change in the congestion of a link while remaining stable.


It is a still further object of the present invention to provide a method and arrangement of call control that offers a good utilisation of the network even in a situation of high congestion, that provides acceptable voice quality even on bottleneck links and that can be implemented entirely in an edge node of the network, i.e. it does not require additional algorithms or mechanisms within the core network, such as signalling and resource reservation.


SUMMARY OF THE INVENTION

The above objects are achieved in a method and arrangement as defined in the appended claims.


More specifically, the invention resides in a method of controlling the admission of calls onto at least one path of a packet-switched network. The method includes the following steps: applying a restriction factor to calls using said path, the restriction factor restricting the number of new calls permitted to utilise the path to a first predetermined level and having a range of at least three possible values, and being set on the basis of a first level of traffic load on said path; measuring transmission performance indicators for ongoing calls on the path to determine a current level of traffic load on the path; determining an updated restriction factor on the basis of the determined current traffic load level and applying the updated restriction factor to calls using said path, this updated restriction factor restricting the number of new calls permitted to utilise said path to a second predetermined level.


The invention further resides in an arrangement for controlling the admission of calls onto at least one path in a packet-switched network. This arrangement includes a load management processor adapted to assign a restriction factor to calls using the path, this restriction factor being related to a first transmission load level on the path and having a range of at least three possible values, a call control processor adapted to restrict the number of new calls permitted to utilise the path to a predetermined level defined by the restriction factor; and a data measurement module that is in communication with the load management processor and is adapted to measure transmission performance indicators for ongoing calls. The load management processor is also adapted to determine a current transmission load level on said path on the basis of the measured transmission performance indicators and to update the restriction factor for said path on the basis of the current transmission load level.


By applying a restriction factor that has at least three possible values to call admission, arriving calls are not simply blocked when a link becomes congested, but instead only a fraction can be blocked according to the restriction factor value, which enables the traffic to be reduced or increased in a manner proportionate with the current load condition.


In accordance with a preferred aspect of the invention, a method of controlling the admission of calls onto a path in a packet-switched network is proposed which includes the steps of: applying a restriction factor to calls using the path, the restriction factor restricting the number of new calls permitted to utilise the path to a first predetermined level, and being set on the basis of a first level of traffic load on the path; measuring transmission performance indicators for ongoing calls on the path to determine a current level of traffic load on the path; determining an updated restriction factor using both the determined current traffic load level and the first traffic load level and applying this updated restriction factor to calls using the path, this updated restriction factor restricting the number of new calls permitted to utilise said path to a second predetermined level.


Similarly, an arrangement for controlling the admission of calls onto a path in a packet-switched network is proposed, which includes: a load management processor adapted to assign a restriction factor to calls using the path, the restriction factor being related to a first transmission load level on the path, a call control processor adapted to restrict the number of new calls permitted to utilise the path to a predetermined level defined by the restriction factor; and a data measurement module that is in communication with the load management processor and that measures transmission performance indicators for ongoing calls. The load management processor is further adapted to determine a current transmission load level on the path on the basis of the measured transmission performance indicators and to update the restriction factor assigned to said path on the basis of both the current transmission load level and the first transmission load level on said path.


Adapting the restriction factor while taking account of both the current and the first or previous load condition, as determined by the performance indicators, allows any change in call admission to be implemented at a level that best suits the rate of change of congestion. Accordingly, a dramatic change from a very low load condition to a very high load condition will result in a different updated restriction factor from a change from a moderate load condition to a very high load condition. The system is thus inherently stable, yet able to react rapidly to changes in load on a network path while optimising the utilisation of the path.


Preferably, the load on a path is classified into load categories on the basis of the measured transmission performance indicators. Hence each load level is expressed as a category rather than a single performance indicator value. Essentially, the useful range of performance indicators is divided into sub-ranges, each sub-range corresponding to a single load category. The restriction factor adjustment is thus dependent on the difference between a current load category and a load category applied just previously.


The adjustment of the restriction factor is usefully performed with the aid of a table that contains restriction factor adjustment values, each of which is addressable with the current load level or category and the previous load level or category.


In the preferred embodiment, the performance indicators used are jitter and packet loss.


In accordance with a still further aspect of the invention, a further method of controlling the admission of calls onto a path in a packet-switched network is proposed, which includes the steps of: restricting the number of new calls permitted to utilise the path to a first predetermined level on the basis of a level of traffic load on the path; measuring jitter and packet loss for ongoing calls on the path to determine a current level of traffic load on the path; determining whether packet loss is below a predefined level and ascertaining a current level of traffic load on the path on the basis of jitter alone when packet loss is below said predefined level.


Similarly, an arrangement for controlling the admission of calls onto at least one path in a packet-switched network is proposed, which includes: a load management processor adapted to determine a transmission load level on the path, a call control processor adapted to restrict the number of new calls permitted to utilise the path on the basis of the determined transmission load level; and a data measurement module in communication with the load management processor for measuring jitter and packet loss for ongoing calls. The load management processor is adapted to ascertain whether packet loss is below a predefined level, and to determine a current transmission load level on the path on the basis of the measured jitter alone when packet loss is below the predefined level.


Jitter has been found to be an effective early indicator of traffic load before packet loss becomes significant. Accordingly, utilising jitter measurements to determine the traffic load and control the call admission at these early stages of path congestion allows the system to respond rapidly and proactively to any change in the load condition before a significant degradation of call quality can be perceived.
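
The following is a minimal sketch of this jitter-primary load determination. The 0.1% packet-loss threshold is taken from the preferred value mentioned later in the description, while the function name and the jitter-to-load scaling are illustrative assumptions, not values from the patent.

```python
PACKET_LOSS_THRESHOLD_PCT = 0.1  # "preferably below 0.1%" per the description
JITTER_FULL_SCALE_MS = 50.0      # assumed full-scale jitter value (illustrative)

def estimate_load(avg_jitter_ms: float, avg_loss_pct: float) -> float:
    """Return a rough load estimate in [0, 1] for a supervised path."""
    if avg_loss_pct < PACKET_LOSS_THRESHOLD_PCT:
        # Early-warning regime: packet loss is negligible, so jitter alone
        # indicates the traffic load on the path.
        return min(avg_jitter_ms / JITTER_FULL_SCALE_MS, 1.0)
    # Once loss becomes significant, treat the path as heavily loaded.
    return 1.0
```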





BRIEF DESCRIPTION OF THE DRAWINGS

Further objects and advantages of the present invention will become apparent from the following description of the preferred embodiments that are given by way of example with reference to the accompanying drawings. In the figures:



FIG. 1 schematically illustrates an internet telecommunications network for transporting voice and multimedia applications,



FIG. 2 schematically illustrates the relationship between functional elements of the media gateways of FIG. 1,



FIG. 3 is a graph showing the relationship between jitter and traffic load on a link over an IP network,



FIG. 4 is a table defining load categories based on jitter and packet loss measurements,



FIG. 5 is a table defining restriction factors for specific paths,



FIG. 6 is a flow chart of an algorithm for adjusting the restriction factor, and



FIG. 7 is a table of restriction factor adjustment values.





DETAILED DESCRIPTION OF THE DRAWINGS


FIG. 1 shows an exemplary arrangement for providing multimedia communication over an internet protocol (IP) or other packet switched network. In the illustrated arrangement a plurality of subscribers 301 are connected to a first access network 30, which provides access to an IP core network 50 using an IP media gateway 10, which acts as an IP network edge node. Another IP media gateway 20 connects a second access network 40 to the IP core network 50. This second access network 40 also is connected to a plurality of subscribers 401. It will be understood that many other access networks may be connected to the IP core network via similar gateways or via alternative edge nodes. The subscribers 301 and 401 may include fixed line telephones, cellular telephones, PCs or other multimedia equipment. The arrangement, and specifically the media gateways 10, 20, are capable of handling delay sensitive traffic, such as voice and video as well as non-delay sensitive data. In order to be able to handle delay sensitive traffic effectively, the system operates a differentiated services scheme (DiffServ) to enable such traffic to be identified and handled in a separate manner from non-delay-sensitive traffic. The differentiated services scheme is described in S. Blake et al., “An Architecture for Differentiated Services”, RFC 2475, December 1998. Such traffic is typically carried using the real time protocol RTP with the added mechanism provided by the real time control protocol (RTCP). RTP and the RTCP mechanism are described in H. Schulzrinne et al., “RTP: A transport protocol for real-time applications”, RFC 1889, January 1996.


Turning now to FIG. 2 there are shown various functional elements of the IP media gateway 10 of FIG. 1. The figure does not illustrate all elements of such an IP media gateway, but simply shows those elements relevant to the present invention. Moreover, the figure is not intended to provide a representation of the fixed structural relationship between these elements nor to limit the manner in which these functions can be implemented. The depiction of the various elements as separate entities is simply to facilitate the explanation of the invention. It will be understood by those skilled in the art that the various functions may be combined in a different manner or further separated without changing the nature of the operation.


In accordance with the present invention, the system illustrated in FIG. 2 takes various quality-related measurements on ongoing active calls and uses these measurements to influence future routing decisions. More specifically, if an improvement or degradation of the load condition of a specific link is detected, based on measurements taken on ongoing calls using this link in conjunction with previous load condition data, the number of new calls allowed to proceed over the link is restricted to a greater or lesser degree depending on the change in traffic load. In achieving this operation, two elements are central to the handling and routing of delay sensitive calls over the IP network 50. These elements are a load management processing module 101 and a call control module 106. These elements are depicted as separate and distinct elements; however, this is not intended to limit the implementation of these functions. The processing function of these elements may well be performed in the same central processing device with access to a storage medium.


The load management processing module 101 determines the load on each path utilised by delay sensitive traffic over the IP core network 50 and allocates a restriction to be applied to each path. On receipt of an incoming call, the call control processing module 106 fetches the relevant restriction factor allocated to the path required by the call and makes a decision on whether to allow call establishment to proceed based on this restriction factor.


In the preferred embodiment the restriction applied to the supervised paths is a percentage restriction, whereby n out of every 100 new calls over the link are permitted to proceed. This is expressed by a restriction factor r, which has a value from 0 up to and including 100, such that the percentage of new calls permitted to proceed on a specific path is 100−r.


Other forms of restriction are also possible. These include restrictions on the traffic level or on the traffic rate. Under traffic level restriction, the traffic level on the supervised path will only be allowed to reach a certain level based on the measured load. Under traffic rate restriction, the number of calls that may be set up within a certain period is limited to a specific value based on the measured load condition.


Turning again to FIG. 2, the load management processing module 101 is in communication with a data measurement module 102. The data measurement module collects data relating to the quality of service (QoS) of ongoing calls. It will be understood that this module may also be combined in a single processing device together with the load management processing module 101. The QoS data is separated into specific paths, identified by the source and destination IP application addresses used for the calls. More specifically, the path is determined using the IP addresses of interfaces in the edge devices 10, 20 of the core network 50. The QoS data is obtained using the real time control protocol RTCP, which enables the gathering of statistics on various quality or load indicators, such as jitter and packet loss, both for an RTP session as a whole and on an individual call basis. For the present invention, the load indicators used are jitter and packet loss. Jitter is the small or random variation in the timing of a transmitted signal. It is expressed in units of time and represents the difference between the spacing of packets at the sending node and the spacing of packets at the receiving node. Packet loss data is derived from packet sequence numbers. It is expressed as a percentage value. Data relating to jitter and packet loss is derived from RTP counters. The data measurement module 102 samples jitter and packet loss data on all or a proportion of ongoing calls on supervised paths over a certain measurement period. At the end of this measurement period the data is sent to the load management processing module 101. Each measurement period lasts for around 5 seconds, but may be longer or shorter depending on the load on the network and the processing delay that can be tolerated. Using the measurement data received, the load management processing module 101 classifies the monitored path according to specific load categories using information from the load category data module 103. This is illustrated in FIGS. 3 and 4.
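
The per-period collection and averaging step might be sketched as follows. The sample format, function name and grouping by (local, remote) address pair are illustrative assumptions; only the roughly 5-second measurement period is taken from the description.

```python
from collections import defaultdict
from statistics import mean

MEASUREMENT_PERIOD_S = 5  # "around 5 seconds" per the description

def summarise_period(samples):
    """Average per-call QoS samples per supervised path for one period.

    samples: iterable of (local_ip, remote_ip, jitter_ms, packet_loss_pct)
    tuples taken from ongoing calls (e.g. via RTCP reports).
    Returns {(local_ip, remote_ip): (avg_jitter_ms, avg_loss_pct)}.
    """
    per_path = defaultdict(list)
    for local_ip, remote_ip, jitter_ms, loss_pct in samples:
        per_path[(local_ip, remote_ip)].append((jitter_ms, loss_pct))
    return {
        path: (mean(j for j, _ in vals), mean(l for _, l in vals))
        for path, vals in per_path.items()
    }
```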


It is well known that packet loss has a severe effect on the perceived quality of a voice call. Jitter, on the other hand, while it can result in a loss of synchronisation at high levels, does not affect the voice quality of a call. However, it has been found that jitter is nevertheless an early indicator of degradation in the transmission quality of a path as a result of increased traffic load. This is illustrated in the graph of FIG. 3. This graph is a plot of jitter and packet loss against the percentage offered load on a link in an IP network. Jitter measurement has some deviation, so the graph shows the maximum and minimum values as well as the average jitter levels. It can be seen from the graph that jitter levels increase significantly well before 100% load is reached on the link. Packet loss, on the other hand, is very low up to this point and then suddenly increases. Jitter can thus provide an effective early warning of impending path quality degradation, enabling the call control system to operate proactively before significant packet loss occurs and call quality deteriorates. More specifically, when the percentage packet loss is below a predetermined level, namely below 0.5%, and preferably below 0.1%, jitter can be used alone to determine the load conditions on the monitored path. Moreover, simulations have demonstrated that jitter is essentially independent of node parameters, at least for buffer sizes of 64 kB (around 500 packets, depending on packet length) and over.


Turning again to FIG. 3, the load has been divided into six categories, starting from “none”, through “slight”, “acc” (acceptable), “mod” (moderate) and “bad”, to “faulty”. The lower load categories from “none” to “moderate” are defined primarily by the jitter levels because the packet loss levels are essentially unchanged or negligible at these levels of load. Above this load level, jitter is no longer reliable as a sole indicator and packet loss is used to define the two highest load categories. The table in FIG. 4 shows the classification of a path into a specific load category on the basis of jitter and packet loss levels. The entry N/A indicates that the data is not applicable for this category of load. As already mentioned, all measurements are performed on a number of ongoing calls and the results averaged. In addition, it is possible to determine the load category according to jitter and packet loss measurements separately and, if they differ, to select the worst category.
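
A sketch of this FIG. 4 style classification is given below. The six category names come from the description, but the numeric jitter and loss boundaries are invented for illustration, since the actual table values are not reproduced here.

```python
LOAD_CATEGORIES = ["none", "slight", "acc", "mod", "bad", "faulty"]

def classify_load(avg_jitter_ms: float, avg_loss_pct: float) -> str:
    """Map averaged jitter/loss measurements for a path to a load category."""
    if avg_loss_pct < 0.1:
        # Low-load regime: packet loss is negligible, categories follow jitter.
        if avg_jitter_ms < 5:
            return "none"
        if avg_jitter_ms < 10:
            return "slight"
        if avg_jitter_ms < 20:
            return "acc"
        return "mod"
    # High-load regime: packet loss defines the two worst categories.
    return "bad" if avg_loss_pct < 1.0 else "faulty"
```

Where jitter and packet loss are classified separately, the worse of the two resulting categories would be selected, as the description permits.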


The load category data module 103 of FIG. 2 contains the information of the table in FIG. 4. The load management processing module 101 uses this load category data to classify the supervised paths on the basis of the measured data relating to packet loss and jitter received. Using this load category, the load management processing module 101 determines the restriction factor, r, applicable to the supervised path and updates the current values of restriction factor and load category in a path restriction table 105. This table is illustrated in FIG. 5. This table provides a mapping between the remote and local IP application addresses, which define specific paths, and the current load category and restriction factor. The IP application address information may be the complete address or address prefixes that can be defined by operators. This information could also be grouped according to source and destination edge nodes in the IP core network.
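
The path restriction table of FIG. 5 can be pictured as a mapping keyed by the local and remote IP application addresses (or address prefixes). The data structure, field names and sample entries below are purely illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class PathEntry:
    load_category: str        # "none" ... "faulty"
    restriction_factor: int   # r in [0, 100]; 0 means no restriction

# Keyed by (local IP application address, remote IP application address).
path_restriction_table = {
    ("192.0.2.10", "198.51.100.20"): PathEntry("acc", 10),
    ("192.0.2.10", "203.0.113.30"): PathEntry("none", 0),
}
```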


The call control processing module 106 uses the current restriction factor applied to a specific path in the IP core network to decide whether an incoming call should be allowed to proceed. More specifically, when an incoming call that requires routing over the IP core network is received by the call control processing module 106, this module 106 accesses the path restriction table 105 to retrieve the current restriction factor applicable to the link defined by the local and remote IP application addresses for the received call. This restriction factor is then used to determine a percentage probability that the call will be allowed to proceed using the relationship (100−r)%. For example, if a path has a restriction factor of 20, the call would have an 80% chance of being processed. The decision on whether to allow the call to proceed may be made by the call control processing module 106 simply by generating a random number between 1 and 100 and granting the call establishment if the generated number is over 20, for example.
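
A minimal sketch of this admission decision, using the (100−r)% relationship and a uniform random draw (the function name is an assumption):

```python
import random

def admit_call(restriction_factor: int) -> bool:
    """Admit an incoming call with probability (100 - r)%.

    With r = 20, roughly 80 out of every 100 new calls are allowed to proceed;
    r = 0 admits all calls and r = 100 blocks all calls on the path.
    """
    return random.randint(1, 100) > restriction_factor
```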


The mechanism for determining the restriction factor is illustrated in FIGS. 6 and 7. As mentioned earlier, a current restriction factor is determined on the basis of the last obtained load indicators for the link, i.e. the jitter and/or packet loss, but also on the basis of the last applied load category.


Accordingly, a restriction factor is adjusted based on a new load category and the last applied load category. The adjustment values ADJ for the restriction factor are given in the table of FIG. 7. The columns in FIG. 7 relate to the new load category and the rows relate to the previous or old load category. The adjustment values ADJ in the table are added to the last applied (old) restriction factor to obtain a new restriction factor. It can be seen from the table that the adjustment values ADJ in the diagonal from top left to bottom right are small. These relate to the cases where the load category is unchanged. However, only when the load categories “acceptable” and “moderate” are sustained is the restriction factor also unchanged. This allows the operator to encourage the utilisation of a link by increasing the number of calls that may be carried by the link while the load conditions remain low. It will be understood that the choice of adjustment values in the table of FIG. 7 permits the operator to configure a large or small restriction change in dependence on the actual load condition of the path. In particular, the degree of change in restriction factor can be made dependent on the degree of change of the load condition and not simply on the absolute load condition. In other words, a load condition change from “slight” to “faulty” can generate a far greater adjustment in the restriction factor than a change from “bad” to “faulty”. This allows the system to respond rapidly to sudden changes in the call arrival rate while maintaining an optimal level of link utilisation and while remaining stable.



FIG. 6 is a flow diagram representing the process of adjusting the restriction factor for each specific path. This process starts at step 500 with the load management processing module 101 fetching the old restriction factor applied to the specific path from the path restriction table 105 (FIG. 5). At step 501, the load management processing module 101 fetches or receives the new measurement data relating to jitter and packet loss over the selected path from the data measurement module 102. At step 502, the load management processing module 101 determines the new load category from the measurement data using the data in table 103 (FIG. 4), and then selects the corresponding column of the restriction factor adjustment table 104 (FIG. 7). At step 503, the load management processing module 101 fetches the old load category from the path restriction table 105 and uses this to select the appropriate row of the restriction factor adjustment table 104 (FIG. 7) to obtain the adjustment value ADJ. At step 504 it is ascertained whether the new restriction factor will be within the allowable boundaries of ≧0 and ≦100. If this is true, the new restriction factor is calculated at step 505 by adding the adjustment value to the old restriction factor, i.e. r=r+ADJ. Otherwise, the process goes to step 506 and the restriction factor is set at the appropriate boundary value, namely to 0 if r+ADJ<0 or to 100 if r+ADJ>100.
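
The adjustment loop of FIG. 6 might be sketched as below. The ADJ values are placeholders generated from the distance between categories, since the actual FIG. 7 table is not reproduced here; a real configuration would be operator-tuned, for example with small negative diagonal values that relax the restriction while the load stays low.

```python
CATEGORIES = ["none", "slight", "acc", "mod", "bad", "faulty"]

# ADJ[old][new]: placeholder adjustment values indexed by old and new load
# category, standing in for the operator-configured table of FIG. 7.
# Positive values tighten the restriction, negative values relax it.
ADJ = {
    old: {new: (CATEGORIES.index(new) - CATEGORIES.index(old)) * 10
          for new in CATEGORIES}
    for old in CATEGORIES
}

def update_restriction(old_r: int, old_category: str, new_category: str) -> int:
    """Steps 500-506: apply the table adjustment and clamp r to [0, 100]."""
    adj = ADJ[old_category][new_category]   # steps 502-503: table lookup
    return max(0, min(100, old_r + adj))    # steps 504-506: boundary handling
```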


The restriction factor adjustment algorithm illustrated in FIG. 6 is repeated after each measurement period for each supervised path.


When a path is used for the first time, and no previous measurements, load category or restriction factor is available, it is assigned a default restriction factor of “0” permitting 100% of incoming calls to be routed over the path. In addition, the load category assigned to the path is “none”. This path as defined by the local and remote IP application addresses is also entered in the path restriction table 105 for use in the next measurement period.
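
First use of a path could be handled as in the short sketch below, seeding the table entry with restriction factor 0 and load category “none”. The helper name is an assumption, and a plain dictionary entry is used here for brevity.

```python
def ensure_path_entry(table, local_ip, remote_ip):
    """Return the entry for the path, creating the default entry if absent."""
    return table.setdefault(
        (local_ip, remote_ip),
        {"load_category": "none", "restriction_factor": 0},
    )
```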


The various tables described with reference to FIG. 2 have been described as separate elements; however, these tables could well be located on the same storage medium accessible by a central processing device.


In the above description, six load categories are defined. It will be understood, however, that more or fewer categories may be used depending on the available processing capability and reaction requirements of the system.

Claims
  • 1. A method of controlling the admission of calls onto at least one path in a packet-switched network, said method including the steps of: applying a restriction factor to calls using said path, said restriction factor having a range of at least three possible values and restricting the number of new calls permitted to utilize said path to a first predetermined level, and being set on the basis of a first level of traffic load on said path; measuring transmission performance indicators for ongoing calls on said path to determine a current level of traffic load on said path; determining an updated restriction factor on the basis of said determined current traffic load level; and applying said updated restriction factor to calls using said path, said updated restriction factor restricting the number of new calls permitted to utilize said path to a second predetermined level.
  • 2. The method as claimed in claim 1, wherein said step of obtaining an updated restriction factor includes determining said updated restriction factor using both said determined current traffic load level and said first traffic load level.
  • 3. (canceled)
  • 4. The method as claimed in claim 1, wherein the step of measuring transmission performance indicators for ongoing calls on said path includes measuring said transmission performance indicators for ongoing calls for a predetermined time period and determining said current traffic load level at the end of said predetermined time period.
  • 5. The method as claimed in claim 1, wherein said step of determining an updated restriction factor further includes the step of adjusting said restriction factor by an adjustment value to obtain said updated restriction factor, said adjustment value depending on said first traffic load level and said current traffic load level.
  • 6. The method as claimed in claim 5, wherein said step of adjusting said restriction factor includes: utilizing a table containing restriction factor adjustment values, each value in said table being addressable with said first traffic load level and said current traffic load level; and adding said adjustment value to said restriction factor to obtain said updated restriction factor.
  • 7. The method as claimed in claim 1, wherein said calls carry delay- and/or loss-sensitive voice or multimedia data.
  • 8. The method as claimed in claim 1, wherein jitter is used as a transmission performance indicator.
  • 9. The method as claimed in claim 1, wherein the packet loss is used as a transmission performance indicator.
  • 10. The method as claimed in claim 1, wherein jitter and packet loss are used as transmission performance indicators and wherein jitter is used to determine the traffic load level when packet loss is below a predetermined level.
  • 11. The method as claimed in claim 1, wherein said traffic load level is expressed as one of a plurality of load categories, each load category being defined by a range of transmission performance indicator values and indicating a traffic load condition.
  • 12. The method as claimed in claim 11, wherein the load category of said path is determined separately using jitter and packet loss measurements, and the worst load category selected as the load category.
  • 13. The method as claimed in claim 1, wherein said restriction factor is a percentage restriction factor, which determines the percentage of calls permitted to use the transmission path.
  • 14. The method as claimed in claim 13, wherein the restriction factor r has a value from 0 up to 100 and wherein an average of 100−r new incoming calls out of 100 are permitted to use the monitored path.
  • 15. The method as claimed in claim 1, wherein said restriction factor is a traffic level restriction factor, which determines the maximum number of calls permitted to utilize the monitored path at any one time.
  • 16. The method as claimed in claim 1, wherein the restriction factor is a maximum traffic level increment restriction factor, which determines the number of calls that may be set up in a certain time period.
  • 17. An arrangement for controlling the admission of calls onto at least one path in a packet-switched network, said arrangement including: a load management processor for assigning a restriction factor to calls using said path, said restriction factor being related to a first transmission load level on said path and having a range of at least three possible values; a call control processor for restricting the number of new calls permitted to utilize said path to a predetermined level defined by said restriction factor; and a data measurement module in communication with said load management processor for measuring transmission performance indicators for ongoing calls; wherein said load management processor includes means for determining a current transmission load level on said path on the basis of said measured transmission performance indicators and for updating the restriction factor for said path on the basis of said current transmission load level.
  • 18. The arrangement as claimed in claim 17, wherein said load management processor includes means for updating the restriction factor to said path on the basis of both said current transmission load level and said first transmission load level on said path.
  • 19. (canceled)
  • 20. The arrangement as claimed in claim 17, further including a first storage medium in communication with said load management processor, said first storage medium containing data relating to said restriction factor; wherein the load management processor includes means for accessing the first storage medium using said current transmission load level and said first transmission load level, and for generating the updated restriction factor using data contained in said storage medium.
  • 21. The arrangement as claimed in claim 20, wherein said first storage medium comprises a table containing restriction factor adjustment values, said table being accessible by said load management processor; and wherein the load management processor includes means for addressing the table using said current load category and said previous load category.
  • 22. The arrangement as claimed in claim 20, further including a second storage medium for storing a table containing restriction factor values for each supervised path, said path restriction table being accessible by said load management processor and said call control processor; wherein said load management processor includes means for updating the path restriction table by replacing said restriction factor value with said updated restriction factor value; and wherein said call control processor includes means for consulting the path restriction table upon receipt of an incoming call to obtain the restriction factor for the path required by said call.
  • 23. The arrangement as claimed in claim 22, wherein said load management processor also includes means for writing a current transmission load level for each path in said path restriction table.
  • 24. The arrangement as claimed in claim 22, wherein said first and second storage mediums are comprised in a single storage device.
  • 25. The arrangement as claimed in claim 17, wherein said load management processor and said call control processor are formed by a single processing device.
  • 26. The arrangement as claimed in claim 25, wherein said data measurement module is comprised within said single processing device.
  • 27. The arrangement as claimed in claim 17, wherein a transmission load level is expressed as one of a plurality of load categories, wherein each load category indicates a load condition on said path and is defined by a range of transmission performance indicator values.
  • 28. A method of controlling the admission of calls onto a path in a packet-switched network, said method including the steps of: restricting the number of new calls permitted to utilize said path to a first predetermined level on the basis of a level of traffic load on said path; measuring jitter and packet loss for ongoing calls on said path to determine a current level of traffic load on said path, wherein when packet loss is below a predefined level, the current level of traffic load on the path is determined on the basis of jitter alone; applying a restriction factor to calls using the path, said restriction factor restricting the number of new calls permitted to utilize the path to a first predetermined level, and being set on the basis of the determined current level of traffic load on the path; and determining an updated restriction factor on the basis of the determined current traffic load level and applying the updated restriction factor to calls using the path, said updated restriction factor restricting the number of new calls permitted to utilize the path to a second predetermined level.
  • 29. (canceled)
  • 30. The method as claimed in claim 28, wherein said traffic load level is expressed as one of a plurality of load categories, wherein each load category is defined by a range of jitter or packet loss measurement values and defines a traffic load condition on said path, and wherein at least one load category is defined by a range of jitter measurement values alone.
  • 31. An arrangement for controlling the admission of calls onto at least one path in a packet-switched network, said arrangement including: a load management processor for determining a transmission load level on said path; a call control processor for restricting the number of new calls permitted to utilize said path on the basis of said determined transmission load level; and a data measurement module in communication with said load management processor for measuring jitter and packet loss for ongoing calls; wherein said load management processor includes means for ascertaining whether packet loss is below a predefined level, and for determining a current transmission load level on said path on the basis of said measured jitter alone when packet loss is below said predefined level; wherein said load management processor also includes means for assigning a restriction factor to calls using the path, said restriction factor being related to the determined transmission load level on the path; and wherein the call control processor also includes means for restricting the number of new calls permitted to utilize the path to a predetermined level defined by the restriction factor.
  • 32. (canceled)
  • 33. The arrangement as claimed in claim 31, wherein said load management processor also includes means for assigning an updated restriction factor to said path on the basis of said current transmission load level.
PCT Information
Filing Document: PCT/EP2006/004486
Filing Date: 5/12/2006
Country: WO
Kind: 00
371c Date: 6/3/2009