The present invention relates generally to telecommunications systems and, more specifically, to a method of using proxy objects to dynamically manage resources in real-time.
Telecommunication service providers continually try to create new markets and to expand existing markets for telecommunication services and equipment. One important way to accomplish this is to improve the performance of existing network equipment while making the equipment cheaper and more reliable. Doing this allows the service providers to reduce infrastructure and operating costs while maintaining or even increasing the capacity of their existing networks. At the same time, the service providers are attempting to improve the quality of telecommunication services and increase the quantity of services available to the end-user.
A conventional switch or router (or similar telecommunication device) typically contains a large switching fabric controlled by a main processing unit (MPU) that contains a large number of data processors and associated memories, often in the form of ASIC chips. Each of these MPU processors contains a call process client application for controlling the flow of control signals of a single call. Each call process client application in turn communicates with a call process server application that controls the flow of control signals for a large number of calls.
Thus, when a particular event occurs during a phone call (e.g., call set-up, the invocation of three-way calling, call disconnection, or the like), control signals associated with the event are relayed from the origination station to the call process client application in the switching center. The call process client application then relays the control signals to the call process server application, which actually performs the call processing service requested by the control signals.
Unfortunately, in large capacity systems, bottlenecks may develop around the call process server applications. Each call process client application must communicate with a particular piece of server hardware that is executing the call process server application. Due to the random nature of the start and stop of phone calls, in large systems some servers may be near capacity and develop bottlenecks, while other servers still have ample spare bandwidth. Moreover, a system failure in a particular piece of server hardware results in the loss of all call processes being handled by the call process server application being executed on the failed server.
Moreover, upgrading the call process server applications in a conventional switching center without interrupting existing service is extremely complicated. In some prior art systems, performing a software upgrade requires fully redundant (duplex) hardware in the switching center. The redundant components are split into an active side and an inactive side. Complex control software is required to manage the split (by swapping active and inactive sides) and to manage the process of merging the two halves of the system back into a unitary system. The redundant hardware adds excessive cost to the prior art switching center and the complex control software is expensive to develop, susceptible to errors due to its complexity, and difficult to maintain.
Therefore, there is a need for improved telecommunication networks. In particular, there is a need for switches and routers that may easily undergo on-line software upgrades. More particularly, there is a need for switches that may be upgraded without requiring redundant hardware and without requiring complex and expensive control software.
U.S. patent application Ser. No. 10/039,186, filed Dec. 31, 2001, entitled “SYSTEM AND METHOD FOR DISTRIBUTED CALL PROCESSING USING A DISTRIBUTED TRUNK IDLE LIST”, discloses improved switching devices that use mapped resource groups to distribute a variety of call processing tasks across a plurality of call application nodes (CANs), or servers. U.S. patent application Ser. No. 10/039,186 is assigned to the assignee of the present application and is hereby incorporated by reference into the present disclosure as if fully set forth herein.
The present invention improves upon the devices disclosed in U.S. patent application Ser. No. 10/039,186 by introducing the use of proxy objects to manage application resources in the mapped resource groups.
The mapped resource groups of U.S. patent application Ser. No. 10/039,186 distribute workloads for a number of tasks, including conventional call processing (CP), identity server (IS), subscriber database (SDB), and idle list server (ILS) (also called trunk idle list (TIL)) tasks. In an exemplary embodiment of the present invention, an idle list server (ILS) mapped resource group (MRG) distributes ILS applications across several call application nodes (CANs). Each idle list server application in a CAN manages a set of the trunk idle lists. Idle list servers may be added or taken away without requiring any configuration changes. Proxy objects associated with each ILS application enable the primary member of each idle list server application to communicate with the other ILS applications in the mapped resource group (MRG).
To address the above-discussed deficiencies of the prior art, it is a primary object of the present invention to provide a telecommunication device capable of handling call connections between calling devices and called devices. According to an advantageous embodiment, the telecommunication device comprises: 1) a main processing unit capable of executing a plurality of client applications, wherein each of the client applications is associated with one of the call connections; and 2) N call application nodes capable of executing server applications. A first primary server application is executed on a first one of the N call application nodes. The first primary server application comprises a first proxy object that enables the first primary server application to communicate with members of a first mapped resource group, wherein each member of the first mapped resource group provides services in response to requests from ones of the plurality of client applications.
According to one embodiment of the present invention, the first primary server application is associated with a similar second primary server application executed on a second one of the N call application nodes separate from the first call application node, the first and second primary server applications being members of the first mapped resource group.
According to another embodiment of the present invention, the second primary server application comprises a second proxy object that enables the second primary server application to communicate with the first mapped resource group.
According to still another embodiment of the present invention, the first primary server application is associated with a similar first backup server application executed on one of the N call application nodes separate from the first call application node, wherein the first primary server application and the first backup server application form a first primary-backup group in which state information associated with the first primary server application is mirrored to the first backup server application.
According to yet another embodiment of the present invention, the second primary server application is associated with a similar second backup server application executed on one of the N call application nodes separate from the second call application node, wherein the second primary server application and the second backup server application form a second primary-backup group in which state information associated with the second primary server application is mirrored to the second backup server application.
According to a further embodiment of the present invention, each client application sends a call process service request to the first mapped resource group and the first mapped resource group uses a Resource Key value to select one of the first and second primary server applications to service the call process service request.
According to a still further embodiment of the present invention, the first primary server application controls a first group of resources associated with the telecommunication device and the second primary server application controls a second group of resources associated with the telecommunication device.
According to a yet further embodiment of the present invention, the first proxy object is designated as a leader proxy object in the first mapped resource group.
In one embodiment of the present invention, the first proxy object is capable of reallocating at least one resource from the first group of resources to the second group of resources such that the second primary server application controls the at least one reallocated resource.
In another embodiment of the present invention, the first proxy object is capable of determining that a third primary server application has joined the first mapped resource group and, in response to the detection, is capable of reallocating a first resource from the first group of resources to a third group of resources controlled by the third primary server application and is further capable of reallocating a second resource from the second group of resources to the third group of resources.
Before undertaking the DETAILED DESCRIPTION OF THE INVENTION below, it may be advantageous to set forth definitions of certain words and phrases used throughout this patent document: the terms “include” and “comprise,” as well as derivatives thereof, mean inclusion without limitation; the term “or” is inclusive, meaning and/or; the phrases “associated with” and “associated therewith,” as well as derivatives thereof, may mean to include, be included within, interconnect with, contain, be contained within, connect to or with, couple to or with, be communicable with, cooperate with, interleave, juxtapose, be proximate to, be bound to or with, have, have a property of, or the like; and the term “controller” means any device, system, or part thereof that controls at least one operation; such a device may be implemented in hardware, firmware, or software, or some combination of at least two of the same. It should be noted that the functionality associated with any particular controller may be centralized or distributed, whether locally or remotely. Definitions for certain words and phrases are provided throughout this patent document; those of ordinary skill in the art should understand that in many, if not most, instances such definitions apply to prior as well as future uses of such defined words and phrases.
For a more complete understanding of the present invention and its advantages, reference is now made to the following description taken in conjunction with the accompanying drawings, in which like reference numerals represent like parts:
Telecommunication device 100 comprises main processing unit (MPU) 110, system manager node 1 (SYSMGR1), optional system manager node 2 (SYSMGR2), and master database 120. Telecommunication device 100 also comprises a plurality of call application nodes (CANs), including CAN1, CAN2, and CAN3, and a plurality of local storage devices (SDs), namely SD1, SD2, and SD3, that are associated with CAN1, CAN2 and CAN3. Master database 120 may be used as a master software repository to store databases, software images, server statistics, log-in data, and the like. SD1-SD3 may be used to store local capsules, transient data, and the like.
System manager nodes 1 and 2 and CAN1-CAN3 each execute a configuration management (CM) process that sets up each node with the appropriate software and configuration data upon initial start-up or after a reboot. Every node in the system also executes a node monitor (NM) process that loads software and tracks processes to determine if any process has failed. In the illustrated example, system manager nodes 1 and 2 execute a first arbitrary process, P1, and system manager node 1 also executes a second arbitrary process, P2.
In accordance with the principles of the present invention, call application nodes 1-3 (CAN1-CAN3) execute a number of call process (CP) server applications organized as primary and backup processes that are available as distributed group services to the N call process client (CPC) applications, CPC APP1-CPC APPn, in main processing unit 110. The N call application nodes (e.g., CAN1-CAN3) are separate computing nodes, each comprising a processor and memory, that provide scalability and redundancy through the simple addition of more call application nodes.
Each of the N call process client (CPC) applications, namely CPC APP1-CPC APPn in MPU 110, handles the control signals and messages related to a single call associated with a mobile station. Each of CPC APP1-CPC APPn establishes a session with a mapped resource group, which assigns the call to a particular one of the primary-backup group call process server applications, CP1, CP2, or CP3. The selected call process server application actually performs the call process services/functions requested by the call process client application.
In the illustrated embodiment, three exemplary call process server applications are being executed, namely CP1, CP2, and CP3. Each of these processes exists as a primary-backup group. Thus, CP1 exists as a primary process, CP1p, and a backup process, CP1b. Similarly, CP2 exists as a primary process, CP2p, and a backup process, CP2b, and CP3 exists as a primary process, CP3p, and a backup process, CP3b. In the illustrated embodiment, CP1p and CP1b reside on different call application nodes (i.e., CAN1 and CAN2). This is not a strict requirement: CP1p and CP1b may reside on the same call application node (e.g., CAN1) and still provide reliability and redundancy for software failures of the primary process, CP1p. However, in a preferred embodiment of the present invention, the primary process and the backup process reside on different call application nodes, thereby providing hardware redundancy as well as software redundancy. Thus, CP1p and CP1b reside on CAN1 and CAN2, CP2p and CP2b reside on CAN2 and CAN3, and CP3p and CP3b reside on CAN3 and CAN1.
Together, CP1, CP2 and CP3 form a supergroup for load sharing purposes. Thus, CP1p and CP1b, CP2p and CP2b, and CP3p and CP3b are part of a first mapped resource group (MRG1), indicated by the dotted line boundary. Additionally, CAN1-CAN3 host three other mapped resource groups, namely, MRG2, MRG3, and MRG4. MRG2 comprises two idle list server (ILS) applications, namely ILS1 and ILS2. ILS1 exists as a primary process, ILS1p, on CAN2 and a backup process, ILS1b, on CAN3. ILS2 exists as a primary process, ILS2p, on CAN3 and a backup process, ILS2b, on CAN2. Similarly, MRG3 comprises two identity server (IS) applications, namely IS1 and IS2. IS1 exists as a primary process, IS1p, on CAN1 and a backup process, IS1b, on CAN2 and IS2 exists as a primary process, IS2p, on CAN2 and a backup process, IS2b, on CAN1. Finally, MRG4 comprises two subscriber database (SDB) server applications, namely SDB1 and SDB2. SDB1 exists as a primary process, SDB1p, on CAN2 and a backup process, SDB1b, on CAN3 and SDB2 exists as a primary process, SDB2p, on CAN3 and a backup process, SDB2b, on CAN2.
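The layout described above can be summarized in a simple data structure. The following is a minimal Python sketch, purely illustrative and not part of the disclosed system; the class and field names are hypothetical, while the group and node names follow the text.

```python
# Minimal sketch of the node layout described above. The group and node
# names (MRG1-MRG4, CP1-CP3, ILS1-ILS2, IS1-IS2, SDB1-SDB2, CAN1-CAN3)
# follow the text; the data structure itself is illustrative only.
from dataclasses import dataclass

@dataclass(frozen=True)
class PbgPlacement:
    name: str          # primary-backup group, e.g., "CP1"
    primary_can: str   # call application node hosting the primary process
    backup_can: str    # call application node hosting the backup process

MAPPED_RESOURCE_GROUPS = {
    "MRG1": [  # call process (CP) server applications
        PbgPlacement("CP1", "CAN1", "CAN2"),
        PbgPlacement("CP2", "CAN2", "CAN3"),
        PbgPlacement("CP3", "CAN3", "CAN1"),
    ],
    "MRG2": [  # idle list server (ILS) applications
        PbgPlacement("ILS1", "CAN2", "CAN3"),
        PbgPlacement("ILS2", "CAN3", "CAN2"),
    ],
    "MRG3": [  # identity server (IS) applications
        PbgPlacement("IS1", "CAN1", "CAN2"),
        PbgPlacement("IS2", "CAN2", "CAN1"),
    ],
    "MRG4": [  # subscriber database (SDB) server applications
        PbgPlacement("SDB1", "CAN2", "CAN3"),
        PbgPlacement("SDB2", "CAN3", "CAN2"),
    ],
}
```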
A group service provides a framework for organizing a group of distributed software objects in a computing network. Each software object provides a service. In addition, the group service framework provides enhanced behavior for determining group membership, deciding what actions to take in the presence of faults, and controlling unicast, multicast, and groupcast communications between members and clients for the group. A group utilizes a policy to enhance the behavior of the services provided by the group. Some of these policies include primary-backup for high service availability and load sharing for distributing the loading of services within a network.
Call process server applications, such as CP1-CP3, ILS1-ILS2, IS1-IS2, and SDB1-SDB2, located within a computing network provide services that are invoked by client applications, such as CPC APP1-CPC APPn.
It is important to note that while the call process client applications, CPC APP1-CPC APPn, are clients with respect to the call process server applications, CP1, CP2, and CP3, a server application may be a client with respect to another server application. In particular, the call process server applications CP1-CP3 may be clients with respect to the idle list server applications, ILS1 and ILS2, the subscriber database server applications, SDB1 and SDB2, and the identity server applications, IS1 and IS2.
A client application establishes an interface to the mapped resource group. When a new call indication is received by the client application, the client application establishes a session with the mapped resource group according to a client-side load sharing policy. The initial server (CP1, CP2, etc.) selection policy is CPU utilization-based (i.e., based on the processor load in each of the candidate CANs, with more lightly loaded groups selected first), but other algorithmic policies, such as round-robin (i.e., distribution of new calls in sequential order to each CAN) may be used.
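The two selection policies mentioned above can be illustrated with a short sketch. The following Python fragment is illustrative only and not the disclosed implementation; the class names and the cpu_load argument are assumptions. It selects a candidate primary-backup group for a new call either by the lowest reported CPU utilization or in round-robin order.

```python
# Illustrative sketch of the client-side load-sharing policies described
# above: CpuUtilizationPolicy favors the most lightly loaded candidate,
# while RoundRobinPolicy distributes new calls in sequential order.
import itertools

class CpuUtilizationPolicy:
    def select(self, candidates, cpu_load):
        # cpu_load maps a candidate group name to its reported load (0.0-1.0).
        return min(candidates, key=lambda group: cpu_load.get(group, 0.0))

class RoundRobinPolicy:
    def __init__(self, candidates):
        self._cycle = itertools.cycle(candidates)

    def select(self, candidates=None, cpu_load=None):
        return next(self._cycle)

candidates = ["CP1", "CP2", "CP3"]
print(CpuUtilizationPolicy().select(candidates, {"CP1": 0.72, "CP2": 0.35, "CP3": 0.54}))  # CP2
rr = RoundRobinPolicy(candidates)
print([rr.select() for _ in range(4)])  # ['CP1', 'CP2', 'CP3', 'CP1']
```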
The client application associates the session with the new call and sends messages associated with the call over the session object. The client application also receives messages from the primary-backup group via the session established with the primary-backup group. Only the primary process (e.g., CP1p) of the primary-backup group joins the mapped resource group (e.g., MRG1).
For a variety of reasons, the application containing the primary may be removed from service. With the removal of the primary member, the backup member becomes the primary and the server application is no longer a member of the mapped resource group. However, the client application(s) still maintain their session with the primary-backup group for existing calls. New calls are not distributed to the primary-backup group if it is not a member of the mapped resource group.
If the primary member of the primary-backup group (PBG) should fail, the backup member is informed that the primary member has failed (or left) and then assumes the role of primary member. Assuming the primary role is the responsibility of the server application itself; it is the responsibility of the group service to inform the backup member that the primary member has failed or left.
As part of an on-line software upgrade process, one or more applications containing primary-backup groups may be removed from service, brought down, and then brought back up using a new version of software code. If their interface has not changed, these groups rejoin the existing mapped resource group. The client interface must therefore be capable of throttling the call traffic directed to specific primary-backup groups. The traffic throttling is expressed as a percentage varying from 0% (no calls) to 100%, at which point all new calls that the scheduling algorithm would assign to that primary-backup group are handled by its session. The throttling factor is initialized to 100% for any primary-backup group that joins the mapped resource group. During on-line software upgrades, the throttling factor starts at the no-calls (0%) setting for the new software version. Any client application for the mapped resource group may establish a session with a specific primary-backup group and may then change the throttling factor at any time. When the throttling factor is changed, all client session interfaces receive the changed throttling factor via multicast. As the throttling factor is increased, the call process server applications running the new software version receive increasing amounts of call traffic.
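A minimal sketch of the throttling behavior just described follows (Python; the names and the probabilistic interpretation of the percentage are assumptions for illustration only). A newly joined primary-backup group starts at 100%, while a group being upgraded starts at 0% and is ramped up by the client.

```python
# Sketch of the throttling factor described above: 0% diverts all new calls
# away from a primary-backup group, 100% lets it take every new call the
# scheduling algorithm assigns to it.
import random

class ThrottledGroup:
    def __init__(self, name, throttle_percent=100):
        self.name = name
        self.throttle_percent = throttle_percent   # 0 = no calls, 100 = full share

    def accepts_new_call(self):
        # Treat the throttling factor as the probability of accepting a new call.
        return random.uniform(0, 100) < self.throttle_percent

def schedule_call(scheduled_group, other_groups):
    """Route a new call to the group chosen by the scheduling algorithm,
    unless its throttling factor diverts the call to another MRG member."""
    if scheduled_group.accepts_new_call():
        return scheduled_group
    for group in other_groups:
        if group.accepts_new_call():
            return group
    return scheduled_group   # nobody else willing; fall back to the scheduled group

upgraded = ThrottledGroup("CP1", throttle_percent=0)    # freshly upgraded, no calls yet
others = [ThrottledGroup("CP2"), ThrottledGroup("CP3")]
print(schedule_call(upgraded, others).name)             # CP2 or CP3
upgraded.throttle_percent = 50                          # operator ramps traffic back up
```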
Call processing communications from the client applications to the call processing server primary-backup groups must support a very high volume of calls. The group software utilizes an internal transport consisting of a multicasting protocol (simple IP multicast) and, optionally, a unicasting protocol. The unicasting protocol may be TCP/IP, SCTP, or another transport protocol. The multicast protocol is used for internal member communications relating to membership, state changes, and fault detection. In the absence of unicast transport, the multicast protocol is also used for the client/server communication streams. The unicast protocol, when provided, is used to provide a high-speed stream between clients and servers. The stream is always directed to the primary of a primary-backup group, which is transparent to both the call process client applications and the call process server applications (e.g., CP1, CP2, CP3, ILS1, ILS2, IS1, IS2, SDB1, SDB2).
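The split between the two transports can be sketched as follows (Python; the channel classes and method names are hypothetical stand-ins, not a real transport API). Membership and fault-detection traffic always goes over the multicast channel, while the client/server call stream prefers the unicast channel when one is configured and is always addressed to the current primary.

```python
# Sketch of the transport selection described above (illustrative only).
class LoggingChannel:
    """Stand-in for a real multicast or unicast (e.g., TCP/SCTP) socket."""
    def __init__(self, name):
        self.name = name

    def send(self, message):
        print(f"[{self.name}] {message}")

    def send_to(self, address, message):
        print(f"[{self.name} -> {address}] {message}")

class GroupTransport:
    def __init__(self, multicast_channel, unicast_channel=None):
        self.multicast = multicast_channel   # always present
        self.unicast = unicast_channel       # optional high-speed stream

    def send_membership(self, message):
        # Membership, state-change, and fault-detection traffic uses multicast.
        self.multicast.send(message)

    def send_to_primary(self, primary_address, message):
        # The client/server stream prefers unicast and targets the primary.
        channel = self.unicast or self.multicast
        channel.send_to(primary_address, message)

transport = GroupTransport(LoggingChannel("mcast"), LoggingChannel("tcp"))
transport.send_membership("ILS2p joined MRG2")
transport.send_to_primary("CAN2:CP1p", "call-setup request")
```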
If the primary group member should fail, then the backup group member becomes the new primary member. According to the exemplary embodiment of the present invention, all transient call information during the fail-over period (the time between when the primary fails and the backup is changed to be the new primary) can be lost, but all stable call information is maintained by the backup.
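The fail-over semantics described above can be illustrated with a short sketch (Python; the class and method names are hypothetical). Stable call state is mirrored to the backup member as it is established, transient state is not, so promoting the backup preserves established calls but may drop in-progress transactions.

```python
# Sketch of stable-state mirroring and fail-over (illustrative only).
class CallProcessMember:
    def __init__(self, name):
        self.name = name
        self.stable_calls = {}      # mirrored to the backup member
        self.transient_calls = {}   # not mirrored; lost on fail-over

class PrimaryBackupPair:
    def __init__(self, primary, backup):
        self.primary, self.backup = primary, backup

    def checkpoint(self, call_id, state):
        # Called when a call reaches a stable point (e.g., answer/connect).
        self.primary.stable_calls[call_id] = state
        self.backup.stable_calls[call_id] = state   # mirror to the backup

    def fail_over(self):
        # The backup assumes the primary role; transient information is lost.
        self.primary = self.backup
        self.primary.transient_calls.clear()
        return self.primary

pair = PrimaryBackupPair(CallProcessMember("CP1p"), CallProcessMember("CP1b"))
pair.primary.transient_calls["call-7"] = "setup in progress"   # lost if CP1p fails now
pair.checkpoint("call-3", "connected")                          # survives fail-over
new_primary = pair.fail_over()
print(new_primary.name, new_primary.stable_calls)               # CP1b {'call-3': 'connected'}
```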
Advantageously, the present invention has no limitations on the scalability of the system and the system size is hidden from both the primary-backup group server applications and call process client applications. The present invention eliminates any single point of failure in the system. Any failure within the system will not affect the system availability and performance.
New call application nodes (CANs) and additional primary-backup group server applications (e.g., CP1, CP2, CP3, ILS1, ILS2, IS1, IS2, SDB1, SDB2) may be added dynamically to the mapped resource groups and can start servicing new call traffic. Call process client applications are not affected by the additions of new servers. If a server should fail, its backup assumes responsibility for the load. This provides high availability for the servicing of each call and minimizes dropped calls.
A mapped resource group (MRG) handles distributed primary-backup groups in which each group manages a specific segment of the total resource and the total resource is mapped in a specific manner. A client application provides a key to the MRG client, which results in the selection of a specific primary-backup group. In the case of a distributed idle list server, MRG2 maps a client idle trunk request to a specific idle list server that manages the idle list.
MRG2 has a group leader, which is selected at run-time by a leader election algorithm. According to an exemplary embodiment, the first member of MRG2 is elected as the leader and remains the leader until the first member leaves MRG2, either voluntarily or through failure. When the leader leaves or fails, the leader election algorithm is performed and a new leader is elected among all active group members. The responsibilities of the MRG group leader include: 1) retrieving the persistent trunk group data from a network database; 2) partitioning the range of available trunk groups to each MRG member; 3) assigning trunk groups to MRG members according to the number of available MRG members; 4) receiving notifications of updates to the trunk group data from the network database and reassigning trunk groups to MRG members; 5) monitoring MRG membership for additions and removals (either due to voluntary leave or failure) and reassigning trunk groups to MRG members; and 6) monitoring for overload conditions or member-out-of-resource conditions and redistributing the available resources accordingly.
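A minimal sketch of responsibilities 2), 3), and 5) above follows (Python; the function name and the equal-sized-chunk strategy are assumptions, not the disclosed partitioning algorithm). It shows how an elected leader might split a contiguous range of trunk groups across the currently active MRG members and re-partition when membership changes.

```python
# Illustrative partitioning of trunk groups across MRG members by the leader.
def partition_trunk_groups(trunk_group_ids, members):
    """Split the sorted trunk group IDs into contiguous ranges, one per member."""
    assignments = {member: [] for member in members}
    if not members:
        return assignments
    ids = sorted(trunk_group_ids)
    chunk = -(-len(ids) // len(members))    # ceiling division
    for i, member in enumerate(members):
        assignments[member] = ids[i * chunk:(i + 1) * chunk]
    return assignments

# Two ILS members share trunk groups 1-9; a third member joining the MRG
# triggers a re-partition by the leader.
print(partition_trunk_groups(range(1, 10), ["ILS1", "ILS2"]))
print(partition_trunk_groups(range(1, 10), ["ILS1", "ILS2", "ILS3"]))
```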
Due to the distributed nature of the idle list server, a client application needs a way to locate a specific server that manages the trunk group in which the client is interested. A Resource Key value is used to map a client request to a specific server that services the resource. Each client policy in each client application is informed dynamically at run-time of which MRG2 members are responsible for each resource. The client policy maintains a dictionary that is keyed by the Resource Key value, which is the equivalent of an ID. The Resource Key value is associated with a specific Client Interface value of a specific primary-backup group that is an MRG2 member. A Resource Range value is used to represent a contiguous range of Resource Key objects (e.g., trunk group ID 1-9). The Resource Range value may be used to efficiently reassign a group of resources to an MRG member.
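The following sketch (Python; the class names and the ILS1/ILS2 assignments are hypothetical) illustrates the client-policy dictionary keyed by Resource Key and the use of a Resource Range to reassign a contiguous block of keys in one step.

```python
# Sketch of a client policy keyed by Resource Key (here, trunk group IDs).
class ClientPolicy:
    def __init__(self):
        self.by_key = {}   # Resource Key -> Client Interface of the serving member

    def assign_range(self, resource_range, client_interface):
        # A Resource Range reassigns a contiguous block of keys in one step.
        for key in resource_range:
            self.by_key[key] = client_interface

    def lookup(self, resource_key):
        """Return the Client Interface responsible for this Resource Key."""
        return self.by_key[resource_key]

policy = ClientPolicy()
policy.assign_range(range(1, 6), "ILS1-interface")    # trunk groups 1-5 -> ILS1
policy.assign_range(range(6, 10), "ILS2-interface")   # trunk groups 6-9 -> ILS2
print(policy.lookup(7))                               # ILS2-interface
```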
When a first idle list server (ILS) capsule is loaded in one of the CANs, a first primary-backup group (PBG1) is created and a primary member object (ILS1p) joins the PBG1 group. Since ILS1p is the first member in PBG1, ILS1p is designated as the primary member. As the PBG1 group is created, it is detected by the CM client in SYSMGR1. The creation event triggers the staffing of the first backup member, ILS1b.
Next, a second idle list server (ILS) capsule is loaded (in a different CAN). A second primary-backup group (PBG2) is created and a primary member object (ILS2p) joins PBG2 group. Since ILS2p is the first member in PBG2, ILS2p is designated as the primary member. As PBG2 group is created, it is detected by the CM client in SYSMGR1. The creation event triggers the staffing of the second backup member, ILS2b.
A third idle list server capsule (ILS3) is loaded (in a different CAN). A third primary-backup group (PBG3) is created and a primary member object (ILS3p) joins the PBG3 group. Since ILS3p is the first member in PBG3, it is designated as the primary member. As the PBG3 group is created, it is detected by the CM client in the system manager node, SYSMGR1. The creation event triggers the staffing of the backup member, ILS3b.
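When the third primary member joins MRG2, the group leader redistributes the available trunk groups among all members, as described earlier. The following sketch (Python; the class name, the callback, and the donation strategy are assumptions) shows one way a leader proxy might move part of each existing member's assignment to the newcomer.

```python
# Illustrative rebalancing performed by the leader when a new member joins.
class LeaderProxy:
    def __init__(self, assignments):
        self.assignments = assignments   # member name -> list of trunk group IDs

    def on_member_joined(self, new_member):
        self.assignments[new_member] = []
        donors = [m for m in self.assignments if m != new_member]
        for donor in donors:
            ids = sorted(self.assignments[donor])
            share = len(ids) // (len(donors) + 1)   # each donor gives up a share
            self.assignments[donor] = ids[:len(ids) - share]
            self.assignments[new_member].extend(ids[len(ids) - share:])
        return self.assignments

leader = LeaderProxy({"ILS1": [1, 2, 3, 4, 5], "ILS2": [6, 7, 8, 9]})
print(leader.on_member_joined("ILS3"))
# {'ILS1': [1, 2, 3, 4], 'ILS2': [6, 7, 8], 'ILS3': [5, 9]}
```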
The foregoing description of the operation of the idle list server (ILS) primary-backup groups is by way of example only and should not be construed so as to limit the scope of the present invention. The identity server (IS) primary-backup groups, the subscriber database (SDB) primary-backup groups, and the call process (CP) primary-backup groups use proxy objects in a similar manner to manage resources in telecommunication device 100.
Although the present invention has been described with an exemplary embodiment, various changes and modifications may be suggested to one skilled in the art. It is intended that the present invention encompass such changes and modifications as fall within the scope of the appended claims.
The present invention is related to that disclosed in U.S. Provisional Patent Application Ser. No. 60/542,105, filed Feb. 5, 2004, entitled “Use of Proxy Objects for Application Resource Management”. U.S. Provisional Patent Application Ser. No. 60/542,105 is assigned to the assignee of the present application. The subject matter disclosed in U.S. Provisional Patent Application Ser. No. 60/542,105 is hereby incorporated by reference into the present disclosure as if fully set forth herein. The present invention hereby claims priority under 35 U.S.C. §119(e) to U.S. Provisional Patent Application Ser. No. 60/542,105.