Revenue management system and method

Abstract
A revenue management system and method for telecommunication network use is disclosed. The revenue management system can be integrated with the internet protocol multimedia subsystem (IMS). The revenue management system and method can have a hardware and/or software revenue generation module or architecture, revenue capture module or architecture, revenue collection module or architecture, revenue analysis module or architecture, or combinations thereof.
Description
BACKGROUND OF THE INVENTION

Telecommunication network operators and service providers are currently implementing the internet protocol multimedia subsystem (IMS). IMS is a set of Internet Protocol (IP) standards for session-based control of multimedia and telephony services on any network type, including circuit-switched networks, packet-switched networks, and the public switched telephone network (PSTN). IMS manages communication, collaboration, and entertainment media over internet protocol. IMS enables users to access both content and other users in ways that are natural and intuitive.


IMS provides users with functionality that is not dependent on fixed or mobile networks and also retains existing protocols, including the session initiation protocol (SIP). SIP is central to IMS. Originally developed for voice over Internet Protocol (VoIP), SIP enables multiple users to enter and exit at will an ongoing communications session (i.e., a connection between two or more communications terminals, such as a mobile handset, a content server, or a personal computer). Moreover, SIP enables users to add or remove media (voice, video, content, etc.) dynamically during a session and run multiple sessions in parallel.


IMS-enabled services will include combinations of push-to-talk, click-to-dial, multi-player gaming, video telephony, SMS, dynamic push content (including file sharing), video conferencing, and location-based commerce, among other communication, collaboration, and entertainment services.


These services previously existed in independent silos: that is, users had to exit one service (i.e., terminate a session) before they could access a new service (i.e., initiate a session). The routing, network location, addressing, and session management of IMS eliminate the walls of the silos to effect a so-called blended functionality that lets users move freely between networks and services while maintaining multiple concurrent sessions. In this way, IMS transforms a sequence of discrete communication events into a single shared communications environment.


For example, users will be able to select a communications mode (voice, voice with video, text-to-speech email, and so on) that best suits their circumstances while retaining the freedom to change that selection dynamically, for example by adding a video stream midway through a voice call. Users will also be able to access familiar services on any device and via any network type, fixed or mobile, and will enjoy these freedoms along with new functionalities such as broader payment options, credit control, presence management, and convenient connectivity to groups.


IMS also provides operators and service providers opportunities for cost reductions and revenue growth. They can expect cost reductions because IMS-enabled services, unlike today's siloed services, do not require replication of every functionality: charging, routing, provisioning, and subscriber management, for example. Rather, IMS services can reuse the same functionality across all services, thereby accruing for their operators significant savings in capital and operational expenditures. Revenue growth through enabling enhanced services is IMS's other benefit. In this way, IMS is the panacea-in-waiting for communications and media companies, who face the threat of commoditization.


Telecommunication network operators and service providers will need a convergent charging system to realize the value of IMS. Such a system, with its integrated view of the customer, is necessary to apply cross-service discounts on bundled offerings and other marketing promotions, as well as to produce a single consolidated bill for each customer, even when services originate from multiple third-party providers.


Legacy billing applications have become increasingly inadequate to the demands of charging for IMS-enabled services as charging has undergone a profound transformation in recent years: from batch to real-time processing, from a back-office support function to a front-office mission-critical function, from a cost to be minimized to a strategic opportunity for revenue maximization.


Further, operators know that consumers have choices. In this environment, communications service providers (CSPs) have difficulty remaining competitive if unable to maintain an uptime of at least 99.999%, so-called “five-nines” availability. Five-nines, which amounts to barely five minutes of downtime per year, is unprecedented in traditional billing.


Because traditional billing systems processed records in batches, billing vendors did not have to provide highly available solutions: if the billing system failed during a batch run, the job could simply be restarted once the system became available. For this reason, CSPs were forced to maintain separate systems to handle their prepaid and postpaid subscribers and services. Prepaid voice services were generally managed by the network equipment vendors, who traditionally provided prepaid solutions in the form of a service control point (SCP) or service node. These systems, built with the network in mind, especially prepaid voice, were designed to achieve the high-availability and low-latency requirements of tier-1 service providers. However, this design focus, together with support for only very simple rating capabilities, resulted in these systems being much more restrictive than their postpaid counterparts.


Because no single system provided support for all the revenue management functions, CSPs have often had to deploy dozens of separate systems to support those functions. Different “stovepipe” systems managed prepaid and postpaid services, while still other systems managed services such as voice, data, content, and messaging. Such a multifarious environment has driven operational costs higher and hampered CSPs' ability to meet increasingly aggressive market requirements.


CSPs can no longer afford the operational excess of maintaining multiple systems: instead, CSPs need a simple, convergent, and modular revenue management solution that delivers high performance and high availability as well as flexibility and scalability. The revenue management system must also meet the demands of consumer marketing, a complex function that increasingly entails bundled offerings, conditional multiservice discounts, highly segmented promotions, and revenue sharing across a multipartner value chain of content providers, service providers, and network operators.


Unlike telecommunications networks, which must route their transport (calls in circuit-switched networks and packets in packet-switched networks) in real time, legacy billing systems for telecommunications providers have customarily fulfilled a back-office function, batch processing records such as call detail records and IP detail records. If a billing system were not available when scheduled to process a particular batch, engineers could fix the problem, then run the process a few hours behind schedule. In the worst-case scenario, customers' bills would arrive in their mailboxes a day or two later than usual. But new expectations of communications service users are now changing the rules of the billing game.


Today's users demand diverse payment options in line with their varied personal, business, and family needs.


Whereas some will continue to favor long-standing relationships in which they settle their accounts with operators in the traditional manner of postpayment via invoice, more and more users now require the freedom to prepay, perhaps by purchasing a prepaid card at a grocery store as credit towards service from potentially multiple CSPs over a period of time. Still other users want to pay for products and services as they consume them (so-called now-pay) by providing a debit- or credit-card number at the commencement of each transaction.


In the absence of a convergent real-time solution, CSPs have had to address the differing needs of their prepaid, postpaid, and now-pay customers by maintaining multiple, non-integrated billing and customer-care systems. Indeed, they have had no alternative because legacy billing systems were never designed to accommodate the transactional real-time requirements of prepay and now-pay services. And they certainly were not built with the requisite low latency and five-nines availability that a revenue management system needs to process as many as several hundred million transactions per day in real time via a direct connection to the telecommunications network.


The absence of a billing system that could meet the high-performance/low-latency and high-availability requirements of prepaid services has imposed significant costs on CSPs, since they were forced to maintain multiple separate systems for their prepaid and postpaid environments and services.


BRIEF SUMMARY OF THE INVENTION

A revenue management system and method for revenue management are disclosed. The revenue management system can be a network of computers, a single computer, a program on computer readable medium, software and/or hardware architecture, or combinations thereof. The revenue management system can be used, for example, by telecommunication network operators and service providers to manage the use of and revenue generated by telecommunications networks. The telecommunications networks can be wired and/or wireless.


The revenue management system can perform convergent real-time charging for prepaid, postpaid, and now-pay telecommunication network user accounts. The revenue management system can manage revenues through the entire service cycle, from revenue generation to revenue capture to revenue collection to revenue analysis. The revenue management system can have a hardware and/or software revenue generation module or architecture, revenue capture module or architecture, revenue collection module or architecture, revenue analysis module or architecture, or combinations thereof. (Any elements or characteristics denoted or described as being modules, architectures, layers, or platforms, herein, can be any of the other of modules, architectures, layers or platforms.)


The revenue generation module or architecture can minimize delays in deployment of new services on the telecommunication network. The revenue generation module or architecture can have GUI-based applications for rapidly provisioning, pricing, discounting, and managing all aspects of customer and partner relationships, such as personalized promotions and revenue sharing.


The revenue capture module or architecture can leverage a high-performance and high-availability platform that converts all transactions into revenue with zero leakage from fraud or system downtime. The high-availability platform further minimizes customer churn.


The revenue collection module or architecture can ensure accurate bills for postpaid accounts while collecting all prepaid and now-pay revenue in real-time. The revenue collection module can generate partner (e.g., business partner) statements and provide a real-time view of finances, for example, to suggest changes in marketing strategy.


The revenue analysis module or architecture can process the transactions that pass through the revenue management system and can provide data for predetermined mathematical functions (i.e., data analysis). The revenue analysis module can be used with IMS-enabled services.


The revenue management system can provide carrier-grade performance, high availability, unlimited scalability, the flexibility to rapidly launch and manage IMS-enabled services, end-to-end revenue management, and combinations thereof.


The revenue management system can be a single convergent platform for service providers to manage revenue in real time across customer type, network, service, payment method and geography. The revenue management system can have high-performance, high-availability, and scalability, for example equal to that of a front-end carrier-grade network element, with the functionality and flexibility of a convergent revenue management system.


The revenue management system can be a unified system that manages revenue in real time across any customer type (residential or business), network type (packet- or circuit-switched), service type (voice, data, commerce, and so on), payment method (prepaid, postpaid, and now-pay), and geography (multiple currencies and tax regimes). The revenue management system can have a revenue capture platform and an in-memory object store (e.g., TIMOS or other technology) for high performance/low latency and an active/active staged architecture for high availability.


The revenue management system can deliver carrier-grade performance, unlimited scalability, five-nines availability, the flexibility to rapidly launch and manage new services, and combinations thereof.


The revenue management system can give operators a unified view of their subscribers (e.g., rapid and organized viewing of database information for users across various networks). The revenue management system can be configured to analyze database data for market segmentation, to create discounts (e.g., multiservice discounts), and promote and deliver functionality, such as consolidation of all services onto a single bill.


The revenue management system can accurately manage multiple revenue touch points with end-customers, for example, to enable networks to provide and bill for a variety of voice and multimedia services.


By eliminating duplication and exploiting economies of scope, the convergent revenue management system can incur lower operational costs than multiple non-integrated systems. Such efficiencies can translate into substantial savings in resources, skills, training, hardware, etc. The convergent revenue management system can have greater flexibility and scalability than multiple stand-alone systems. The system can provide an integrated view of the customer with important functionality benefits, such as the ability to apply cross-service discounts to bundled offerings and the capacity to generate a single bill for each customer, even when services originate from multiple providers.





BRIEF DESCRIPTION OF THE DRAWINGS


FIG. 1 illustrates a variation of the revenue management system integrated into the IMS framework.



FIG. 2 illustrates a variation of the revenue management system with a network layer.



FIG. 3 illustrates a variation of the revenue management system.



FIG. 4 illustrates a variation of the revenue management system with exemplary load distributions.



FIG. 5 illustrates a variation of the revenue management system having multi-database subsystems.



FIG. 6 illustrates the set-up for benchmark testing of the revenue management system.





DETAILED DESCRIPTION


FIG. 1 illustrates that the revenue management system can be integrated with (i.e., in data communication with) the IMS framework. Users can access IP-based services via any device and any access network through a standardized access point, the CSCF (call session control function) or SIP server. The CSCF sets up and manages sessions, forwarding messages and content from other users or from content and application servers. The CSCF works in partnership with the HSS (home subscriber server), which manages subscriber data and preferences, enabling users to find one another and access subscribed services. A CGF (charging gateway function) can mediate access to other operators' networks and support applications for charging, provisioning, and customer service.



FIG. 2 illustrates that the architecture of the revenue management system can have a gateway layer (e.g., an AAA Gateway), a revenue capture layer, and a database and storage layer. The gateway layer can connect to the external network via a service platform such as HP OpenCall (from Hewlett-Packard, Inc., Palo Alto, Calif.), which in turn can connect to a network switch.


The gateway layer can be an interface to the network layer. Connections to the network layer can be maintained via one, two, or more AAA (authentication, authorization, accounting) Gateway managers. The AAA Gateway managers, which can include one primary and one or more idle-but-running backups, connect to the network SCP via TCP/IP and manage a number of tasks. The tasks can include protocol translation, asynchronous interfacing, load balancing, service-level agreement (SLA) latency enforcement, failure detection, failure handling, failure recovery, and combinations thereof.


The protocol translation can provide high-speed translation from the protocol used by the network SCP to a communication protocol (e.g., Portal Communications Protocol (PCP)). The AAA Gateway can support HP OpenCall's Message Based Interface (MBI) protocol, Diameter Charging, and PCP. The AAA Gateway can be extended to support additional protocols.


In asynchronous connection to the SCP, requests can be received from the SCP and acknowledged. Following completion of the requested operation, the asynchronous interface of the AAA Gateway can send a response to the SCP with the final results.


The load balancing element can distribute requests evenly across the available Connection Managers using a round-robin algorithm.
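The round-robin distribution can be sketched as follows (an illustrative Python sketch; the Connection Manager identifiers are hypothetical):

```python
from itertools import cycle

class RoundRobinBalancer:
    """Distributes incoming requests evenly across the available Connection Managers."""

    def __init__(self, connection_managers):
        self._rotation = cycle(connection_managers)  # endless round-robin iterator

    def route(self, request):
        # Hand the request to the next Connection Manager in the rotation.
        return next(self._rotation), request

# Hypothetical Connection Manager identifiers for illustration only.
balancer = RoundRobinBalancer(["cm-1", "cm-2"])
targets = [balancer.route("req-%d" % i)[0] for i in range(4)]
```

With two Connection Managers, alternate requests go to each manager, yielding the even 50/50 split described for normal operation.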


The SLA enforcement can monitor and guarantee conformance to a service-level agreement's latency requirements.


The failure detection element can detect failures such as a broken link between an AAA Gateway and a connection manager in the revenue capture platform.


The failure handling element can provide an interim request storage facility for requests processed during back-end failures and pending recovery, and a degraded mode of operation for cases in which the back end is not available or simply not responding within the specified latency levels.


The failure recovery element can replay requests into the revenue capture platform following a failure.


When a call (or other connection) comes to the network, the SCP can query the AAA Gateway in order to grant the service (i.e., authorize the call). During the call, the SCP keeps the revenue management system apprised of the call status by passing call-start and call-end requests, as well as reauthorization requests if the previously authorized quantity is close to exhaustion.


The AAA Gateway can convert the SCP requests into event data records (EDRs). The AAA Gateway can then forward the EDR to a specialized processing pipeline (authentication, authorization, or accounting, for example), depending on the service and request type. The processing pipelines can contain a module that can call an API of the Connection Manager (CM) in the Revenue Capture Platform. This is a synchronous call that blocks processing until receipt of a response. The response can then undergo translation into the EDR, and the EDR can pass to the network output module, which can send the response back to the SCP.


This process can be monitored for latency by a timeout monitoring facility in the AAA Gateway. If the timeout facility detects an unacceptable latency, the timeout facility can pass the EDR to a timeout pipeline. The timeout pipeline can then execute business logic to handle the request in a degraded mode in order to ensure a response within the required latency levels. The degraded mode can allow the timeout pipeline to make a decision on how to proceed based on a configurable set of rules. For example, if the request is for authorization of a local call, the rules might indicate approval by default following the timeout of such a request. A timed-out request for authorization of an international call, in contrast, might receive a default denial.
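A configurable rule set of this kind can be sketched as follows (an illustrative Python sketch; the rule table, request types, and default decision are hypothetical, not taken from this description):

```python
# Hypothetical degraded-mode rule table: (request type, service) -> decision.
DEGRADED_MODE_RULES = {
    ("authorization", "local_call"): "approve",       # low-risk: approve by default
    ("authorization", "international_call"): "deny",  # high-risk: deny by default
}
DEFAULT_DECISION = "deny"  # conservative fallback for request types not listed

def degraded_mode_decision(request_type, service):
    """Resolve a timed-out request from configurable rules instead of blocking."""
    return DEGRADED_MODE_RULES.get((request_type, service), DEFAULT_DECISION)
```

The rule table, rather than hard-coded logic, is what lets the operator tune the degraded-mode behavior per service and request type.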


Two other pipelines, the exception pipeline and the replay pipeline, can clean up, store, and replay timed-out requests to prevent any revenue leakage. If a timeout was caused by a failure in the Revenue Capture Platform, the replay pipeline can read the replay log after the Revenue Capture Platform is back online and send it the logged requests. If a timeout happened for other reasons, the replay can start immediately.
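The replay mechanism can be sketched as a durable append-only log (an illustrative Python sketch; the file format and request fields are hypothetical):

```python
import json
import os
import tempfile

class ReplayLog:
    """Durable store for timed-out requests, replayed once the back end recovers."""

    def __init__(self, path):
        self.path = path

    def append(self, request):
        with open(self.path, "a") as f:
            f.write(json.dumps(request) + "\n")  # one request per line
            f.flush()
            os.fsync(f.fileno())                 # persist to disk for durability

    def replay(self, send):
        # Feed every logged request back into the Revenue Capture Platform.
        with open(self.path) as f:
            for line in f:
                send(json.loads(line))

# Usage: log two timed-out requests, then replay them after "recovery".
log = ReplayLog(os.path.join(tempfile.mkdtemp(), "replay.log"))
log.append({"op": "debit", "amount": 5})
log.append({"op": "debit", "amount": 7})
replayed = []
log.replay(replayed.append)
```

The `fsync` call models the persistence-to-disk that prevents revenue leakage even if the gateway process itself fails before the back end recovers.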


The revenue capture layer can implement the authentication and authorization that is necessary for prepaid and now-pay transactions. The revenue capture layer can handle the accounting tasks of event rating and recording all transactions. FIG. 3 illustrates that the revenue capture layer can have one, two, or more Connection Managers, Database Data Managers, and TIMOS (transactional in-memory object store) Data Managers; TIMOS is a high-performance in-memory store that can synchronize with the database. The elements of the revenue capture layer can be encompassed by the Revenue Capture Platform.


Each AAA Gateway manager can connect to one, two, or more distinct Connection Managers via TCP/IP. Unlike the primary/backup model, these connections are all in use during normal processing. Initial requests to the Connection Managers are distributed evenly by a simple round-robin algorithm. Cross-machine distribution of the connections can provide fault tolerance at the hardware level. (The number of Connection Managers can be determined by the operator's availability and scalability requirements.)


The Connection Managers can route requests to the appropriate TIMOS Data Manager or back-end Database Data Manager. The design of the revenue management system can allow time-sensitive requests such as authentication and authorization to be performed by accessing data from the high-speed in-memory TIMOS cache only. Accounting requests, which can tolerate higher latencies, can access both the TIMOS cache and the back-end database.


The system can be configured so non-real-time requests bypass the TIMOS Data Manager. Non-real-time requests can include, for example, batch rating or billing jobs, as well as requests that do not require millisecond-level response times, such as an account query by a customer service representative.
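The routing policy described in the last two paragraphs can be sketched as follows (an illustrative Python sketch; the request-type and destination names are hypothetical labels, not identifiers from this description):

```python
def route_request(request_type):
    """Return the data stores consulted for a given request type.

    Time-sensitive authentication and authorization requests are served from
    the in-memory TIMOS cache only; accounting requests can touch both the
    cache and the back-end database; non-real-time work bypasses TIMOS.
    """
    if request_type in ("authentication", "authorization"):
        return ["timos_cache"]
    if request_type == "accounting":
        return ["timos_cache", "database"]
    return ["database"]  # e.g., batch rating jobs, customer-service queries
```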



FIG. 4 illustrates a variation of the revenue management system with exemplary load distributions shown. The architecture of the system can have one, two, or more TIMOS instances and their back-up counterparts. Each TIMOS instance can have three components: a reference object cache, a data migrator, and a transient object store.


The reference object cache can be a cache area for database objects such as customer account records, required for read-only reference during real-time authentication and authorization processes.


The data migrator can be a subsystem to fill the reference object cache from the database.


The transient object store can be an area used to store temporary objects for TIMOS-only use such as active-session objects and resource-reservation objects.


The TIMOS instances can serve distinct sets of the subscriber base, for example, approximately 50% of subscribers per instance in the minimal two-instance configuration shown in FIG. 4. Each primary TIMOS instance can run on an independent server, with that same server running the back-up instance of another primary TIMOS instance.
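One way to assign subscribers to TIMOS instances is a deterministic hash partition (an illustrative Python sketch; the actual partitioning scheme is not specified in this description, and CRC32 is used only as a cheap stand-in hash):

```python
import zlib

def timos_instance_for(subscriber_id, instance_count=2):
    """Map a subscriber to one of the TIMOS instances, ~1/n of the base each.

    A deterministic hash ensures a given subscriber always lands on the same
    instance, so the directory server's routing stays stable.
    """
    return zlib.crc32(subscriber_id.encode()) % instance_count

# With two instances, each serves roughly half of the subscriber base.
assignment = timos_instance_for("subscriber-12345")
```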


Meanwhile, the Connection Managers can consult a directory server in order to route requests to the correct instance. The directory server can be configurable as a separate process or as a part of any TIMOS instance.


The TIMOS Data Managers in turn can connect to at least two Database Data Managers, both of which are active and either of which can take over the workload of the other in the case of a failure. The Database Data Managers interface with the back-end relational database.


The database and storage layers can have one or more server clusters, cluster software, one or more storage area networks, and combinations thereof. The server cluster can be a configuration of at least two database servers, which process data for a single database. The cluster software can manage the server cluster (e.g., Oracle RAC (Real Application Clusters) cluster software, or software configured to execute with the same). The storage area network can support high-speed and high-availability disk storage.


The revenue management system can access a high-performance relational database such as Oracle RAC via a high-speed storage area network. The system can utilize multithreading and TIMOS data management. TIMOS can access system memory (i.e., RAM). Requests for data in RAM can be processed much faster than requests for data in the disk-based database. Latency can be reduced, and throughput increased, compared to the relational database because of the following differences between TIMOS data management and the RDBMS:


TIMOS can store in-memory data and avoid the time delays of database access and the translation between a relational representation and the database's physical format.


The revenue management system can employ internal search and storage algorithms that have been optimized for in-memory data, further reducing latencies.


Read-only requests for TIMOS-managed data can avoid round trips to the back-end database and subsequent disk storage, thereby avoiding multiple network hops and their associated latencies. The creation and update of transient objects can be performed entirely in memory by TIMOS, requiring no disk access operations.


The system can have a distribution of operations via a staged-availability architecture, an active/active redundancy configuration, and controllable system renewal.


The revenue management system can have a staged-availability architecture that allows higher layers with very high availabilities to maintain system operation (in a degraded mode if necessary) in the event of a failure in a lower-layer component within the Revenue Capture Platform. For example, the Gateway layer can maintain service authorization availability if the primary AAA Gateway loses connectivity to its Connection Manager in the revenue capture layer. Even when operating in a degraded mode, the system can prevent revenue leakage by ensuring that all events are captured in a replay log and persisted to disk for durability. Use of the replay log can ensure that each event undergoes charging as soon as the system recovers.









TABLE 1
Layer Availability and Recovery

Layer                   Percentage Availability
Network                 99.999%
Gateway                 99.999%
Revenue Capture         99.95%
Database & Storage      99.999%
Table 1 illustrates exemplary availability percentages for each of the revenue management system's layers. Because the AAA Gateway is designed to provide 99.999% availability for service authorization and is able to run in a degraded mode, service availability is significantly higher than the availability of the least-available component. The front-office (e.g., RAM-based) real-time processing can enable the high availability.


The system can have an active/active redundancy or an active/passive redundancy. The active/active redundancy can detect failures in components substantially immediately and automatically switch the load of the failed component to its counterpart. The counterpart can assume the additional load of a failed component because the system can be configured (e.g., appropriately scaled) so nodes run sufficiently below capacity under normal operation and can therefore absorb an additional load during failover.


The AAA Gateways can divide traffic 50/50 between two active Connection Managers. Each Connection Manager can route the requests to the appropriate TIMOS Data Manager or Database Data Manager. Each cluster node can run at 40% capacity during normal operation. If one of the TIMOS Data Managers fails to respond to the Connection Managers, the system can automatically fail over to a back-up instance of a TIMOS Data Manager that runs on the other cluster node.


Upon failover, the data migrator can begin to load the backup TIMOS cache with any reference data that had not been preloaded. Processing on the backup system can resume immediately after failover (e.g., the system need not wait for completion of the data migration). If a request comes in to the back-up TIMOS DM for which the needed data has not yet been loaded into the TIMOS cache, the request can be passed on to the appropriate database DM. The timeout monitor can ensure that the response is made within the required latency limits, although the latency will be higher than for requests to a filled cache. In addition, the requested object can be cached as a side effect of a request for an uncached object, making subsequent requests for the same data much faster.
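This fill-on-demand behavior resembles a cache-aside lookup, sketched below (an illustrative Python sketch; a plain dictionary stands in for the back-end Database Data Manager):

```python
class TimosCache:
    """Serve reads from memory when possible; on a miss, fall through to the
    database and cache the result so subsequent requests are fast."""

    def __init__(self, database):
        self._db = database  # stand-in for the back-end Database Data Manager
        self._cache = {}

    def get(self, key):
        if key in self._cache:
            return self._cache[key]  # fast in-memory hit
        value = self._db[key]        # slower fallback to the database
        self._cache[key] = value     # cache as a side effect of the miss
        return value

# Usage: the first lookup misses and falls through; the second is a cache hit.
db = {"account:42": {"balance": 10.0}}
cache = TimosCache(db)
first = cache.get("account:42")
second = cache.get("account:42")
```

The first request pays the database round trip; because the object is cached as a side effect, the second request is served entirely from memory.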


The system can support other types of failover. For example, if the connection between the AAA Gateway and a Connection Manager fails, the Connection Manager whose connection remains operable can assume the full load. Meanwhile, the AAA Gateway can automatically execute custom business logic if it does not receive a response from a Connection Manager within a specified latency. For example, if a Connection Manager failed to respond to a database-update request, the business logic can ensure that the AAA Gateway saves the request for subsequent processing once the system has recovered. Custom business logic can maintain operation, albeit in a degraded mode, under severe failure conditions that deny access to customer balance information.


High availability at the database and storage layer can be supported by a combination of a storage area network, a cluster server, and Oracle's RAC software. FIG. 4 illustrates a database configuration which can have at least two independent servers (e.g., RAC servers), for example serving distinct customer segments located in different database schemas. Each RAC server can be dedicated to one database schema. During normal operation, the traffic for both halves of the system can follow different paths and not interfere with each other. In a failure situation, Oracle can redirect the traffic to the remaining RAC server. Oracle RAC can ensure a smooth transition of the traffic to the remaining node.


Other optional approaches such as storage arrays and disk mirroring can provide additional resilience in the database and storage layers.


The revenue management system can have a controllable system renewal module, for example to further supplement the high availability. The controllable system renewal can be configured to allow the CSP to limit the lifetime of all system processes, with processes set to restart automatically at designated intervals. Controllable system renewal (i.e., similar to a scheduled failover) can ensure that any cumulative errors that might otherwise endanger system stability cannot become critical. By detecting such errors in a relatively benign state, controllable system renewal can afford time for engineers to fix the source of the error accumulation. More importantly, the controllable system renewal module can ensure that unscheduled failovers, when they do occur, execute properly.
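The lifetime-limited process model can be sketched as follows (an illustrative Python sketch; in a real deployment the renewal would hand traffic to a peer and re-exec the process, which is elided here):

```python
import time

class RenewableWorker:
    """Process wrapper that restarts itself after a configured lifetime, so
    cumulative errors are cleared before they can become critical."""

    def __init__(self, max_lifetime_s):
        self.max_lifetime_s = max_lifetime_s
        self.started_at = time.monotonic()
        self.restarts = 0

    def due_for_renewal(self):
        return time.monotonic() - self.started_at >= self.max_lifetime_s

    def renew(self):
        # A real implementation would drain in-flight work and re-exec here.
        self.started_at = time.monotonic()
        self.restarts += 1

# Usage: a zero-second lifetime is immediately due for its scheduled renewal.
worker = RenewableWorker(max_lifetime_s=0)
needs_renewal = worker.due_for_renewal()
worker.renew()
```

Because the renewal follows the same path as an unscheduled failover, each scheduled restart also serves as a rehearsal that the failover machinery executes properly.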


The Content Manager module can provide a secure billing interface to link operators with value-added service providers. The revenue management module can enable business partners to access (e.g., through an internet or other GUI interface) the revenue manager module's real-time functionality without the need for business partners to purchase and support a full system of their own.


The system can have flexible GUI applications for pricing management, customer management, partner management, and service enablement. For example, the system can have a Pricing Center/Management module. The pricing center/management module can have pricing management functionality, such as tools to quickly define a product and service catalog together with the associated rules for pricing and discounting.


The pricing management module can define pricing, promotions, and service bundles with a unified pricing interface (e.g., one tool/one process) for any payment method. The pricing management module can use any attribute from within the rating record as part of the rating scheme. The pricing management module can support one-time non-recurring events (e.g., registration/cancellation charges, m-commerce, content, and various service usage) as well as prepaid support for recurring events of varying duration (e.g., weekly, monthly, multi-monthly, and annual events). The pricing management module can manage tiered, volume, and multi-service discounting options as well as user-defined discounting. The pricing management module can track time of day/week and special days. The pricing management module can support group pricing options such as closed user groups and friends-and-family plans. The pricing management module can provide support for zone- and location-based pricing. The pricing management module can manage unlimited numbers of pricing metrics: transport-based (per minute, per kilobyte, etc.), value-based (per ring tone, per game, per message, etc.), hybrid, or any metric that the CSP may wish to define in the future. The pricing management module can assign one or more balance impacts to any number of assigned balances, monetary or non-monetary. The pricing management module can define proration rules. The pricing management module can define linkage between products and services to entries in the general ledger (G/L).
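As one illustration of the tiered-pricing capability, the sketch below rates a usage quantity against hypothetical price tiers. The tier boundaries, prices, and function name are invented for illustration and do not depict the module's actual interface:

```python
# Hedged sketch of tiered, metric-agnostic rating: the same function works
# whether the quantity is minutes, kilobytes, messages, or any CSP-defined
# metric. Tiers and figures here are invented examples.

def rate_usage(quantity, tiers):
    """Apply tiered pricing; each tier is (upper_bound, unit_price)."""
    charge, prev_bound = 0.0, 0
    for bound, price in tiers:
        units = min(quantity, bound) - prev_bound
        if units <= 0:
            break
        charge += units * price
        prev_bound = bound
    return charge

# e.g. first 100 units at 0.10, next 400 at 0.05, the rest at 0.02
tiers = [(100, 0.10), (500, 0.05), (float("inf"), 0.02)]
print(round(rate_usage(250, tiers), 2))  # 100*0.10 + 150*0.05 = 17.5
```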


The system can have a customer management interface module. The customer management interface can support creation and management of customer and partner accounts, for example, natively within the revenue management system, via real-time or batch CRM/PRM integration, via integration with legacy applications, or combinations thereof.


The revenue management system can have other modules to activate, deactivate, provision, and maintain device-related information on services. For example, some services (e.g., GSM telephony) can be provisioned in real time and other services (e.g., high-speed Internet access) can have staged provisioning. The system can have one or more service manager modules to provide specific service management capabilities based on industry requirements for services and standards such as GPRS, GSM, WAP, LDAP, and SIM.
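The real-time versus staged provisioning split can be sketched as a simple dispatcher. The service names and the two-way classification below are assumptions for illustration:

```python
# Illustrative sketch: some services (e.g., GSM telephony) provision in real
# time while others (e.g., high-speed Internet access) use staged
# provisioning. The split and names are assumptions, not the actual catalog.

REAL_TIME = {"gsm_telephony"}
STAGED = {"high_speed_internet"}

provisioned, staging_queue = [], []

def provision(service, subscriber):
    if service in REAL_TIME:
        provisioned.append((subscriber, service))    # activated immediately
        return "active"
    elif service in STAGED:
        staging_queue.append((subscriber, service))  # completed by later stage
        return "pending"
    raise ValueError(f"unknown service: {service}")

assert provision("gsm_telephony", "alice") == "active"
assert provision("high_speed_internet", "bob") == "pending"
```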


The revenue management system can support unlimited and near-linear scalability with little or no software modification and no loss of performance. As subscriber or transaction volume grows, operators can add capacity at any time through either vertical scaling (e.g., adding CPUs to an existing server) or horizontal scaling (e.g., deploying additional servers). With this additional capacity, the system's high performance and high availability can remain undiminished.


If growth in transaction volume approaches the capacity of existing TIMOS instances, the operator can add the necessary hardware to support another TIMOS instance pair. If TIMOS is not the limiting factor in the system's capacity, the system is readily scalable by the addition of multiple databases, such as Oracle RAC clusters. FIG. 5, an extension of the minimal configuration of FIG. 3, depicts a variation of multi-DB scalability.
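One common way to realize multi-database horizontal scaling of this kind is to distribute subscribers across clusters by hashing. The sketch below is illustrative only and does not depict the system's actual directory-server logic:

```python
# Sketch of multi-database horizontal scaling: subscribers are distributed
# across database clusters so that adding a cluster adds capacity.
# The hashing scheme is an assumption for illustration.

import hashlib

def cluster_for(subscriber_id, clusters):
    digest = hashlib.sha256(subscriber_id.encode()).hexdigest()
    return clusters[int(digest, 16) % len(clusters)]

clusters = ["db1", "db2"]
before = cluster_for("subscriber-42", clusters)
clusters.append("db3")          # horizontal scaling: deploy another cluster
after = cluster_for("subscriber-42", clusters)
print(before, after)  # the assignment may move when capacity is added
```

A production system would typically use a directory service or consistent hashing so that adding a cluster relocates as few subscribers as possible; plain modulo hashing is shown here only for brevity.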


The revenue management system can manage credit in a variety of customer-centric methods. For example, families can have separate pre-pay, post-pay, and/or now-pay sub-accounts on the same family plan (e.g., if each member of the family wants a different payment scheme). Companies can divide accounts between personal and business use for the company's communications devices (e.g., an employee can make personal calls and business calls and have them billed to separate accounts).
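The sub-account arrangement above can be sketched as follows; the class names and the simplified treatment of each payment scheme are assumptions for illustration:

```python
# Hypothetical sketch of customer-centric credit: one family plan whose
# members use different payment schemes on separate sub-accounts.

class SubAccount:
    def __init__(self, scheme, balance=0.0):
        assert scheme in ("prepay", "postpay", "nowpay")
        self.scheme = scheme
        self.balance = balance  # prepaid credit or accrued postpaid charges

    def charge(self, amount):
        if self.scheme == "prepay":
            if self.balance < amount:
                raise ValueError("insufficient prepaid credit")
            self.balance -= amount  # draw down credit in real time
        elif self.scheme == "postpay":
            self.balance += amount  # accrue for the periodic invoice
        else:
            pass  # nowpay: settled immediately, e.g. against a payment card

family = {"parent": SubAccount("postpay"),
          "teen": SubAccount("prepay", balance=20.0)}
family["teen"].charge(5.0)
family["parent"].charge(12.5)
print(family["teen"].balance, family["parent"].balance)  # 15.0 12.5
```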


For service providers accustomed to billing via a monthly batch process that prepares, prints, and mails invoices to customers, customer-centric billing in the era of IMS means an end to business as usual. Instead, service providers must implement a more flexible real-time system that can manage a customer's credit and charge on the customer's terms, offering prepay and now-pay options as well as traditional postpaid invoicing.



FIG. 6 illustrates the configuration for a benchmarking test for the revenue management system. The test was conducted at Hewlett-Packard's laboratory in Cupertino, Calif. The test was performed on a single HP Superdome computer with 72 1-GHz CPUs partitioned into multiple domains. Test driver software running on an 8-CPU partition simulated an authentic traffic load (1.5 million prepaid subscribers) through the revenue management system. The Connection Manager and Database Data Manager each also ran on 8-CPU partitions, whereas a single instance of the transaction in-memory object store (TIMOS) data manager ran on a 16-CPU partition. An Oracle RDBMS ran on another 16-CPU partition.














TABLE 2

  Sessions per    Operations    Average authorization    Sessions per
  second          per second    latency (ms)             second per CPU
  179             494           34                       9.0


Table 2 illustrates the benchmark test results. A session represents a user's access to the network from beginning to end. In the case of a prepaid voice call, for example, the session begins when, following authorization of the caller's payment, a callee answers the call. This session ends when the caller hangs up. In the case of, for example, a prepaid SMS message, the session, likely much shorter, begins immediately after payment authorization and ends once the message has been transmitted across the network.


Each session may comprise multiple operations. A prepaid voice call, for instance, typically comprises three operations: service authorization and, if granted, start accounting and stop accounting. A long prepaid call may cause additional operations within the system, for example reauthorization and reservation of more minutes on the network. SMS messages generally require just two operations per message: authorization and stop accounting.
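The operation sequences just described can be expressed as a small sketch; the function and operation names are illustrative, following the text rather than any actual protocol API:

```python
# Sketch of per-session operation sequences: three operations for a typical
# prepaid voice call, two for an SMS message, with extra reauthorization
# operations possible on long calls.

def voice_call_operations(long_call=False):
    ops = ["authorize", "start_accounting"]
    if long_call:
        # long durations can trigger reauthorization/reservation of minutes
        ops.append("reauthorize")
    ops.append("stop_accounting")
    return ops

def sms_operations():
    return ["authorize", "stop_accounting"]

assert len(voice_call_operations()) == 3  # typical prepaid voice call
assert len(sms_operations()) == 2         # typical SMS message
```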


The system under test supported as many as 179 concurrent sessions per second—equivalent to 9.0 sessions per second per CPU—and 494 operations per second. Moreover, because the system is linearly scalable, the establishment of additional TIMOS instances and the inclusion of more CPUs can provide a proportionate performance increase to meet any conceivable load demand at five-nines service availability.
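Given the measured figure of 9.0 sessions per second per CPU, a rough capacity estimate follows directly; the target loads in the sketch below are hypothetical:

```python
# Back-of-the-envelope capacity planning from the benchmark figure of
# 9.0 sessions per second per CPU; target loads are assumptions.

import math

SESSIONS_PER_SEC_PER_CPU = 9.0

def cpus_needed(target_sessions_per_sec):
    return math.ceil(target_sessions_per_sec / SESSIONS_PER_SEC_PER_CPU)

print(cpus_needed(179))   # the benchmark load itself: 20 CPUs
print(cpus_needed(1000))  # a hypothetical larger load: 112 CPUs
```

This linear extrapolation is only as good as the system's near-linear scalability claim; real deployments would also size the Connection Manager, TIMOS, and database tiers independently.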


A scaled-up version of the benchmark test system can support tens of millions of subscribers. The average authorization latency in the benchmark test results is 34 milliseconds (i.e., a substantially instantaneous response).


It is apparent to one skilled in the art that various changes and modifications can be made to this disclosure, and equivalents employed, without departing from the spirit and scope of the invention. Elements shown with any embodiment are exemplary for the specific embodiment and can be used on other embodiments within this disclosure.

Claims
  • 1. A system comprising: a processor; and a memory coupled with and readable by the processor and having stored therein a sequence of instructions which, when executed by the processor, cause the processor to implement a revenue management system for a communications network, the revenue management system including a gateway layer coupled with the communications network, a revenue capture layer communicatively coupled with the gateway layer, and a database and storage layer communicatively coupled with the revenue capture layer, wherein the gateway layer: receives a request related to a communication on the communications network, performs authentication of the request and authorization for services to support the communication based on the request, converts the request to an event data record, and forwards the event data record based on the services authorized and a type of the request, and wherein the revenue capture layer includes a transactional in-memory object store including a read-only reference object cache of data objects from the database and storage layer, wherein the revenue capture layer: receives the event data record from the gateway layer, and routes the event data record to the transactional in-memory object store or the database and storage layer depending upon the type of the request and a configurable directory server consulted, wherein routing the event data record comprises routing real-time requests to the transactional in-memory object store and routing non-real-time requests to the database and storage layer.
  • 2. The system of claim 1, wherein the transactional in-memory object store includes a reference object cache of database objects from the database and storage layer.
  • 3. The system of claim 2, wherein the reference object cache comprises customer account records used for read-only reference during real-time processes.
  • 4. The system of claim 3, wherein the request related to the communication on the communications network comprises an authentication and authorization request and the real-time processes comprise authentication and authorization processes.
  • 5. The system of claim 1, wherein the transactional in-memory object store includes a data migratory object and wherein the data migratory object fills the reference object cache from a database of the database and storage layer.
  • 6. The system of claim 1, wherein the transactional in-memory object store includes a transient object store and wherein the transient object store includes temporary objects used only by the transactional in-memory object store.
  • 7. The system of claim 6, wherein the temporary objects include active-session objects.
  • 8. The system of claim 7, wherein the temporary objects further include resource-reservation objects.
  • 9. A method for managing revenue from services of a communication network, the method comprising: receiving at a gateway layer of a revenue management system a request related to a communication on the communications network; performing by the gateway layer of the revenue management system authentication of the request and authorization for services to support the communication based on the request; converting the request to an event data record by the gateway layer of the revenue management system; forwarding the event data record from the gateway layer of the revenue management system based on the services authorized and a type of the request; receiving the event data record at a transactional in-memory object store from the gateway layer, wherein a revenue capture layer includes the transactional in-memory object store including a read-only reference object cache of data objects from the database and storage layer; and routing the event data record to the transactional in-memory object store or the database and storage layer by the revenue capture layer depending upon the type of the request and a configurable directory server consulted, wherein routing the event data record comprises routing real-time requests to the transactional in-memory object store and routing non-real-time requests to the database and storage layer.
  • 10. The method of claim 9, wherein the transactional in-memory object store includes a reference object cache of database objects from the database and storage layer.
  • 11. The method of claim 10, wherein the reference object cache comprises customer account records used for read-only reference during real-time processes.
  • 12. The method of claim 11, wherein the request related to the communication on the communications network comprises an authentication and authorization request and the real-time processes comprise authentication and authorization processes.
  • 13. The method of claim 9, wherein the transactional in-memory object store includes a data migratory object and wherein the data migratory object fills the reference object cache from a database of the database and storage layer.
  • 14. The method of claim 9, wherein the transactional in-memory object store includes a transient object store and wherein the transient object store includes temporary objects used only by the transactional in-memory object store.
  • 15. The method of claim 14, wherein the temporary objects include active-session objects.
  • 16. The method of claim 15, wherein the temporary objects further include resource-reservation objects.
CROSS-REFERENCE TO RELATED APPLICATIONS

This application claims the benefit of Provisional Application No. 60/694,743, filed 28 Jun. 2005, and Provisional Application No. 60/703,687, filed 28 Jul. 2005, which are hereby incorporated by reference in their entireties.

Related Publications (1)
Number Date Country
20070091874 A1 Apr 2007 US
Provisional Applications (2)
Number Date Country
60694743 Jun 2005 US
60703687 Jul 2005 US