Gateway for achieving low latency and high availability in a real time event processing system

Information

  • Patent Grant
  • 8223777
  • Patent Number
    8,223,777
  • Date Filed
    Wednesday, November 15, 2006
  • Date Issued
    Tuesday, July 17, 2012
Abstract
Methods, machine-readable media, and apparatuses are disclosed for interfacing computer networks. According to one embodiment, a method for interfacing a first network using a first protocol with a second network using a second protocol can comprise receiving an event in the form of a first message from the first network, where the first message is encoded using the first protocol. The first message can be translated into a second message, where the second message is encoded using the second protocol. The second message can be transmitted to the second network. If a response is not received from the second network within a configurable interval, the event can be processed based upon at least one rule that is responsive to the event. A third message can then be transmitted to the first network, where the third message is responsive to the first message and is encoded in the first protocol.
Description

This application is also related to the following commonly-owned, co-pending applications (the “Related Applications”), the entire disclosure of each of which is incorporated herein by reference:


U.S. patent application Ser. No. 09/569,097, filed May 14, 1997, by Owens et al. and entitled “Method and Apparatus for Providing a Clean Accounting Close for a Real-Time Billing System”; U.S. patent application Ser. No. 10/706,151, filed Mar. 30, 2000, now U.S. Pat. No. 7,395,262, issued Jul. 1, 2008, by Rothrock and entitled “Techniques for Searching for Best Matches in Tables of Information”; U.S. patent application Ser. No. 09/562,785, filed May 2, 2000, now U.S. Pat. No. 7,257,611, issued Aug. 14, 2007, by Shankar et al. and entitled “Distributed Nonstop Architecture for an Event Processing System”; U.S. patent application Ser. No. 09/617,590, filed Jul. 18, 2000, now U.S. Pat. No. 7,233,918, issued Jun. 19, 2007, by Ye et al. and entitled “Rating Billing Events in Real Time According to Account Usage Information”; U.S. patent application Ser. No. 09/967,493, filed Sep. 27, 2001, now U.S. Pat. No. 7,406,471, issued Jul. 29, 2008, by Shankar et al. and entitled “Scalable Multi-Database Event Processing System Using Universal Subscriber-Specific Data and Universal Global Data”; U.S. patent application Ser. No. 10/394,409, filed Mar. 21, 2003, by Labuda et al. and entitled “Transaction in Memory Object Store”; U.S. patent application Ser. No. 11/415,759, filed May 1, 2006, now U.S. Publication No. 2006/0248010, published Nov. 2, 2006, by Krishnamoorthy et al. and entitled “Revenue Management Systems and Methods”; U.S. patent application Ser. No. 11/478,558, filed Jun. 28, 2006, now U.S. Publication No. 2007/0091874, published Apr. 26, 2007, by Rockel et al. and entitled “Revenue Management System and Method”; and U.S. patent application Ser. No. 11/496,057, filed Jul. 28, 2006, now U.S. Publication No. 2007/0198283, published Aug. 23, 2007, by Labuda et al. and entitled “Revenue Management System and Method.”


BACKGROUND OF THE INVENTION

Embodiments of the present invention relate generally to computer gateways. More particularly, embodiments of the present invention relate to a gateway for achieving low latency and high availability in a real time event processing system.


In computer parlance, a gateway interfaces two or more networks that use different communications protocols (for the purposes of this description, a “network” may comprise one or more computing devices). A gateway may be implemented in hardware or software, and performs the tasks of receiving a network message in the protocol language of the “source” network, translating the message to the protocol language of the “destination” network, and then transmitting the translated message to the “destination” network. Gateways are commonly used, for example, to interface local area networks (IPX/SPX protocol) to the Internet (TCP/IP protocol).


Gateways are also used in the context of event processing systems. Service providers in the telecommunications and media sectors operate such systems to manage business events involving subscriber access, billing, and account settlement. These systems may include a “satellite” network that tracks subscriber actions and generates business events based on those actions, and a “management” network that processes the business events and sends responses to the satellite network. A gateway is employed to enable inter-network communication between the two.


Recently, service providers have sought to improve the performance of their event processing systems to meet new market demands. One such market demand is the option of prepaying for services. Traditionally, a subscriber is granted access to a service and is billed at the end of a periodic cycle for her usage over the preceding period. This is known as postpaid billing. However, a subscriber may prefer to prepay for a fixed quantum of service access in order to, for example, budget her usage, or avoid the inconvenience of a monthly bill or long-term service contract. Additionally, service providers may, at times, choose to charge in a prepaid fashion even for postpaid customers in order to limit credit exposure or fraud.


A system for managing prepaid service access should provide real time tracking and settlement of subscriber authorization, authentication and accounting events. In turn, the gateway that sits between the “satellite” and “management” networks should provide for (1) low latency in relaying events and responses between satellite and management networks; and (2) high availability of the management network. However, current state-of-the-art gateways are not adapted to meet these demands. For example, current gateways do not have a mechanism for ensuring that time-sensitive network messages sent from a source network (such as a service provider's satellite network) to a destination network (such as a service provider's management network) are responded to within a given time interval. They also do not support multiple, load-balanced connections to a destination network.


Hence, there is a need in the art for an improved gateway that can achieve low latency and high availability in a distributed system such as a real time event processing system.


BRIEF SUMMARY OF THE INVENTION

Methods, machine-readable media, and apparatuses are disclosed for interfacing computer networks. According to one embodiment of the present invention, a method for interfacing a first network with a second network can comprise receiving an event in the form of a first message from the first network. The first message can be encoded using a first protocol, such as MBI or Diameter protocol. The first message can be translated into a second message, where the second message is encoded using a second protocol, such as Portal Communication Protocol. The second message can be transmitted to the second network. If a response is not received from the second network within a configurable interval, the event can be processed based upon at least one rule that is responsive to the event. A third message can then be transmitted to the first network, where the third message is responsive to the first message and is encoded in the first protocol.


According to another embodiment of the present invention, the method can further comprise maintaining at least two simultaneous connections to the second network.


According to yet another embodiment of the present invention, the method can further comprise determining which of the at least two simultaneous connections to use in transmitting the second message to the second network.


According to one aspect of the present invention, a machine-readable medium can have stored thereon a series of instructions which, when executed by a processor, cause the processor to interface a first network with a second network by receiving an event in the form of a first message from the first network. The first message can be encoded using a first protocol, such as MBI or Diameter protocol. The first message can be translated into a second message, where the second message is encoded using a second protocol, such as Portal Communication Protocol. The second message can be transmitted to the second network. If a response is not received from the second network within a configurable interval, the event can be processed based upon at least one rule that is responsive to the event. A third message can then be transmitted to the first network, where the third message is responsive to the first message and is encoded in the first protocol.


According to another aspect of the present invention, the machine-readable medium can further comprise instructions to maintain at least two simultaneous connections to the second network.


According to yet another aspect of the present invention, the machine-readable medium can further comprise instructions to determine which of the at least two simultaneous connections to use in transmitting the second message to the second network.


According to still another aspect of the present invention, the machine-readable medium can further comprise instructions to enable translation of a third protocol.


According to one embodiment of the present invention, an apparatus that interfaces a first network with a second network can include one or more communication interfaces, one or more storage devices, and one or more processors. The one or more processors can be communicatively coupled to the one or more communication interfaces and the one or more storage devices. Furthermore, the one or more processors can be adapted to receive an event in the form of a first message from the first network. The first message can be encoded using a first protocol, such as MBI or Diameter protocol. The one or more processors can be adapted to translate the first message into a second message, where the second message is encoded using a second protocol, such as Portal Communication Protocol, and transmit the second message to the second network. If a response to the second message is not received from the second network within a configurable interval, the one or more processors can be adapted to process the event based on at least one rule that is responsive to the event. Finally, the one or more processors can be adapted to transmit a third message to the first network, where the third message is responsive to the first message and is encoded in the first protocol.


According to another embodiment of the present invention, the apparatus may further include at least two communications interfaces that are in simultaneous communication with the second network.


According to yet another embodiment of the present invention, the one or more processors may be further adapted to determine which of the at least two communication interfaces in simultaneous connection with the second network to use in transmitting the second message to the second network.


According to still another embodiment of the present invention, the apparatus may further include logic stored on the one or more storage devices that allows translation of a third protocol.





BRIEF DESCRIPTION OF THE DRAWINGS


FIG. 1 is a block diagram illustrating exemplary components of a network environment in which an embodiment of the present invention may be implemented.



FIG. 2 is a block diagram illustrating exemplary components of another network environment in which an embodiment of the present invention may be implemented.



FIG. 3 is a block diagram illustrating exemplary components of a gateway, in accordance with an embodiment of the present invention.



FIG. 4 is a block diagram illustrating the functional architecture of a gateway and a process flow for processing an event in the gateway, in accordance with an embodiment of the present invention.



FIG. 5 is a flowchart illustrating the steps performed in processing an event in a gateway, in accordance with an embodiment of the present invention.





DETAILED DESCRIPTION OF THE INVENTION

In the following description, for the purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of the present invention. It will be apparent, however, to one skilled in the art that the present invention may be practiced without some of these specific details. In other instances, well-known structures and devices are shown in block diagram form.


Embodiments of the present invention relate to a gateway that can provide (1) low latency in relaying events and responses between source and destination networks; and (2) high availability of a destination network. Thus, various embodiments of the present invention may be particularly useful in a real time event processing system (although they may not be limited thereto). According to one embodiment, a gateway can implement a “time-out” monitoring facility wherein an incoming event is timed from the moment it is received from a source network. If a response to the event is not received from the destination network within a configurable interval (indicating a “time-out”), the gateway can process the event based on one or more business rules that are responsive to the event. The gateway can then transmit a response to the original source network. In this fashion, the source network can be provided a low-latency response to time-sensitive events. In some embodiments, the timed-out event is discarded after it is processed locally by the gateway. In other embodiments, the timed-out event can be written to a local or networked storage device (such as Random Access Memory (RAM), flash memory, hard disk, optical disk, or the like) for later communication to the destination network.
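By way of a non-limiting illustration, the following minimal sketch (in Java) shows one way the “time-out” monitoring facility described above could be expressed: each event is timed from receipt, and if the destination does not answer within the configurable interval, the gateway answers locally from its business rules. The class and interface names (TimeoutMonitorSketch, DestinationClient, BusinessRules) are hypothetical placeholders introduced for illustration only and are not part of the disclosure.

```java
import java.util.concurrent.*;

// Minimal sketch of the "time-out" monitoring idea: the gateway times each
// event from receipt and falls back to local, rule-based processing if the
// destination network does not answer within a configurable interval.
// All names here are illustrative placeholders, not taken from the patent.
public class TimeoutMonitorSketch {

    interface DestinationClient {
        Future<String> send(String translatedMessage);   // asynchronous call to the destination network
    }

    interface BusinessRules {
        String processLocally(String event);             // rule-based fallback response
    }

    private final DestinationClient destination;
    private final BusinessRules rules;
    private final long timeoutMillis;                    // the configurable interval

    public TimeoutMonitorSketch(DestinationClient destination, BusinessRules rules, long timeoutMillis) {
        this.destination = destination;
        this.rules = rules;
        this.timeoutMillis = timeoutMillis;
    }

    /** Returns a response for the source network, from the destination if it answers in time, else from local rules. */
    public String handleEvent(String incomingEvent) {
        String translated = translate(incomingEvent);    // first protocol -> second protocol
        Future<String> reply = destination.send(translated);
        try {
            return reply.get(timeoutMillis, TimeUnit.MILLISECONDS);
        } catch (TimeoutException e) {
            // Destination unavailable or too slow: answer locally so the source
            // network still receives a low-latency response.
            return rules.processLocally(incomingEvent);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            return rules.processLocally(incomingEvent);
        } catch (ExecutionException e) {
            return rules.processLocally(incomingEvent);
        }
    }

    private String translate(String message) {
        return message;                                  // placeholder for protocol translation
    }
}
```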


According to another embodiment, a gateway can maintain multiple simultaneous connections to a destination network to ensure high availability. For example, if one network connection fails, the gateway can use the other connection to transmit a message to the destination. The gateway can also determine which of the simultaneous connections to use based on one or more factors such as their relative speed.


According to yet another embodiment, a gateway can be upgraded in a modular fashion to support new network protocols or new types of events. For example, a set of one or more “plug-ins” may be provided that correspond to additional translation schemes or business logic components. In such case, upgrading may comprise adding the one or more plug-ins to the gateway. Thus, if a service provider introduces a service that requires the processing of a new authorization event, the gateway can be easily modified to process the new event in the case of a time-out.



FIG. 1 is a block diagram illustrating exemplary components of a network environment in which an embodiment of the present invention may be implemented. In this simple example, gateway 110 interfaces networks 100 and 120, which include network nodes 101, 102, 103, 121, 122, and 123. Nodes 101, 102, 103, 121, 122, and 123 may represent any type of computer, computing device, or electronic device capable of communicating via a network, such as a general purpose personal computer, cell phone, PDA, and/or workstation computer. Although networks 100 and 120 are depicted as comprising three nodes each, any number of nodes may be present. For example, networks 100 and 120 may each comprise one computing system.


With respect to network protocols, networks 100 and 120 may represent any type of telecommunications or computer network utilizing any type of communications protocol. Such protocols may include, but are not limited to, HP OpenCall's MBI, Diameter, Radius, FLIST, TCP/IP, IPX/SPX, HTTP, IMAP, SNMP, and Portal Communication Protocol. In operation, gateway 110 translates network messages from the protocol language of network 100 to the protocol language of network 120, and vice versa. As mentioned previously, gateway 110 may also perform additional tasks based on the content of the network messages (e.g., business events) in order to ensure a minimum quality of service between networks 100 and 120. Such details are discussed in further detail below.



FIG. 2 is a block diagram illustrating exemplary components of another network environment in which an embodiment of the present invention may be implemented. This example shows an embodiment of the present invention in the context of a service provider's event processing system. Thus, network 100 represents a “satellite” network and network 120 represents a “management” network. As in FIG. 1, gateway 110 interconnects networks 100 and 120.


Satellite network 100 can log subscriber actions (such as wireless or wired phone calls) and send records of those actions to a service control point 200. In turn, service control point 200 generates business events (such as authorization, authentication, rating, or account update events) and transmits those events to management network 120, via gateway 110, for processing.


Management network 120 can process the events received from service control point 200 at processing node 210 or 211. A record of the event can then be stored in database 220, and a response can be transmitted, via gateway 110, back to network 100. Processing nodes 210 and 211 may be computing systems such as general purpose computers, specialized server computers (including, but not limited to, PC servers, UNIX servers, mid-range servers, mainframe computers, or rack-mounted servers), server farms, server clusters, or the like. Alternatively, nodes 210 and 211 may be separate processors within a single computing platform, or even separate processes or threads within a single processor. Database 220 may be any type of data repository or any combination of data repositories.


As shown in FIG. 2, gateway 110 may simultaneously connect to at least two distinct processing nodes 210 and 211 in network 120. This enables gateway 110 to continue to transmit messages to network 120 in the event that a network connection, or a processing node, fails. In other embodiments, gateway 110 may utilize two or more simultaneous connections to a single processing node, or a single connection to multiple processing nodes.


Here, satellite network 100 may be a cellular satellite network, and management network 120 may be a billing and access management system. A cellular phone user connected to network 100 can place a long-distance phone call, which generates an authorization request at service control point 200. The authorization request is forwarded to gateway 110, where it is translated into the protocol of system 120, and then transmitted to a processing node 210 or 211. The processing node can authorize or deny the request, and then transmit this response back through gateway 110 to cellular satellite network 100.



FIG. 3 is a block diagram illustrating the components of exemplary gateway 110, in accordance with an embodiment of the present invention. In this example, gateway 110 includes one or more communication interfaces 300, 330, and 340, one or more processors 310 (e.g., CPUs), and one or more storage devices 320. Processor 310 is communicatively coupled to communications interfaces 300, 330, and 340 and storage device 320, and is responsible for executing the core functions of gateway 110. For example, among other operations, processor 310 executes the “time-out” monitoring logic previously described to enable low-latency operation of the gateway. The functional components of processor 310 are depicted in further detail in FIG. 4.


Storage device 320 may be a data repository or a combination of data repositories that contains, among other data, translation schemes 321 and 322, and business logic 323. Storage 320 may be physically implemented as RAM, ROM, flash memory, a hard disk, an optical disk, or any other type of local or networked data storage device. Translation schemes 321 and 322 are modular components that enable processor 310 to perform its duty of translating network messages from the protocol language of network 100 to the protocol language of network 120, and vice versa. Storage 320 may also contain additional translation schemes for protocols used by other networks. It is contemplated that translation schemes may be easily added or removed as needed to support different types of networks.
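As a non-limiting illustration of how such modular translation schemes might be added or removed, the following sketch registers schemes in a simple run-time registry keyed by protocol name. The TranslationScheme interface and registry class are assumptions introduced for illustration; the patent states only that the schemes are modular components.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Sketch of a registry of modular translation schemes that can be added or
// removed at run time to support different networks. Names are illustrative.
public class TranslationRegistrySketch {

    interface TranslationScheme {
        String toInternal(byte[] networkMessage);        // source protocol -> internal format
        byte[] fromInternal(String internalRecord);      // internal format -> network protocol
    }

    private final Map<String, TranslationScheme> schemes = new ConcurrentHashMap<>();

    public void register(String protocolName, TranslationScheme scheme) {
        schemes.put(protocolName, scheme);               // "plug in" support for a new protocol
    }

    public void unregister(String protocolName) {
        schemes.remove(protocolName);                    // remove support that is no longer needed
    }

    public TranslationScheme lookup(String protocolName) {
        TranslationScheme scheme = schemes.get(protocolName);
        if (scheme == null) {
            throw new IllegalArgumentException("No translation scheme for " + protocolName);
        }
        return scheme;
    }
}
```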


Business logic 323 is another modular component that contains one or more business rules. These business rules are applied when a “time-out” occurs to enable gateway 110 to locally process and respond to an event. For example, component 323 may contain a rule that authorizes subscriber requests for placing a local phone call, but denies subscriber requests for placing long-distance phone calls. As with translation schemes 321 and 322, business logic 323 and other business logic components may be added or removed as needed to support the processing of different types of events.
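The rule in the example above might be sketched as follows. The event fields and enum values are illustrative assumptions; the patent does not define a concrete event schema.

```java
// Sketch of a time-out business rule like the example above: authorize local
// call requests, deny long-distance requests, until the management network is
// reachable again.
public class LocalCallRuleSketch {

    enum CallType { LOCAL, LONG_DISTANCE }

    record AuthorizationEvent(String subscriberId, CallType callType) {}

    /** Returns true if the request is authorized when processed locally on time-out. */
    public boolean apply(AuthorizationEvent event) {
        // Conservative fallback: permit low-risk local calls, deny long-distance calls.
        return event.callType() == CallType.LOCAL;
    }
}
```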


Communication interfaces 300, 330, and 340 are network “sockets” that connect gateway 110 with networks 100 and 120. In an exemplary embodiment, at least one interface 300 is in communication with a source network (such as network 100), and at least two interfaces 330 and 340 are in communication with a destination network (such as network 120). In other embodiments, alternative configurations may be possible.



FIG. 4 is a block diagram illustrating the functional architecture of a gateway and a process flow for processing an event in the gateway, in accordance with an embodiment of the present invention. Specifically, FIG. 4 illustrates how exemplary gateway 110 implements its “time-out” monitoring facility for incoming events. The functional components depicted here may be implemented in hardware as separate components of processor 310, or separate processors of gateway 110. Alternatively, they may be implemented in software as separate processes or threads in processor 310, or separate processes or threads across multiple processors. Furthermore, the architecture of FIG. 4 may be arranged using different sets of components, different arrangements of components, or any other configuration.


The process begins when dispatcher pipeline 400 receives an event in the form of a network message from communications interface 300. Dispatcher pipeline 400 translates the network message into an internal format according to translation scheme 321 or 322, starts a timer, and relays the translated message to router pipeline 410. Router pipeline 410 can evaluate the type of business event encapsulated in the message and send the message to the correct processing pipeline 420 or 421. For example, processing pipeline 420 may be adapted to process authorization events, and processing pipeline 421 may be adapted to process accounting events. Thus, if the current event is an accounting event, router pipeline 410 will forward the message to processing pipeline 421. In some embodiments, there may be only a single processing pipeline 420 that handles all types of events. In that case, dispatcher pipeline 400 may pass the message directly to the single processing pipeline, thereby bypassing a routing stage. In other embodiments, there may be more than two processing pipelines.
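By way of illustration, the routing stage might be sketched as a simple dispatch on event type, as below. The event types, record fields, and pipeline wiring are hypothetical assumptions, not details taken from the disclosure.

```java
import java.util.EnumMap;
import java.util.Map;
import java.util.function.Consumer;

// Sketch of the routing stage: the router inspects the event type carried in
// the internally formatted message and hands it to the matching processing
// pipeline (e.g., accounting events to the accounting pipeline).
public class RouterPipelineSketch {

    enum EventType { AUTHORIZATION, ACCOUNTING }

    record InternalMessage(EventType type, String payload) {}

    private final Map<EventType, Consumer<InternalMessage>> pipelines = new EnumMap<>(EventType.class);

    public void registerPipeline(EventType type, Consumer<InternalMessage> pipeline) {
        pipelines.put(type, pipeline);
    }

    public void route(InternalMessage message) {
        Consumer<InternalMessage> pipeline = pipelines.get(message.type());
        if (pipeline == null) {
            throw new IllegalStateException("No processing pipeline for " + message.type());
        }
        pipeline.accept(message);
    }
}
```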


Processing pipeline 420 or 421 translates the message into the protocol language of the destination network and obtains a connection from connection pool 430. As shown, connection pool 430 maintains at least two simultaneous connections 431 and 432 to the destination network. Processing pipelines 420 and 421 may implement logic that selects a connection based on one or more factors, such as their relative load or speed. Alternatively, the processing pipelines may select the first connection that is not in use by another processing pipeline. In either case, processing pipeline 420 or 421 transmits the translated message through connection 431 or 432 to the destination network.
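A minimal sketch of such a connection pool, using the simpler "first connection not in use" policy, is shown below; a fuller implementation could instead weigh relative load or speed, as noted above. The Connection interface is an assumption for illustration only.

```java
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;

// Sketch of a connection pool holding at least two simultaneous connections to
// the destination network. Selecting the first healthy, free connection lets
// the gateway keep transmitting if one connection or processing node fails.
public class ConnectionPoolSketch {

    interface Connection {
        boolean isHealthy();
        boolean tryAcquire();                            // returns false if already in use
        void release();
        void send(byte[] message);
    }

    private final List<Connection> connections = new CopyOnWriteArrayList<>();

    public void add(Connection connection) {
        connections.add(connection);
    }

    /** Sends over the first healthy, free connection; returns false if none is available. */
    public boolean send(byte[] message) {
        for (Connection connection : connections) {
            if (connection.isHealthy() && connection.tryAcquire()) {
                try {
                    connection.send(message);
                    return true;
                } finally {
                    connection.release();
                }
            }
        }
        return false;                                    // destination currently unreachable
    }
}
```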


If the processing pipeline receives a response to the message/event from the destination network before the timer reaches a threshold value, the response is forwarded to output pipeline 460. Output pipeline 460 translates the response into the protocol language of the source network and then transmits it to the source network through communications interface 300.


If the processing pipeline does not receive a response to the message/event from the destination network before the timer reaches the threshold value (indicating the destination is unavailable or operating too slowly), the message is forwarded to timeout pipeline 440 for local processing. Specifically, timeout pipeline 440 evaluates the business event encapsulated in the message, applies one or more business rules from business logic 323 that are responsive to the event, and formulates a response. Timeout pipeline 440 then forwards this response to output pipeline 460 for transmittal to the source network. In various embodiments, processing pipelines 420 and 421 may also send a timed-out event to exception pipeline 450. This pipeline writes the timed-out event to storage 320 so that it can be re-communicated to the destination network once it becomes available.
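The exception-handling idea could be sketched as an append-only journal of timed-out event data records that is replayed once the destination is reachable again, as below. The file-based store is an assumed implementation detail introduced for illustration; the patent requires only that the event be written to a local or networked storage device.

```java
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;
import java.util.List;

// Sketch of an exception store: timed-out events are appended to local storage
// so they can be re-sent to the destination network when it becomes available.
public class ExceptionStoreSketch {

    private final Path journal;

    public ExceptionStoreSketch(Path journal) {
        this.journal = journal;
    }

    public void storeTimedOutEvent(String eventDataRecord) throws IOException {
        Files.writeString(journal, eventDataRecord + System.lineSeparator(),
                StandardCharsets.UTF_8, StandardOpenOption.CREATE, StandardOpenOption.APPEND);
    }

    /** Reads back stored events for re-communication once the destination is available. */
    public List<String> pendingEvents() throws IOException {
        return Files.exists(journal) ? Files.readAllLines(journal, StandardCharsets.UTF_8) : List.of();
    }
}
```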



FIG. 5 is a flowchart illustrating an exemplary flow of an event in a gateway, in accordance with an embodiment of the present invention. This chart is a more abstract representation of the process depicted in FIG. 4, and illustrates that the same “time-out” functionality may be implemented in a gateway that does not utilize the same or similar functional pipelines. At step 500, the gateway receives an event encoded as a network message in the protocol language of the source network. In the context of a service provider's event processing system, this event may be, for example, an authorization, authentication, rating, or account update event. At step 502, the event is translated into an internal event data record, or EDR, so that it may be subsequently processed in the gateway without performing multiple, unnecessary translations. In an alternative embodiment, the incoming message may be directly translated into the protocol language of the destination network.


Once the message has been translated, a timer is started at step 504, and the EDR is translated and sent to the destination network at step 506.


At steps 508 and 510, the process operates in a loop until either a response is received from the destination network, or the timer has reached its threshold. As mentioned previously, the threshold is configurable. In some embodiments, the threshold value may be responsive to the event that is being processed. For example, the acceptable latency for an authorization request may be configured to be lower than the acceptable latency of an account balance inquiry request. In other embodiments, the threshold value may be the same for all types of events.
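A per-event-type threshold of this kind might be configured as in the sketch below, matching the example in which an authorization request tolerates less latency than a balance inquiry. The event names and millisecond values are illustrative assumptions only.

```java
import java.util.Map;

// Sketch of a per-event-type timeout threshold with a default for event types
// that have no specific configuration.
public class TimeoutConfigSketch {

    private final Map<String, Long> thresholdsMillis;
    private final long defaultMillis;

    public TimeoutConfigSketch(Map<String, Long> thresholdsMillis, long defaultMillis) {
        this.thresholdsMillis = thresholdsMillis;
        this.defaultMillis = defaultMillis;
    }

    public long thresholdFor(String eventType) {
        return thresholdsMillis.getOrDefault(eventType, defaultMillis);
    }

    public static TimeoutConfigSketch example() {
        return new TimeoutConfigSketch(
                Map.of("AUTHORIZATION", 200L,            // tighter bound for authorization requests
                       "BALANCE_INQUIRY", 1000L),        // looser bound for balance inquiries
                500L);                                   // default for all other event types
    }
}
```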


If a response is received before time expires, the response is translated from the destination to the source protocol at step 512 and transmitted to the source network at step 514. If a response is not received before time expires, the event is locally handled by the gateway. First, the EDR is processed based on one or more business rules responsive to the corresponding business event (step 516). Second, the EDR is optionally stored to a local or networked storage device as part of an exception handling routine (step 518). Third, a response is generated and translated into the protocol language of the source network (step 520). And finally, the response is transmitted to the source network (step 522).


In the foregoing description, for the purposes of illustration, methods were described in a particular order. It should be appreciated that in alternate embodiments, the methods may be performed in a different order than that described. It should also be appreciated that the methods described above may be performed by hardware components or may be embodied in sequences of machine-executable instructions, which may be used to cause a machine, such as a general-purpose or special-purpose processor or logic circuits programmed with the instructions, to perform the methods. These machine-executable instructions may be stored on one or more machine-readable mediums, such as CD-ROMs or other types of optical disks, floppy diskettes, ROMs, RAMs, EPROMs, EEPROMs, magnetic or optical cards, flash memory, or other types of machine-readable mediums suitable for storing electronic instructions. Alternatively, the methods may be performed by a combination of hardware and software.


While illustrative and presently preferred embodiments of the invention have been described in detail herein, it is to be understood that the inventive concepts may be otherwise variously embodied and employed, and that the appended claims are intended to be construed to include such variations, except as limited by the prior art.

Claims
  • 1. A method for interfacing a first network using a first protocol with a second network using a second protocol, comprising: receiving, by a device, an event in the form of a first message from the first network, wherein the first message is encoded using the first protocol;translating, by the device, the first message into a second message, wherein the second message is encoded using the second protocol;transmitting, by the device, the second message to the second network;if a response to the second message is received from the second network within a configurable interval, transmitting, by the device, a third message to the first network, wherein the third message is based on the response to the second message received from the second network, and wherein the third message is encoded in the first protocol; andif a response to the second message is not received from the second network within a configurable interval: processing, by the device, the event based on at least one rule that is responsive to the event; andtransmitting, by the device, a fourth message to the first network, wherein the fourth message is based on the processing of the event performed by the device, and wherein the fourth message is encoded in the first protocol.
  • 2. The method of claim 1, further comprising maintaining at least two simultaneous connections to the second network.
  • 3. The method of claim 2, further comprising determining which of the at least two simultaneous connections to use in transmitting the second message to the second network.
  • 4. The method of claim 1, wherein the event is an authentication event.
  • 5. The method of claim 1, wherein the event is an authorization event.
  • 6. The method of claim 1, wherein the event is a rating event.
  • 7. The method of claim 1, wherein the event is an account balance update event.
  • 8. A non-transitory machine-readable medium having stored thereon a series of instructions which, when executed by a processor, cause the processor to interface a first network using a first protocol with a second network using a second protocol by: receiving an event in the form of a first message from the first network, wherein the first message is encoded using the first protocol;translating the first message into a second message, wherein the second message is encoded using the second protocol;transmitting the second message to the second network;if a response to the second message is received from the second network within a configurable interval, transmitting a third message to the first network, wherein the third message is based on the response to the second message received from the second network, and wherein the third message is encoded in the first protocol; andif a response to the second message is not received from the second network within a configurable interval: processing the event based on at least one rule that is responsive to the event; andtransmitting a fourth message to the first network, wherein the fourth message is based on the processing of the event performed by the processor, and wherein the fourth message is encoded in the first protocol.
  • 9. The non-transitory machine-readable medium of claim 8, further comprising instructions which, when executed by the processor, cause the processor to maintain at least two simultaneous connections to the second network.
  • 10. The non-transitory machine-readable medium of claim 9, further comprising instructions which, when executed by the processor, cause the processor to determine which of the at least two simultaneous connections to use in transmitting the second message to the second network.
  • 11. The non-transitory machine-readable medium of claim 8, further comprising instructions which, when executed by the processor, enable translation of a third protocol.
  • 12. The non-transitory machine-readable medium of claim 8, wherein the event is an authentication event.
  • 13. The non-transitory machine-readable medium of claim 8, wherein the event is an authorization event.
  • 14. The non-transitory machine-readable medium of claim 8, wherein the event is a rating event.
  • 15. The non-transitory machine-readable medium of claim 8, wherein the event is an account balance update event.
  • 16. An apparatus that interfaces a first network using a first protocol with a second network using a second protocol, comprising: one or more communication interfaces;one or more storage devices; andone or more processors in communication with the one or more communication interfaces and the one or more storage devices, the processor being configured to: receive an event in the form of a first message from the first network, wherein the first message is encoded using the first protocol;translate the first message into a second message, wherein the second message is encoded using the second protocol;transmit the second message to the second network;if a response to the second message is received from the second network within a configurable interval, transmit a third message to the first network, wherein the third message is based on the response to the second message received from the second network, and wherein the third message is encoded in the first protocol; andif a response to the second message is not received from the second network within a configurable interval: process the event based on at least one rule that is responsive to the event; andtransmit a fourth message to the first network, wherein the fourth message is based on the processing of the event performed by the processor, and wherein the fourth message is encoded in the first protocol.
  • 17. The apparatus of claim 16, wherein at least two of the communications interfaces are in simultaneous communication with the second network.
  • 18. The apparatus of claim 17, wherein the processor is further configured to determine which of the at least two communication interfaces in simultaneous connection with the second network to use in transmitting the second message to the second network.
  • 19. The apparatus of claim 16, further comprising logic stored on the one or more storage devices that allows translation of a third protocol.
  • 20. The apparatus of claim 16, wherein the event is an authentication event.
  • 21. The apparatus of claim 16, wherein the event is an authorization event.
  • 22. The apparatus of claim 16, wherein the event is a rating event.
  • 23. The apparatus of claim 16, wherein the event is an account balance update event.
CROSS-REFERENCES TO RELATED APPLICATIONS

This application claims the benefit of U.S. Provisional Application No. 60/737,429, filed Nov. 15, 2005 by Rockel et al. and entitled “Revenue Management Systems and Method,” which is incorporated herein by reference.

US Referenced Citations (249)
Number Name Date Kind
4430530 Kandell et al. Feb 1984 A
4831582 Miller et al. May 1989 A
4849884 Axelrod et al. Jul 1989 A
4868743 Nishio Sep 1989 A
4918593 Huber Apr 1990 A
4968873 Dethloff et al. Nov 1990 A
5006978 Neches Apr 1991 A
5010485 Bigari Apr 1991 A
5036389 Morales Jul 1991 A
5043872 Cheng et al. Aug 1991 A
5163148 Walls Nov 1992 A
5212787 Baker et al. May 1993 A
5220501 Lawlor et al. Jun 1993 A
5224034 Katz et al. Jun 1993 A
5241670 Eastridge et al. Aug 1993 A
5291583 Bapat Mar 1994 A
5295256 Bapat Mar 1994 A
5313664 Sugiyama et al. May 1994 A
5386413 McAuley et al. Jan 1995 A
5426780 Gerull et al. Jun 1995 A
5448623 Wiedeman et al. Sep 1995 A
5448727 Annevelink Sep 1995 A
5450477 Amarant et al. Sep 1995 A
5452451 Akizawa et al. Sep 1995 A
5469497 Pierce et al. Nov 1995 A
5475585 Bush Dec 1995 A
5475838 Fehskens et al. Dec 1995 A
5483445 Pickering Jan 1996 A
5495609 Scott Feb 1996 A
5499371 Henninger et al. Mar 1996 A
5504885 Alashqur Apr 1996 A
5506966 Ban Apr 1996 A
5517555 Amadon et al. May 1996 A
5523942 Tyler et al. Jun 1996 A
5530853 Schell et al. Jun 1996 A
5544302 Nguyen Aug 1996 A
5548749 Kroenke et al. Aug 1996 A
5555444 Diekelman Sep 1996 A
5560005 Hoover et al. Sep 1996 A
5579375 Ginter Nov 1996 A
5590395 Diekelman et al. Dec 1996 A
5613012 Hoffman et al. Mar 1997 A
5615109 Eder Mar 1997 A
5615249 Solondz Mar 1997 A
5615362 Jensen et al. Mar 1997 A
5627979 Chang et al. May 1997 A
5644736 Healy et al. Jul 1997 A
5649118 Carlisle et al. Jul 1997 A
5666648 Stuart Sep 1997 A
5677945 Mullins et al. Oct 1997 A
5684965 Pickering Nov 1997 A
5694598 Durand et al. Dec 1997 A
5706516 Chang et al. Jan 1998 A
5717924 Kawai Feb 1998 A
5732400 Mandler et al. Mar 1998 A
5737414 Walker et al. Apr 1998 A
5745754 Lagarde et al. Apr 1998 A
5765159 Srinivasan Jun 1998 A
5778189 Kimura et al. Jul 1998 A
5797137 Golshani et al. Aug 1998 A
5799072 Vulcan et al. Aug 1998 A
5799087 Rosen Aug 1998 A
5806061 Chaudhuri et al. Sep 1998 A
5809503 Aoshima Sep 1998 A
5815807 Osmani et al. Sep 1998 A
5822747 Graefe et al. Oct 1998 A
5832068 Smith Nov 1998 A
5842220 De Groot et al. Nov 1998 A
5845206 Castiel et al. Dec 1998 A
5845274 Chadha et al. Dec 1998 A
5850544 Parvathaneny et al. Dec 1998 A
5852820 Burrows Dec 1998 A
5854835 Montgomery et al. Dec 1998 A
5864845 Voorhees et al. Jan 1999 A
5870473 Boesch et al. Feb 1999 A
5870724 Lawlor et al. Feb 1999 A
5873093 Williamson et al. Feb 1999 A
5875435 Brown Feb 1999 A
5883584 Langemann et al. Mar 1999 A
5884290 Smorodinsky et al. Mar 1999 A
5893108 Srinivasan et al. Apr 1999 A
5898762 Katz Apr 1999 A
5909440 Ferguson et al. Jun 1999 A
5913164 Pawa et al. Jun 1999 A
5915253 Christiansen Jun 1999 A
5920629 Rosen Jul 1999 A
5924094 Sutter Jul 1999 A
5937406 Balabine et al. Aug 1999 A
5960416 Block Sep 1999 A
5963648 Rosen Oct 1999 A
5966649 Gulliford et al. Oct 1999 A
5970417 Toyryla et al. Oct 1999 A
5974407 Sacks Oct 1999 A
5974441 Rogers et al. Oct 1999 A
5974506 Sicola et al. Oct 1999 A
5983223 Perlman Nov 1999 A
5987233 Humphrey Nov 1999 A
6005926 Mashinsky Dec 1999 A
6011795 Varghese et al. Jan 2000 A
6012057 Mayer et al. Jan 2000 A
6016341 Lim Jan 2000 A
6021409 Burrows Feb 2000 A
6035326 Miles et al. Mar 2000 A
6047067 Rosen Apr 2000 A
6047267 Owens et al. Apr 2000 A
6047284 Owens et al. Apr 2000 A
6058173 Penfield et al. May 2000 A
6058375 Park May 2000 A
6061679 Bournas et al. May 2000 A
6061763 Rubin et al. May 2000 A
6067574 Tzeng May 2000 A
6070051 Astrom et al. May 2000 A
6075796 Katseff et al. Jun 2000 A
6078897 Rubin et al. Jun 2000 A
6092055 Owens et al. Jul 2000 A
6112190 Fletcher et al. Aug 2000 A
6112304 Clawson Aug 2000 A
6141759 Braddy Oct 2000 A
6154765 Hart Nov 2000 A
6170014 Darago et al. Jan 2001 B1
6185225 Proctor Feb 2001 B1
6185557 Liu Feb 2001 B1
6223172 Hunter et al. Apr 2001 B1
6236972 Shkedy May 2001 B1
6236988 Aldred May 2001 B1
6243760 Armbruster et al. Jun 2001 B1
6266660 Liu et al. Jul 2001 B1
6311185 Markowitz et al. Oct 2001 B1
6311186 MeLampy et al. Oct 2001 B1
6314365 Smith Nov 2001 B1
6321205 Eder Nov 2001 B1
6336135 Niblett et al. Jan 2002 B1
6341272 Randle Jan 2002 B1
6347340 Coelho et al. Feb 2002 B1
6351778 Orton et al. Feb 2002 B1
6356897 Gusack Mar 2002 B1
6377938 Block et al. Apr 2002 B1
6377957 Jeyaraman Apr 2002 B1
6381228 Prieto et al. Apr 2002 B1
6381605 Kothuri et al. Apr 2002 B1
6381607 Wu et al. Apr 2002 B1
6400729 Shimadoi et al. Jun 2002 B1
6400925 Tirabassi et al. Jun 2002 B1
6401098 Moulin Jun 2002 B1
6415323 McCanne et al. Jul 2002 B1
6427172 Thacker et al. Jul 2002 B1
6429812 Hoffberg Aug 2002 B1
6442652 Laboy et al. Aug 2002 B1
6446068 Kortge Sep 2002 B1
6477651 Teal Nov 2002 B1
6481752 DeJoseph Nov 2002 B1
6490592 St. Denis et al. Dec 2002 B1
6494367 Zacharias Dec 2002 B1
6515968 Combar et al. Feb 2003 B1
6529915 Owens et al. Mar 2003 B1
6532283 Ingram Mar 2003 B1
6553336 Johnson et al. Apr 2003 B1
6563800 Salo et al. May 2003 B1
6564047 Steele et al. May 2003 B1
6564247 Todorov May 2003 B1
6567408 Li et al. May 2003 B1
6658415 Brown et al. Dec 2003 B1
6658463 Dillon et al. Dec 2003 B1
6662180 Aref et al. Dec 2003 B1
6662184 Friedberg Dec 2003 B1
6678675 Rothrock Jan 2004 B1
6700869 Falco et al. Mar 2004 B1
6725052 Raith Apr 2004 B1
6735631 Oehrke et al. May 2004 B1
6779030 Dugan et al. Aug 2004 B1
6819933 Tirabassi et al. Nov 2004 B1
6885734 Eberle et al. Apr 2005 B1
6901507 Wishneusky May 2005 B2
6907429 Carneal et al. Jun 2005 B2
6947440 Chatterjee et al. Sep 2005 B2
6950867 Strohwig et al. Sep 2005 B1
6963636 Kunugi et al. Nov 2005 B1
6973057 Forslow Dec 2005 B1
7003280 Pelaez et al. Feb 2006 B2
7089262 Owens et al. Aug 2006 B2
7181537 Costa-Requena et al. Feb 2007 B2
7233918 Ye et al. Jun 2007 B1
7239689 Diomelli Jul 2007 B2
7246102 McDaniel et al. Jul 2007 B2
7257611 Shankar et al. Aug 2007 B1
7391784 Renkel Jun 2008 B1
7395262 Rothrock Jul 2008 B1
7406471 Shankar et al. Jul 2008 B1
7729925 Maritzen et al. Jun 2010 B2
7756763 Owens et al. Jul 2010 B1
7792714 Mills et al. Sep 2010 B1
7809768 Owens et al. Oct 2010 B2
20010005372 Cave et al. Jun 2001 A1
20010025273 Walker et al. Sep 2001 A1
20010034704 Farhat et al. Oct 2001 A1
20010040887 Shtivelman et al. Nov 2001 A1
20010056362 Hanagan et al. Dec 2001 A1
20020059163 Smith May 2002 A1
20020073082 Duvillier et al. Jun 2002 A1
20020078063 Minder Jun 2002 A1
20020082881 Price et al. Jun 2002 A1
20020087469 Ganesan et al. Jul 2002 A1
20020106064 Bekkevold et al. Aug 2002 A1
20030014361 Klatt et al. Jan 2003 A1
20030014656 Ault et al. Jan 2003 A1
20030016795 Brandenberger Jan 2003 A1
20030097547 Wishneusky May 2003 A1
20030105799 Khan et al. Jun 2003 A1
20030118039 Nishi et al. Jun 2003 A1
20030133552 Pillai et al. Jul 2003 A1
20030172145 Nguyen Sep 2003 A1
20030202521 Havinis et al. Oct 2003 A1
20040002918 McCarthy et al. Jan 2004 A1
20040018829 Raman et al. Jan 2004 A1
20040132427 Lee et al. Jul 2004 A1
20040153407 Clubb et al. Aug 2004 A1
20050018689 Chudoba Jan 2005 A1
20050033847 Roy Feb 2005 A1
20050036487 Srikrishna Feb 2005 A1
20050065880 Amato et al. Mar 2005 A1
20050075957 Pincus et al. Apr 2005 A1
20050091156 Hailwood et al. Apr 2005 A1
20050107066 Erskine et al. May 2005 A1
20050113062 Pelaez et al. May 2005 A1
20050120350 Ni et al. Jun 2005 A1
20050125305 Benco et al. Jun 2005 A1
20050187841 Grear et al. Aug 2005 A1
20050238154 Heaton et al. Oct 2005 A1
20060010057 Bradway et al. Jan 2006 A1
20060025131 Adamany et al. Feb 2006 A1
20060035637 Westman Feb 2006 A1
20060045250 Cai et al. Mar 2006 A1
20060056607 Halkosaari Mar 2006 A1
20060114932 Cai et al. Jun 2006 A1
20060148446 Karlsson Jul 2006 A1
20060168303 Oyama et al. Jul 2006 A1
20060190478 Owens et al. Aug 2006 A1
20060248010 Krishnamoorthy et al. Nov 2006 A1
20070091874 Rockel Apr 2007 A1
20070133575 Cai et al. Jun 2007 A1
20070198283 Labuda Aug 2007 A1
20070288367 Krishnamoorthy et al. Dec 2007 A1
20070288368 Krishnamoorthy et al. Dec 2007 A1
20080033873 Krishnamoorthy et al. Feb 2008 A1
20080033874 Krishnamoorthy et al. Feb 2008 A1
20080040267 Krishnamoorthy et al. Feb 2008 A1
20080126230 Bellora et al. May 2008 A1
20080215474 Graham Sep 2008 A1
20080311883 Bellora et al. Dec 2008 A1
Foreign Referenced Citations (8)
Number Date Country
063402 Oct 1982 EP
WO 9504960 Feb 1995 WO
WO 9527255 Oct 1995 WO
WO 9634350 Oct 1996 WO
WO 9703406 Jan 1997 WO
WO 9852131 Nov 1998 WO
WO 2007002841 Jan 2007 WO
WO 2007016412 Feb 2007 WO
Related Publications (1)
Number Date Country
20070110083 A1 May 2007 US
Provisional Applications (1)
Number Date Country
60737429 Nov 2005 US