High availability for event forwarding

Abstract
High availability event forwarding can be obtained by utilizing distributed queues in a server cluster. Each server can receive an event from a data system, such as a database or SAP™ system. Event queues existing on servers in the cluster can store an event until, for example, the event is delivered to a user or retrieved for processing. An event processor examines the load of each event queue and selects the event queue with the lightest load. The event processor generates an alias for the selected queue, such that a user, integration system, or client application does not need to know the identity of the physical queue storing the event, but only needs to refer to the ‘distributed queue’ or alias. After a physical queue is selected and an alias assigned, the event is forwarded to the selected queue.
Description
COPYRIGHT NOTICE

A portion of the disclosure of this patent document contains material which is subject to copyright protection. The copyright owner has no objection to the facsimile reproduction by anyone of the patent document of the patent disclosure, as it appears in the Patent and Trademark Office patent file or records, but otherwise reserves all copyright rights whatsoever.


CROSS-REFERENCED CASES

The following applications are cross-referenced and incorporated herein by reference:


U.S. patent application Ser. No. 10/271,194, now U.S. Pat. No. 7,080,092, entitled “Application View Component for System Integration,” by Mitch Upton, filed Oct. 15, 2002.


U.S. patent application Ser. No. 10/293,059 entitled “High Availability for Asynchronous Request,” by Tim Potter et al., filed Nov. 13, 2002.


U.S. patent application Ser. No. 10/293,655 entitled “High Availability Application View Deployment,” by Tim Potter et al., filed Nov. 13, 2002.


U.S. patent application Ser. No. 10/293,674 entitled “High Availability Event Topic,” by Tim Potter et al., filed Nov. 13, 2002.


FIELD OF THE INVENTION

The present invention relates to the forwarding of events and messages to users in a cluster or across a network.


BACKGROUND

In present application integration (AI) systems, there can be several single points of failure. These single points of failure can include deployment or management facilities, event forwarding, event topics, remote clients, event subscriptions, response listeners, and response queues. Each of these features is tied to a single server within a server cluster. If that single server crashes, the entire AI application can become irreparably damaged and must be restarted via a server reboot.


An AI component can generate events, such as through the use of adapters, that a user may wish to consume through a service such as business process management (BPM). An event forwarding facility of a present AI system forwards events between an application view and a physical BPM event queue. This facility is a single point of failure as well as a performance bottleneck.


BRIEF SUMMARY

Systems and methods in accordance with the present invention can overcome deficiencies in prior art systems by providing for high availability event forwarding. In a server cluster, each server can receive an event from a data source, such as a database or SAP™ system. An event queue, capable of storing an event, resides on at least one of the servers in the cluster. An event queue can store an event until, for example, the event is delivered to a user or retrieved for processing.


An event processor exists on at least one of the servers in the cluster. The event processor can examine the load of each event queue in the cluster and determine which event queue has the lightest load. The event processor can generate an alias for the selected queue, such that a user, integration system, or client application, for example, can locate the event by specifying the alias. The user does not need to know the identity of the actual physical queue in which the event is stored, but only refers to the ‘distributed queue’ or alias used to locate the actual physical queue. After the event processor selects a physical queue to act as the distributed queue and assigns an alias, the event can be forwarded to that physical queue.


Other features, aspects, and objects of the invention can be obtained from a review of the specification, the figures, and the claims.





BRIEF DESCRIPTION OF THE DRAWINGS


FIG. 1 is a diagram of a system in accordance with one embodiment of the present invention.



FIG. 2 is a flowchart for a method that can be used with the system of FIG. 1.





DETAILED DESCRIPTION

A system and method in accordance with one embodiment of the present invention overcomes deficiencies in prior art systems by changing the way in which events are routed throughout an AI system. In present messaging systems, an event router, which can be tightly coupled to an SAP™ system or database, can receive an event out of the SAP™ system or database and send that event into an integration server. The integration server propagates the event out to anybody who is interested in the event, such as anyone having registered a listener for events of that type. Events can also be propagated to subscribers of an event topic to which that event belongs. Event forwarding is one mechanism for propagating these messages. In present systems, events are forwarded by an event router to a physical queue, from which interested users or clients can retrieve the events. This physical queue is a single point of failure.


In a system in accordance with one embodiment of the present invention, event forwarding is highly available. High availability can be accomplished through the use of distributed queues and/or topics. A distributed queue can serve as an alias, and is not a physical queue on a specific server. A highly-available approach allows a user to send a message to a distributed queue. A server in the cluster, such as the one receiving the message, can determine which server in the cluster contains the physical queue with the lightest load that is online and working properly.


After determining which physical queue should receive the message, the server can find that physical queue and put the message on the queue. The user can be unaware of which queue is being used, and may not care. To the user, the message is sent to the alias, or distributed queue. This system is similar to a front end, in that it allows a messaging implementation such as JMS to be highly available without requiring substantial work on the part of a client. When using a distributed event queue for event forwarding, it is possible to rely on the underlying JMS implementation to do much of the high-availability work.


Event forwarding in accordance with the present invention can be used with multiple event topics, or with a single distributed event topic. An AI system can create a single JMS Topic for each topic subscriber. Events for a given subscriber can be sent to the topic for the subscriber. Event delivery can also be consolidated onto a single JMS Queue, such as EVENT_QUEUE, for example. This queue can be a distributed queue with multiple physical destinations. A message driven bean (MDB), which can be referred to as an ‘AI Event Processor’, can listen on the EVENT_QUEUE distributed destination. An onMessage implementation for the MDB can deliver a copy of the event into the BPM event processor, such as if BPM is installed and running in the server instance.
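
A minimal sketch of such a message driven bean, using the EJB 2.0 and JMS APIs, is shown below. The class name and the BPM and event topic delivery helpers are illustrative assumptions rather than the actual AI Event Processor implementation.

import javax.ejb.MessageDrivenBean;
import javax.ejb.MessageDrivenContext;
import javax.jms.Message;
import javax.jms.MessageListener;
import javax.jms.ObjectMessage;

public class AIEventProcessorBean implements MessageDrivenBean, MessageListener {

    private MessageDrivenContext mdbContext;

    public void setMessageDrivenContext(MessageDrivenContext context) {
        this.mdbContext = context;
    }

    public void ejbCreate() {
    }

    public void ejbRemove() {
    }

    // Container-managed transaction: marking rollback returns the event
    // message to the distributed queue for redelivery.
    public void onMessage(Message message) {
        try {
            ObjectMessage eventMessage = (ObjectMessage) message;
            deliverToBpmProcessor(eventMessage);   // local BPM event processor, if installed
            publishToEventTopic(eventMessage);     // EVENT_TOPIC for remote listeners
        } catch (Exception failure) {
            mdbContext.setRollbackOnly();
        }
    }

    // Hypothetical helpers standing in for the actual delivery logic.
    private void deliverToBpmProcessor(ObjectMessage eventMessage) {
    }

    private void publishToEventTopic(ObjectMessage eventMessage) {
    }
}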


The onMessage implementation can also publish a copy of the event onto an event topic, or “EVENT_TOPIC”. An event topic is a distributed JMS topic that handles the delivery of events to remote application view clients. An application view class can be modified to create an event context on the event topic. The event context class can be modified to filter messages based on the application view name, which can be stored in a ‘SourceKey’ JMS header property.
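
One way such filtering could be expressed is with a JMS message selector over the ‘SourceKey’ header property, as sketched below. The JNDI names and the application view name are assumptions for illustration only.

import javax.jms.Message;
import javax.jms.MessageListener;
import javax.jms.Session;
import javax.jms.Topic;
import javax.jms.TopicConnection;
import javax.jms.TopicConnectionFactory;
import javax.jms.TopicSession;
import javax.jms.TopicSubscriber;
import javax.naming.InitialContext;

public class AppViewEventFilterExample {

    public static void main(String[] args) throws Exception {
        InitialContext jndi = new InitialContext();

        // JNDI names are assumed for illustration.
        TopicConnectionFactory factory =
            (TopicConnectionFactory) jndi.lookup("com.ai.TopicConnectionFactory");
        Topic eventTopic = (Topic) jndi.lookup("com.ai.EVENT_TOPIC");

        TopicConnection connection = factory.createTopicConnection();
        TopicSession session =
            connection.createTopicSession(false, Session.AUTO_ACKNOWLEDGE);

        // Only receive events whose SourceKey header names this application view.
        String selector = "SourceKey = 'CustomerManagement'";
        TopicSubscriber subscriber =
            session.createSubscriber(eventTopic, selector, false);

        subscriber.setMessageListener(new MessageListener() {
            public void onMessage(Message message) {
                // Handle the filtered application view event here.
            }
        });
        connection.start();
    }
}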


The implementation can deliver a copy of the event into an application view Cajun Control event processor, if such a control is being used. Also, any dequeuing or execution for the implementation can be done transactionally, allowing the message to be rolled back onto the queue in the event of a processing failure.


Using a queue and MDB approach allows exactly one copy of each event to be delivered into a system such as BPM and Cajun, while still using distributed destinations. The use of topics would yield multiple copies if distributed destinations were used. This approach also provides the continued ability to support event delivery to remote application view clients. High availability can be obtained by virtue of the distributed EVENT_QUEUE destination. Multiple servers can participate in the processing of messages for this queue, and thus a single server failure can be accommodated.


This approach also provides for better efficiency, as events can be routed directly to a BPM event processor and application view Cajun Control event processor without requeuing a copy of the message, which can have associated persistence and delivery overhead. A secondary publish to an EVENT_TOPIC can be somewhat costly, but the BPM event processors can be processing the event before the event is sent to the event topic, allowing more direct processing into BPM.



FIG. 1 shows a system that can be used for high-availability event processing in an application integration engine. In an example of event processing, an event occurs in an enterprise information system (EIS) 130. The event data is transferred to an event generator 128 in the resource adapter. The event generator 128 transforms the EIS-specific event data into an XML document and posts an event object, such as an IEvent object, to the event router 126. The event router 126 passes the event object to an event context object 124 for each AI server that is interested in the specific event type. The event context object 124 encapsulates the event object into a JMS object message and sends it to the event queue 122, such as a JMS Queue bound at the JNDI context com.ai.EVENT_QUEUE, using a JMS QueueSender. This queue can be a distributed queue, in that the selected queue exists somewhere in the cluster but uses the same alias.
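
The send from the event context to the distributed queue could look roughly like the following sketch. The connection factory name and the event payload type are assumptions; the queue alias com.ai.EVENT_QUEUE is taken from the description above.

import java.io.Serializable;
import javax.jms.ObjectMessage;
import javax.jms.Queue;
import javax.jms.QueueConnection;
import javax.jms.QueueConnectionFactory;
import javax.jms.QueueSender;
import javax.jms.QueueSession;
import javax.jms.Session;
import javax.naming.InitialContext;

public class EventContextSendExample {

    // Wrap the XML event document in a JMS ObjectMessage and send it to the
    // distributed queue alias; the cluster chooses the physical member queue.
    public void sendEvent(Serializable event, String appViewName) throws Exception {
        InitialContext jndi = new InitialContext();

        // Connection factory name is assumed; the queue alias comes from the text above.
        QueueConnectionFactory factory =
            (QueueConnectionFactory) jndi.lookup("com.ai.QueueConnectionFactory");
        Queue eventQueue = (Queue) jndi.lookup("com.ai.EVENT_QUEUE");

        QueueConnection connection = factory.createQueueConnection();
        try {
            QueueSession session =
                connection.createQueueSession(false, Session.AUTO_ACKNOWLEDGE);
            QueueSender sender = session.createSender(eventQueue);

            ObjectMessage message = session.createObjectMessage(event);
            // SourceKey lets downstream consumers filter events by application view.
            message.setStringProperty("SourceKey", appViewName);
            sender.send(message);
        } finally {
            connection.close();
        }
    }
}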


The event object message is stored in the event queue 122 until it is retrieved for processing by the AI event processor 120, which can process events in a first-in-first-out (FIFO) manner. It may not be enough to send a message to a distributed queue and expect the message to be received by a receiver of that distributed queue. There can be a receiver, or “QueueReceiver”, receiving or listening on each physical queue to which an event could be forwarded. Thus, an AI event processor can be deployed on all nodes in a cluster. Deploying multiple event processors can further prevent single points of failure.


The event processor 120 can forward the event to all registered event destinations 110, which in the Figure include a BPM event queue 112, an event topic 114, and a Cajun event processor 116. Event destinations can be added by posting a message to a notification topic 108 for application integration. For example, when an AI plug-in 100 for BPM is deployed, it can send an “addDestination” message to the notification topic to register the BPM event queue 112 as an event destination. The BPM event queue can be a distributed queue. A message published on the notification topic can have cluster-wide visibility. Each node in the cluster can have a singleton event destination manager 118 that is a durable subscriber to this topic. Thus, the message can be published to every event destination manager in the cluster.
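
A registration message of this kind might be published as sketched below. The notification topic's JNDI name, the message property names, and the BPM queue's JNDI name are illustrative assumptions.

import javax.jms.MapMessage;
import javax.jms.Session;
import javax.jms.Topic;
import javax.jms.TopicConnection;
import javax.jms.TopicConnectionFactory;
import javax.jms.TopicPublisher;
import javax.jms.TopicSession;
import javax.naming.InitialContext;

public class EventDestinationRegistrationExample {

    // Publish an "addDestination" message so that every singleton event
    // destination manager in the cluster adds the BPM event queue to its
    // list of registered event destinations.
    public void registerBpmEventQueue() throws Exception {
        InitialContext jndi = new InitialContext();

        // JNDI names and message property names are assumed for illustration.
        TopicConnectionFactory factory =
            (TopicConnectionFactory) jndi.lookup("com.ai.TopicConnectionFactory");
        Topic notificationTopic = (Topic) jndi.lookup("com.ai.NOTIFICATION_TOPIC");

        TopicConnection connection = factory.createTopicConnection();
        try {
            TopicSession session =
                connection.createTopicSession(false, Session.AUTO_ACKNOWLEDGE);
            TopicPublisher publisher = session.createPublisher(notificationTopic);

            MapMessage registration = session.createMapMessage();
            registration.setString("action", "addDestination");
            registration.setString("destination", "com.ai.BPM_EVENT_QUEUE");
            publisher.publish(registration);
        } finally {
            connection.close();
        }
    }
}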


The event processor can use a singleton event destination manager 118 to listen for add/remove event destination messages on the notification topic 108 to configure the list of event destinations 110. The event object message can be delivered to all registered event destinations in a single transaction, such as a single Java™ Transaction API (JTA) user transaction. If a post to any event destination 110 fails, the event message can be rolled back to the distributed queue 122. The roll back can use the same alias, but can forward the event to a different physical queue in the cluster. If the event processor 120 receives a message for which getJMSRedelivered() returns true, the post can be tried again. If the retry fails, the message can be sent to an error queue, which can be a distributed queue for failed event and asynchronous service response messages.
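
The single-transaction fan-out with rollback and redelivery handling could be sketched as follows. The destination list and the two helper methods are assumptions; the redelivery check uses the standard JMS getJMSRedelivered flag.

import java.util.List;
import javax.jms.Message;
import javax.naming.InitialContext;
import javax.transaction.UserTransaction;

public class TransactionalEventDeliveryExample {

    // Deliver one event message to every registered destination inside a
    // single JTA user transaction. Any failure rolls the work back so the
    // message returns to the distributed queue; a failed redelivery is
    // routed to the error queue instead.
    public void deliver(Message eventMessage, List<?> destinations) throws Exception {
        UserTransaction transaction = (UserTransaction)
            new InitialContext().lookup("java:comp/UserTransaction");

        transaction.begin();
        try {
            for (Object destination : destinations) {
                postToDestination(eventMessage, destination);
            }
            transaction.commit();
        } catch (Exception failure) {
            transaction.rollback();
            if (eventMessage.getJMSRedelivered()) {
                // Second failure for this message: send it to the error queue.
                sendToErrorQueue(eventMessage);
            }
        }
    }

    // Hypothetical helpers standing in for the actual posting logic.
    private void postToDestination(Message message, Object destination) {
    }

    private void sendToErrorQueue(Message message) {
    }
}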


If an AI plug-in 100 for BPM is deployed, the plug-in can add the BPM event queue 112 as an event destination during startup so that AI events are passed to a BPM workflow 102 for processing. If there are any registered application view event listeners 106, the event can be sent to an event topic 114, which uses an event context 104 to establish a connection with the remote event listener 106 for the application view.



FIG. 2 shows the steps of a method that can be used with the system of FIG. 1. An event is generated in a data system, such as a database or SAP™ system 200. An event router receives the event from the data system and forwards it to a server in the cluster 202. The server receiving the event determines which server in the cluster contains the event queue with the lightest load 204. The server then creates an alias for the event queue with the lightest load, which will be used to refer to the distributed event queue containing the event 206. The server then forwards the event to the distributed event queue and assigns the alias 208.
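
The load-examination and forwarding steps can be illustrated with the following sketch. The QueueLoadMonitor interface is a hypothetical stand-in for whatever load metric the cluster's JMS implementation exposes; it is not a real JMS or application server API.

import java.util.List;

public class LightestLoadForwarderExample {

    // Hypothetical stand-in for a cluster load metric and forwarding hook.
    public interface QueueLoadMonitor {
        long pendingMessageCount(String physicalQueueName);
        void forward(String physicalQueueName, Object event);
    }

    // Steps 204-208: examine the load on each physical member queue, pick the
    // lightest, and forward the event. Callers continue to refer only to the
    // distributed-queue alias; the physical queue chosen stays hidden.
    public void forwardEvent(Object event, List<String> physicalQueues,
                             QueueLoadMonitor monitor) {
        String lightest = null;
        long lowestLoad = Long.MAX_VALUE;

        for (String queueName : physicalQueues) {
            long load = monitor.pendingMessageCount(queueName);
            if (load < lowestLoad) {
                lowestLoad = load;
                lightest = queueName;
            }
        }

        if (lightest != null) {
            monitor.forward(lightest, event);
        }
    }
}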


An event context class is a frame of reference that can be used to generate and/or receive events. An event context class can be used by an application view to manage the event delivery mechanics in methods such as postEvent and addEventListener. An application view can represent a subset of business functionality that is available, for example, within an EIS. The application view can accept requests for service invocation from a client, and can invoke the proper system functions within the target EIS. An application view can make use of connections provided by a resource adapter to communicate with the EIS.


A service can be a named business function. An application view can manage mapping from the name of the service to the system function in the EIS. Services can expose a simple XML-based request and response interface. Services can return a document definition object for request and response document types that describe the structure and content required for the document type.


An application view can utilize metadata that includes information such as a service name and associated system function. The metadata can also store at least some of the data needed to successfully invoke the system function. As a result, the service can require less request data from the client invoking the service, as the application view can augment the data passed by the client with the stored metadata. This is a convenient way to hide the complexity of the underlying system function invocation from the client invoking a service.
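
As a rough illustration of this augmentation, stored metadata defaults could be merged with the fields supplied by the client before the system function is invoked. The map-based representation below is an assumption for illustration, not the actual application view API.

import java.util.HashMap;
import java.util.Map;

public class ServiceRequestAugmentationExample {

    // Merge the fields supplied by the client with metadata defaults stored
    // for the service; client-supplied values take precedence, and stored
    // metadata fills in everything else before the EIS function is invoked.
    public Map<String, String> buildRequest(Map<String, String> clientFields,
                                            Map<String, String> serviceMetadata) {
        Map<String, String> request = new HashMap<String, String>(serviceMetadata);
        request.putAll(clientFields);
        return request;
    }
}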


In the event of the crash of a cluster server or managed server, an AI application can continue delivering events from adapters running in nodes that are still available. Event generators or routers running in the failed node can restart when the failed node restarts. Users can be notified that in-flight transactions have been cancelled or rolled back, and should be retried. Wherever possible, the transaction can be retried after reestablishing connections, in order to make use of resources on another live server. One example of AI reestablishing a connection is the event context as used for sending events to AI from an event router.


In the event of an admin server failure, an AI application can perform the tasks listed with respect to the crash of a cluster server. The AI application should still be able to boot and reboot successfully using the previous domain and server configuration.


The use of server clustering allows an AI component, such as an event-forwarding server, event queue, or JMS server, to be used in a scalable and highly available fashion. A highly available component does not have any single points of failure, and can migrate services from failed nodes to live nodes in a cluster. Any service offered by an AI component can be targeted to several nodes in a cluster. In the event of a node failure in the cluster, the services located on the failed node can be migrated to another live node in the cluster.


In the event of a crash of a cluster or managed server, the AI application can continue accepting new work. The acceptance of new work can include the deploying and undeploying of application views and connection factories, monitoring of old application views and connection factories, delivering events from adapters, and servicing both synchronous and asynchronous service invocations. An AI application can also support the manual migration of services on the failed node to a live node, such as a singleton MDB listening on a physical destination managed by a failed JMS server. Application integration can use a singleton MDB, such as if a customer needs ordered event processing.


The foregoing description of preferred embodiments of the present invention has been provided for the purposes of illustration and description. It is not intended to be exhaustive or to limit the invention to the precise forms disclosed. Many modifications and variations will be apparent to one of ordinary skill in the art. The embodiments were chosen and described in order to best explain the principles of the invention and its practical application, thereby enabling others skilled in the art to understand the invention for various embodiments and with various modifications that are suited to the particular use contemplated. It is intended that the scope of the invention be defined by the following claims and their equivalents.

Claims
  • 1. A system comprising: a cluster of servers; a database connected to said cluster of servers, wherein an event occurring within the database is propagated by an adapter out to the cluster of servers; and a message store and forward unit residing on a server in the cluster of servers, wherein the message store and forward unit receives a message generated by the event occurring within the database system, stores the message and forwards the message to a recipient; wherein the message is stored using a distributed destination that includes multiple physical locations in the cluster associated with a single alias, such that said single alias identifies the distributed destination; wherein the message addressed to the alias is received by said server, such that the server receiving the message determines one of the multiple physical locations in the cluster associated with the alias and causes the message to be stored in said one of the multiple physical locations prior to the message being forwarded; wherein the server examines a load on each of the multiple physical locations and selects said one of the multiple physical locations for storing the message according to said load being examined; and wherein, in case of a failure of said server, the message store and forward unit is migrated to another server of the cluster.
  • 2. The system of claim 1, wherein the store and forward unit includes a message queue.
  • 3. The system of claim 1, wherein the store and forward unit includes a message processor.
  • 4. The system of claim 1, further comprising: a list of registered event destinations for receiving said message, wherein the list of registered event destinations is configured by posting a message on a notification topic.
  • 5. The system of claim 4, further comprising: a message driven bean that listens on the distributed destination and delivers the message to at least one of the list of registered event destinations.
  • 6. The system of claim 5, wherein if delivery of the message fails, the message is rolled back to the distributed destination, such that the roll back uses a same alias but forwards the message to a different physical location in the cluster.
  • 7. The system of claim 1, wherein the message is published to a distributed event topic that handles delivery of events to remote clients.
  • 8. The system of claim 1, wherein the server receiving the message determines the physical location with the lightest load and creates the alias for the physical location and assigns the alias.
  • 9. A method, comprising: maintaining a cluster of servers connected to a database, wherein an event occurring within the database is propagated by an adapter out to the cluster of servers; maintaining a message store and forward unit on at least one of the servers in the cluster; receiving a message generated by the event occurring within the database system by the message store and forward unit on the server, wherein the message store and forward unit stores the message and forwards the message to a recipient; wherein the message is stored using a distributed destination that includes multiple physical destinations in the cluster associated with a single alias, such that said single alias identifies the distributed destination; wherein the message addressed to the alias is received by said server, such that the server receiving the message determines one of the multiple physical locations in the cluster associated with the alias and causes the message to be stored in said one of the multiple physical locations prior to the message being forwarded; wherein the server examines a load of each of the multiple physical destinations and selects said one of the multiple physical locations for storing the message according to said load being examined; wherein employing the message store and forward unit causes exactly one copy of each message to be delivered while simultaneously using the distributed destination with multiple physical locations to store the message; and wherein, in case of a failure of said server, the message store and forward unit is migrated to another server of the cluster.
  • 10. The method of claim 9, wherein the store and forward unit includes a message queue.
  • 11. The method of claim 9, wherein the store and forward unit includes a message processor.
  • 12. The method of claim 9, further comprising: a list of registered event destinations for receiving said message, wherein the list of registered event destinations is configured by posting a message on a notification topic.
  • 13. The method of claim 12, further comprising: a message driven bean that listens on the distributed destination and delivers the message to at least one of the list of registered event destinations.
  • 14. The method of claim 13, wherein if delivery of the message fails, the message is rolled back to the distributed destination, such that the roll back uses a same alias but forwards the message to a different physical location in the cluster.
  • 15. The method of claim 9, wherein the message is published to a distributed event topic that handles delivery of events to remote clients.
  • 16. The method of claim 9, wherein the server receiving the message determines the physical location with the lightest load and creates the alias for the physical location and assigns the alias.
  • 17. A system comprising: a cluster of servers; a database connected to said cluster of servers, wherein an event occurring within the database is propagated by an adapter out to the cluster of servers; and a message store and forward unit residing on a server in the cluster of servers, wherein the message store and forward unit receives a message generated by the event occurring within the database system, stores the message and forwards the message to a recipient; wherein the message is stored using a distributed destination that includes multiple physical locations in the cluster associated with a single alias, such that said single alias identifies the distributed destination; wherein the message addressed to the alias is received by said server, such that the server receiving the message determines one of the multiple physical locations in the cluster associated with the alias and causes the message to be stored in said one of the multiple physical locations prior to the message being forwarded; wherein employing the message store and forward unit causes exactly one copy of each message to be delivered while simultaneously using the distributed destination with multiple physical locations; wherein, in case of a failure of said server, the message store and forward unit is migrated to another server of the cluster.
CLAIM OF PRIORITY

This application is a continuation of U.S. patent application Ser. No. 11/559,344, filed Nov. 13, 2006, entitled “HIGH AVAILABILITY FOR EVENT FORWARDING”, now abandoned, which is a continuation of U.S. patent application Ser. No. 10/293,656, filed Nov. 13, 2002, now U.S. Pat. No. 7,155,438, issued Dec. 26, 2006, entitled “HIGH AVAILABILITY FOR EVENT FORWARDING”, which claims priority to U.S. Provisional Patent Application No. 60/376,960, filed May 1, 2002, entitled “HIGH AVAILABILITY FOR EVENT FORWARDING,” which is hereby incorporated herein by reference.

US Referenced Citations (149)
Number Name Date Kind
5283897 Georgiadis et al. Feb 1994 A
5469562 Saether Nov 1995 A
5592664 Starkey Jan 1997 A
5604860 McLaughlin et al. Feb 1997 A
5630131 Palevich et al. May 1997 A
5721825 Lawson et al. Feb 1998 A
5892913 Adiga et al. Apr 1999 A
5944794 Okamoto et al. Aug 1999 A
5951694 Choquier et al. Sep 1999 A
5966535 Benedikt et al. Oct 1999 A
5991808 Broder et al. Nov 1999 A
6012083 Savitzky et al. Jan 2000 A
6016495 McKeehan et al. Jan 2000 A
6018730 Nichols et al. Jan 2000 A
6021443 Bracho et al. Feb 2000 A
6023578 Birsan et al. Feb 2000 A
6023722 Colyer Feb 2000 A
6029000 Woolsey et al. Feb 2000 A
6067623 Blakley et al. May 2000 A
6070184 Blount et al. May 2000 A
6078943 Yu Jun 2000 A
6119143 Dias et al. Sep 2000 A
6128279 O'Neil et al. Oct 2000 A
6148336 Thomas et al. Nov 2000 A
6185734 Saboff et al. Feb 2001 B1
6195680 Goldszmidt et al. Feb 2001 B1
6212546 Starkovich et al. Apr 2001 B1
6230309 Turner et al. May 2001 B1
6233607 Taylor et al. May 2001 B1
6237135 Timbol May 2001 B1
6243737 Flanagan et al. Jun 2001 B1
6253230 Couland et al. Jun 2001 B1
6311327 O'Brien et al. Oct 2001 B1
6317786 Yamane et al. Nov 2001 B1
6330602 Law et al. Dec 2001 B1
6334114 Jacobs et al. Dec 2001 B1
6360358 Elsbree et al. Mar 2002 B1
6367068 Vaidyanathan et al. Apr 2002 B1
6374297 Wolf et al. Apr 2002 B1
6377939 Young Apr 2002 B1
6470364 Prinzing Oct 2002 B1
6516322 Meredith Feb 2003 B1
6584454 Hummel et al. Jun 2003 B1
6587959 Sjolander et al. Jul 2003 B1
6594786 Connelly et al. Jul 2003 B1
6601113 Koistinen et al. Jul 2003 B1
6609115 Mehring et al. Aug 2003 B1
6615258 Barry et al. Sep 2003 B1
6636491 Kari et al. Oct 2003 B1
6637020 Hammond Oct 2003 B1
6643652 Helgeson et al. Nov 2003 B2
6654932 Bahrs et al. Nov 2003 B1
6662357 Bowman-Amuah Dec 2003 B1
6684388 Gupta et al. Jan 2004 B1
6687702 Vaitheeswaran et al. Feb 2004 B2
6721740 Skinner et al. Apr 2004 B1
6721779 Maffeis Apr 2004 B1
6754181 Elliott et al. Jun 2004 B1
6754884 Lucas et al. Jun 2004 B1
6789054 Makhlouf Sep 2004 B1
6799718 Chan et al. Oct 2004 B2
6823495 Vedula et al. Nov 2004 B1
6826260 Vincze et al. Nov 2004 B1
6832238 Sharma et al. Dec 2004 B1
6836883 Abrams et al. Dec 2004 B1
6859180 Rivera Feb 2005 B1
6859834 Arora et al. Feb 2005 B1
6874143 Murray et al. Mar 2005 B1
6910154 Schoenthal Jun 2005 B1
6918084 Slaughter et al. Jul 2005 B1
6922827 Vasilik et al. Jul 2005 B2
6950872 Todd, II Sep 2005 B2
6971096 Ankireddipally et al. Nov 2005 B1
7000219 Barrett et al. Feb 2006 B2
7017146 Dellarocas et al. Mar 2006 B2
7043722 Bau, III May 2006 B2
7051072 Stewart et al. May 2006 B2
7051316 Charisius et al. May 2006 B2
7062718 Kodosky et al. Jun 2006 B2
7069507 Alcazar et al. Jun 2006 B1
7072934 Helgeson et al. Jul 2006 B2
7073167 Iwashita Jul 2006 B2
7080092 Upton Jul 2006 B2
7089584 Sharma Aug 2006 B1
7096422 Rothschiller et al. Aug 2006 B2
7111243 Ballard et al. Sep 2006 B1
7117504 Smith et al. Oct 2006 B2
7127507 Clark et al. Oct 2006 B1
7143186 Stewart et al. Nov 2006 B2
7146422 Marlatt et al. Dec 2006 B1
7155705 Hershberg et al. Dec 2006 B1
7165041 Guheen et al. Jan 2007 B1
7181731 Pace et al. Feb 2007 B2
7184967 Mital et al. Feb 2007 B1
7240331 Vion-Dury et al. Jul 2007 B2
7260599 Bauch et al. Aug 2007 B2
7260818 Iterum et al. Aug 2007 B1
20020010781 Tuatini Jan 2002 A1
20020032769 Barkai et al. Mar 2002 A1
20020049788 Lipkin et al. Apr 2002 A1
20020073396 Crupi et al. Jun 2002 A1
20020083075 Brummel et al. Jun 2002 A1
20020099579 Stowell et al. Jul 2002 A1
20020111922 Young et al. Aug 2002 A1
20020116454 Dyla et al. Aug 2002 A1
20020120685 Srivastava et al. Aug 2002 A1
20020143960 Goren et al. Oct 2002 A1
20020161826 Arteaga et al. Oct 2002 A1
20020169644 Greene Nov 2002 A1
20020174178 Stawikowski Nov 2002 A1
20020174241 Beged-Dov et al. Nov 2002 A1
20020184610 Chong et al. Dec 2002 A1
20020188486 Gil et al. Dec 2002 A1
20020194244 Raventos Dec 2002 A1
20020194267 Flesner et al. Dec 2002 A1
20020194495 Gladstone et al. Dec 2002 A1
20030004746 Kheirolomoom et al. Jan 2003 A1
20030005181 Bau et al. Jan 2003 A1
20030009511 Giotta et al. Jan 2003 A1
20030018661 Darugar Jan 2003 A1
20030018832 Amirisetty et al. Jan 2003 A1
20030018963 Ashworth et al. Jan 2003 A1
20030023957 Bau et al. Jan 2003 A1
20030028364 Chan et al. Feb 2003 A1
20030028579 Kulkarni et al. Feb 2003 A1
20030043191 Tinsley et al. Mar 2003 A1
20030046591 Asghari-Kamrani et al. Mar 2003 A1
20030051066 Pace et al. Mar 2003 A1
20030055868 Fletcher et al. Mar 2003 A1
20030055878 Fletcher et al. Mar 2003 A1
20030074217 Beisiegel et al. Apr 2003 A1
20030079029 Garimella et al. Apr 2003 A1
20030084203 Yoshida et al. May 2003 A1
20030105805 Jorgenson Jun 2003 A1
20030126136 Omoigui Jul 2003 A1
20030220967 Potter et al. Nov 2003 A1
20040015368 Potter et al. Jan 2004 A1
20040040011 Bosworth et al. Feb 2004 A1
20040078373 Ghoneimy et al. Apr 2004 A1
20040078440 Potter et al. Apr 2004 A1
20040103406 Patel May 2004 A1
20040148336 Hubbard et al. Jul 2004 A1
20040194087 Brock et al. Sep 2004 A1
20040204976 Oyama et al. Oct 2004 A1
20050050068 Vaschillo et al. Mar 2005 A1
20050278585 Spencer Dec 2005 A1
20060206856 Breeden et al. Sep 2006 A1
20060234678 Juitt et al. Oct 2006 A1
20070038500 Hammitt et al. Feb 2007 A1
Foreign Referenced Citations (3)
Number Date Country
2248634 Mar 2000 CA
WO9923558 May 1999 WO
WO 0029924 May 2000 WO
Related Publications (1)
Number Date Country
20070156884 A1 Jul 2007 US
Provisional Applications (1)
Number Date Country
60376960 May 2002 US
Continuations (2)
Number Date Country
Parent 11559344 Nov 2006 US
Child 11685169 US
Parent 10293656 Nov 2002 US
Child 11559344 US