System and method for providing highly available processing of asynchronous service requests

Information

  • Patent Grant
  • Patent Number
    7,222,148
  • Date Filed
    Wednesday, November 13, 2002
  • Date Issued
    Tuesday, May 22, 2007
Abstract
Highly-available processing of an asynchronous request can be accomplished in a single transaction. A distributed request queue receives a service request from a client application or application view client. A service processor is deployed on each node of a cluster containing the distributed request queue. A service processor pulls the service request from the request queue and invokes the service for the request, such as to an enterprise information system. If that service processor fails, another service processor in the cluster can service the request. The service processor receives a service response from the invoked service and forwards the service response to a distributed response queue. The distributed response queue holds the service response until the response is retrieved for the client application.
Description
COPYRIGHT NOTICE

A portion of the disclosure of this patent document contains material which is subject to copyright protection. The copyright owner has no objection to the facsimile reproduction by anyone of the patent document or the patent disclosure, as it appears in the Patent and Trademark Office patent file or records, but otherwise reserves all copyright rights whatsoever.


CROSS-REFERENCED CASES

The following applications are cross-referenced and incorporated herein by reference:


U.S. patent application Ser. No. 10/271,194 entitled “Application View Component for System Integration,” by Mitch Upton, filed Oct. 15, 2002.


U.S. patent application Ser. No. 10/293,674 entitled “High Availability Event Topic,” by Tim Potter et al., filed Nov. 13, 2002.


U.S. patent application Ser. No. 10/293,655 entitled “High Availability Application View Deployment,” by Tim Potter et al., filed Nov. 13, 2002.


U.S. patent application Ser. No. 10/293,656 entitled “High Availability for Event Forwarding,” by Tim Potter et al., filed Nov. 13, 2002.


FIELD OF THE INVENTION

The present invention relates to the availability of services such as JMS across a network or in a server cluster.


BACKGROUND

In present application integration (AI) systems, there can be several single points of failure. These single points of failure can include deployment or management facilities, event forwarding, event topics, remote clients, event subscriptions, response listeners, and response queues. Each of these features is tied to a single server within a server cluster. If that single server crashes, the entire AI application can fail and must be restarted via a server reboot.


Among these single points of failure are the request and response queues used for processing asynchronous requests. Current implementations of asynchronous service request processing utilize a single physical request queue and response queue per server instance. In the event of a node failure, all asynchronous requests and responses within a given JMS server, for example, become unavailable until the JMS server is restarted.


BRIEF SUMMARY

Systems and methods in accordance with the present invention can overcome deficiencies in prior art systems by allowing for high-availability processing of asynchronous requests in a single transaction. A distributed request queue can be used to receive and store a service request, such as from a user or client application. A service processor can pull the service request from the request queue and invoke the service for the service request, such as to an enterprise information system. The service processor can receive the service response from the invoked service and forward the service response to a distributed response queue. The distributed response queue can hold the service response until the response is retrieved for the user or client application. An application view client can act on behalf of the user or client application, sending the service request to the distributed request queue and retrieving the service response from the distributed response queue. The application view client can generate failure recovery semantics for the client application in the event of a failure. The application view can also determine whether any service responses are waiting in the distributed response queue for the client application.


These systems and methods can be used in a server cluster. There can be a service processor deployed on every node in the cluster, each of which can listen to a given distributed request queue. This allows a service to be migrated between nodes in the cluster in the event of a node failure.


Other features, aspects, and objects of the invention can be obtained from a review of the specification, the figures, and the claims.





BRIEF DESCRIPTION OF THE DRAWINGS


FIG. 1 is a diagram of a system in accordance with one embodiment of the present invention.



FIG. 2 is a flowchart for a method that can be used with the system of FIG. 1.





DETAILED DESCRIPTION

A system and method in accordance with one embodiment of the present invention can overcome deficiencies in present asynchronous messaging systems by taking advantage of asynchronous request and response queues, as well as asynchronous request and response processors. A client may wish to invoke a service asynchronously in order to begin and/or continue processing other matters, instead of simply waiting for the response. For example, a long running process such as a batch process run against an SAP system or database can take minutes or even hours. Asynchronous requests can allow a client to send the request and then move on to other business.


The use of server clustering allows an AI component to be used in a scalable and highly available fashion. A highly available component does not have any single points of failure, and can have the ability to migrate services from failed nodes to live nodes in a cluster. Any service offered by the AI component can be targeted to several nodes in a cluster. In the event of a node failure in the cluster, the services located on the failed node can be migrated to another live node in the cluster.


In the event of a crash of a cluster or managed server, the AI application can continue accepting new work. The acceptance of new work can include deploying new and undeploying old application views and connection factories, monitoring of old application views and connection factories, delivering events from adapters, and servicing both synchronous and asynchronous service invocations. An AI application can also support the manual migration of services on the failed node to a live node, such as a singleton message-driven Enterprise JavaBean (MDB) listening on a physical destination managed by a failed JMS server. Application integration can use a singleton MDB if a customer needs ordered event processing, for example. An AI application can notify users in an understandable and/or predictable way that in-flight transactions have been cancelled or rolled back, and should be retried. Wherever possible, an AI application can retry the transaction after reestablishing connections to make use of resources on another live server.


In the event of an administration (admin) server failure, an AI application can do all the tasks mentioned with respect to a crash of a cluster or managed server. The AI application can also notify users that deployment or undeployment is unavailable while the admin server is unavailable. The AI application can still boot or reboot successfully using the previous domain and/or server configuration.


A system and method in accordance with one embodiment of the present invention allows asynchronous requests and responses to be available within a given JMS server, even in the event of a node failure. Request and response queues, such as ASYNC_REQUEST_QUEUE and ASYNC_RESPONSE_QUEUE, can be deployed as distributed queues in a cluster. A request processor, such as AsyncServiceRequestProcessor, can be packaged as an MDB. Such a system allows the processing of asynchronous requests and responses even if the JMS server that accepted the requests crashes or becomes otherwise unavailable.
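
A minimal sketch of how such a request processor might be packaged as an MDB is shown below. The class name, the use of the EJB 2.x MessageDrivenBean contract, and the deployment details described in the comments are assumptions for illustration rather than the patent's actual implementation; only the AsyncServiceRequestProcessor role and the ASYNC_REQUEST_QUEUE name come from the text.

    import javax.ejb.MessageDrivenBean;
    import javax.ejb.MessageDrivenContext;
    import javax.jms.Message;
    import javax.jms.MessageListener;

    // Hypothetical skeleton of the request processor packaged as an MDB.
    // In its deployment descriptors it would be bound to the distributed
    // ASYNC_REQUEST_QUEUE and use container-managed transactions, so that
    // receiving a request and sending its response happen in one transaction.
    public class AsyncServiceRequestProcessorBean
            implements MessageDrivenBean, MessageListener {

        private MessageDrivenContext context;

        public void setMessageDrivenContext(MessageDrivenContext ctx) {
            this.context = ctx;
        }

        public void ejbCreate() {
        }

        public void ejbRemove() {
        }

        public void onMessage(Message request) {
            // Request handling is outlined in a later sketch: extract the
            // request object, invoke the service, forward the response.
        }
    }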


In the event that a physical queue fails before an asynchronous service request is received by the appropriate MDB, the request can be unavailable until the physical queue comes back on line. This can hold true for asynchronous service responses as well. Using a system in accordance with one embodiment of the present invention, an asynchronous service processor MDB can be deployed on a single distributed JMS queue, such as ASYNC_REQUEST_QUEUE. This deployment removes the need to maintain and manage a pool of asynchronous request processor threads. An asynchronous service processor MDB can be last in the deployment order for the AI application, and can be deployed from a JAR file such as “ai-asyncprocessor-ejb.jar.”



FIG. 1 shows an example of a high-availability asynchronous service processing system in accordance with one embodiment of the present invention. An application view client 100 has the ability to generate and deal with failure recovery semantics without the user having any knowledge or input. For instance, a client application that sends off a request might crash or otherwise become unavailable at some point before the response is received. When the response is ready to be returned, the response can sit in an asynchronous response queue 112 until the client comes back. When the client 100 is available again, the client will want to receive the response. Since the system is utilizing distributed queues, the client application would need to go out to the server and determine whether there are any responses from previous requests that were sent before the failure. The application view client 100 can take care of this determination behind the scenes, such that the user or client application does not need to do anything to find the response.


The user or client application making the request can register a message listener 106, such that the user or client application can be informed that a message is ready and waiting to be received. An asynchronous service processor 110 can pull a request off the asynchronous request queue 108, invoke the asynchronous service against an Enterprise Information System (EIS) 118, and wait for the response. When the asynchronous service response comes back, the asynchronous service processor 110 can put the response onto the response queue 112. In this embodiment, this processing is accomplished as a single transaction.
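
A possible body for the onMessage method of the processor skeleton sketched earlier is shown below, illustrating the single-transaction flow just described. The applicationView reference, the invokeService call shape, the AsyncServiceResponse constructor, and the resolveResponseDestination and sendResponse helpers are assumptions for illustration.

    // Hypothetical onMessage body. The container-managed transaction begun
    // for the message delivery spans the dequeue, the synchronous service
    // call, and the enqueue of the response, so all three commit or roll
    // back together.
    public void onMessage(javax.jms.Message message) {
        try {
            javax.jms.ObjectMessage objectMessage = (javax.jms.ObjectMessage) message;
            AsyncServiceRequest request = (AsyncServiceRequest) objectMessage.getObject();

            // Invoke the service synchronously against the EIS through the
            // application view EJB (assumed call shape).
            IDocument result = applicationView.invokeService(
                    request.getServiceName(), request.getRequestDocument());

            // Prefer the response destination named in the request; fall back
            // to the JMSReplyTo destination stamped on the message.
            javax.jms.Destination replyTo =
                    resolveResponseDestination(request, message);

            // Wrap the result and send it to that specific physical destination.
            sendResponse(new AsyncServiceResponse(request, result), replyTo);
        } catch (Exception e) {
            // Mark the transaction for rollback so the request is redelivered
            // and can be serviced by another processor in the cluster.
            context.setRollbackOnly();
        }
    }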


The application view client 100 can instantiate an application view instance 102. The client 100 can have the option of supplying a durable client identifier at the time of construction. The durable client identifier can be used as a correlation identifier for asynchronous response messages. The client 100 can invoke an asynchronous service method, such as “invokeServiceAsync”, and can pass a request document and response listener 104, such as AsyncServiceResponseListener, to handle the response.
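
A client-side usage fragment along these lines illustrates the call pattern; the ApplicationView constructor form, the listener callback name, and the helper methods are assumptions, with only invokeServiceAsync and AsyncServiceResponseListener taken from the text.

    // Hypothetical client usage; names other than invokeServiceAsync and
    // AsyncServiceResponseListener are illustrative assumptions.
    ApplicationView salesOrders =
            new ApplicationView("sap.SalesOrders", "order-client-42" /* durable client id */);

    IDocument request = buildOrderRequest(); // assumed helper building the request document

    salesOrders.invokeServiceAsync("createSalesOrder", request,
            new AsyncServiceResponseListener() {
                public void onAsyncServiceResponse(AsyncServiceResponse response) {
                    handleOrderResponse(response); // assumed application callback
                }
            });

    // The caller returns immediately and is free to do other work; the
    // response arrives through the listener once the service processor has
    // placed it on the distributed response queue.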


An application view instance 102 can create a service request object, such as AsyncServiceRequest, and can send the object to a request queue 108, such as ASYNC_REQUEST_QUEUE. The service request object can contain the name of the destination to which the response listener is pinned. A service processor MDB 110 can use this information to determine the physical destination to receive the response. If the request object does not contain the name of a response destination, the service processor MDB 110 can use the destination set on the JMS message via a call to a method such as getJMSReplyTo( ). If a client only supplies a service response listener 104 to the application view, such as:

    • invokeServiceAsync(String serviceName, IDocument request, AsyncServiceResponseListener listener);


      the application view can establish a JMS queue receiver to the JMS queue bound at a JNDI location provided by an application view Enterprise JavaBean (EJB) method, such as getAsyncResponseQueueJNDIName( ). The application view instance 102 can use QueueReceiver::getQueue( ) to set the ReplyTo destination on the request message.
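
A sketch of how the application view instance might establish that receiver and pin the reply destination is shown below, assuming standard JMS queue APIs; the connection factory JNDI name and the applicationViewBean reference are assumptions.

    import javax.jms.Queue;
    import javax.jms.QueueConnection;
    import javax.jms.QueueConnectionFactory;
    import javax.jms.QueueReceiver;
    import javax.jms.QueueSession;
    import javax.jms.Session;
    import javax.naming.InitialContext;

    // Hypothetical setup performed by the application view instance when the
    // caller supplies only a response listener.
    InitialContext jndi = new InitialContext();
    QueueConnectionFactory factory =
            (QueueConnectionFactory) jndi.lookup("weblogic.jms.ConnectionFactory");
    QueueConnection connection = factory.createQueueConnection();
    QueueSession session = connection.createQueueSession(false, Session.AUTO_ACKNOWLEDGE);

    // The application view EJB reports which physical response queue this
    // client is pinned to.
    Queue responseQueue =
            (Queue) jndi.lookup(applicationViewBean.getAsyncResponseQueueJNDIName());

    // The receiver's physical queue becomes the reply-to destination that is
    // stamped on every outgoing request message for this application view.
    QueueReceiver responseReceiver = session.createReceiver(responseQueue);
    connection.start();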


In a cluster, an asynchronous request queue 108 can be deployed as a distributed JMS queue. Each message can be sent to a single physical queue, and not be forwarded or replicated in any way. As such, the message is only available from the physical queue to which it was sent. If that physical queue becomes unavailable before a given message is received, the message or AsyncServiceRequest can be unavailable until that physical queue comes back on-line. It is not enough to send a message to a distributed queue and expect the message to be received by a receiver of that distributed queue. Since the message is sent to only one physical queue, there must be a queue receiver receiving or listening on that physical queue. Thus, an AI asynchronous service processor MDB can be deployed on all nodes in a cluster.
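
Continuing the variables from the previous sketch, the send side might look as follows; the JNDI name used for the distributed request queue, the asyncServiceRequest and durableClientId variables, and the use of the JMS correlation ID for the client identifier are assumptions.

    import javax.jms.ObjectMessage;
    import javax.jms.Queue;
    import javax.jms.QueueSender;

    // Hypothetical send of the serializable request object. The JMS provider
    // picks one physical member of the distributed ASYNC_REQUEST_QUEUE, and
    // the message stays on that member until a processor listening on that
    // physical queue consumes it.
    Queue requestQueue = (Queue) jndi.lookup("ASYNC_REQUEST_QUEUE");
    QueueSender sender = session.createSender(requestQueue);

    ObjectMessage requestMessage = session.createObjectMessage(asyncServiceRequest);
    requestMessage.setJMSReplyTo(responseReceiver.getQueue()); // pin the response destination
    requestMessage.setJMSCorrelationID(durableClientId);       // correlate responses to this client
    sender.send(requestMessage);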


An asynchronous service processor MDB can receive the message from the queue in a first-in, first-out (FIFO) manner. The service processor can use the asynchronous service request object in a JMS ObjectMessage to determine the qualified name, service name, request document, and response destination of the application view. The asynchronous service processor 110 can use an application view EJB 114 to invoke the service synchronously. The service can be translated into a synchronous CCI-based request and/or response to the resource adapter 116.
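
The request object carried in the ObjectMessage might take a shape like the following; the class layout is an assumption, but the fields mirror the data the processor is described as needing, and IDocument is the request-document type named elsewhere in the text.

    import java.io.Serializable;

    // Hypothetical shape of the asynchronous service request value object.
    public class AsyncServiceRequest implements Serializable {

        private final String qualifiedAppViewName;     // application view qualified name
        private final String serviceName;              // service to invoke
        private final IDocument requestDocument;       // request payload
        private final String responseDestinationName;  // may be null; JMSReplyTo is then used

        public AsyncServiceRequest(String qualifiedAppViewName, String serviceName,
                                   IDocument requestDocument, String responseDestinationName) {
            this.qualifiedAppViewName = qualifiedAppViewName;
            this.serviceName = serviceName;
            this.requestDocument = requestDocument;
            this.responseDestinationName = responseDestinationName;
        }

        public String getQualifiedAppViewName() { return qualifiedAppViewName; }
        public String getServiceName() { return serviceName; }
        public IDocument getRequestDocument() { return requestDocument; }
        public String getResponseDestinationName() { return responseDestinationName; }
    }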


When an asynchronous service processor MDB 110 receives the response, the response can be encapsulated into an asynchronous service response object and sent to the response destination provided in the asynchronous service request object. The asynchronous service processor MDB 110 cannot simply send the response to the asynchronous response queue 112; the response must be sent to a specific physical destination. This specific physical destination, or queue, can have been established by the application view instance 102 running on the client when, for example, an application view EJB method such as getAsyncResponseQueueJNDIName( ) was called.


If the client application fails and a new application view is created with the same durable client identifier, there is a chance that the new application view will be pinned to a different physical JMS queue than the JMS queue that the client was using prior to the failure. Consequently, the application view can use recover logic to query the other members for responses that match the durable client identifier once the client application restarts.
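
A hedged sketch of what such a recovery pass might look like follows. It assumes the durable client identifier is carried as the JMS correlation ID, that the physical members of the distributed response queue can be enumerated (the enumeration itself is assumed), and that the session and listener variables come from the earlier sketches.

    import java.util.Iterator;
    import javax.jms.ObjectMessage;
    import javax.jms.Queue;
    import javax.jms.QueueReceiver;

    // Hypothetical recovery pass run when a client restarts with the same
    // durable client identifier.
    String selector = "JMSCorrelationID = '" + durableClientId + "'";

    for (Iterator members = physicalResponseQueueMembers.iterator(); members.hasNext();) {
        Queue member = (Queue) members.next();
        QueueReceiver recovery = session.createReceiver(member, selector);
        ObjectMessage pending;
        // receiveNoWait() returns null once no more matching responses remain.
        while ((pending = (ObjectMessage) recovery.receiveNoWait()) != null) {
            AsyncServiceResponse response = (AsyncServiceResponse) pending.getObject();
            responseListener.onAsyncServiceResponse(response); // assumed callback name
        }
        recovery.close();
    }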


An application view message listener 106 instance, created when the application view instance 102 was instantiated, can receive the asynchronous service response message as a JMS ObjectMessage, and can pass the message to the asynchronous service response listener 104 supplied in the “invokeServiceAsync” call.
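
The bridge from the JMS message to the caller's response listener might be sketched as below; the class name and the listener callback name are assumptions.

    import javax.jms.Message;
    import javax.jms.MessageListener;
    import javax.jms.ObjectMessage;

    // Hypothetical JMS listener created alongside the application view
    // instance. It unwraps the ObjectMessage and hands the response to the
    // AsyncServiceResponseListener supplied in the invokeServiceAsync() call.
    public class ApplicationViewMessageListener implements MessageListener {

        private final AsyncServiceResponseListener responseListener;

        public ApplicationViewMessageListener(AsyncServiceResponseListener responseListener) {
            this.responseListener = responseListener;
        }

        public void onMessage(Message message) {
            try {
                ObjectMessage objectMessage = (ObjectMessage) message;
                AsyncServiceResponse response = (AsyncServiceResponse) objectMessage.getObject();
                responseListener.onAsyncServiceResponse(response); // assumed callback name
            } catch (javax.jms.JMSException e) {
                // A malformed or unreadable message is reported rather than lost silently.
                e.printStackTrace();
            }
        }
    }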



FIG. 2 shows the steps of a method that can be used with the system of FIG. 1. First, a service request is received to a distributed request queue from a client application 200. The service request is pulled from the request queue to a service processor 202. If the service processor is down, another service processor in the cluster pulls the service request 204. A service is invoked for the service request, such as to an EIS 206. The service response is retrieved by the service processor and forwarded to a distributed response queue for storage until retrieval from a client application 208. A response listener listens to the response queue and notifies the client application when the service response is received 210.


The foregoing description of preferred embodiments of the present invention has been provided for the purposes of illustration and description. It is not intended to be exhaustive or to limit the invention to the precise forms disclosed. Many modifications and variations will be apparent to one of ordinary skill in the art. The embodiments were chosen and described in order to best explain the principles of the invention and its practical application, thereby enabling others skilled in the art to understand the invention for various embodiments and with various modifications that are suited to the particular use contemplated. It is intended that the scope of the invention be defined by the following claims and their equivalents.

Claims
  • 1. A system for high-availability processing of asynchronous requests in a single transaction, comprising: a client application that instantiates an application view instance and passes a durable client identifier to the application view instance, said application view instance configured to create a service request; an asynchronous request queue distributed over a server cluster and configured for receiving said service request from the application view instance and for storing said service request in a single physical request queue; a service processor distributed over the server cluster and for pulling the service request from the single physical request queue and for invoking a service for the service request, wherein the service processor is further configured for receiving a service response for the service request from the invoked service; and an asynchronous response queue distributed over the server cluster and configured for receiving the service response from the service processor and for storing the service response in a single physical response queue; wherein said client application is configured to query the asynchronous response queue distributed over the server cluster for responses that match the durable client identifier in an event of failure and restart of said client application.
  • 2. A system according to claim 1, further comprising: an enterprise information system containing the service invoked by the service processor.
  • 3. A system according to claim 1, wherein: said service processor is packaged as a message-driven Enterprise JavaBean.
  • 4. A system according to claim 1, wherein: the application view client is configured for sending the service request to the distributed request queue and retrieving the service response from the distributed response queue on behalf of the client application.
  • 5. A system according to claim 4, wherein: said application view client can generate failure recovery semantics for the client application.
  • 6. A system according to claim 1, wherein: the asynchronous response queue is configured to store the service response until the response is retrieved by the application view client.
  • 7. A system according to claim 1, wherein: said application view is configured to determine whether any service responses are waiting in the asynchronous response queue for the client application.
  • 8. A system according to claim 1, wherein: the client identifier is configured for identifying the client application and used to process the service request and service response for the client application.
  • 9. A system according to claim 1, wherein: said application view client passes the service request to the asynchronous request queue in a request document.
  • 10. A system according to claim 9, wherein: said application view further passes a service response listener with the request document, the service response listener configured to listen for the service response corresponding to the service request document.
  • 11. A system according to claim 1, wherein: said service processor is deployed on a node in a cluster.
  • 12. A system according to claim 11, further comprising: additional service processors, each additional service processor deployed on a different node in the cluster.
  • 13. A system according to claim 12, wherein: the additional service processors are configured to listen to the asynchronous request queue for a service request, each of the additional service processors capable of pulling the service request from the asynchronous request queue and invoking the service for the service request if the service processor is unavailable.
  • 14. A system according to claim 1, wherein: said service processor further encapsulates the service response into a service response object that is sent to the asynchronous response queue.
  • 15. A method for high-availability processing of asynchronous requests in a single transaction, comprising: instantiating an application view client by an application and passing a durable client identifier to the application view client, said application view client configured to create a service request; maintaining an asynchronous request queue distributed over a server cluster for storing service requests from client applications; maintaining an asynchronous response queue distributed over the server cluster for storing service responses from invoked services; maintaining a service processor distributed over the server cluster for servicing service invocations; receiving a service request to the request queue from the application; pulling the service request from the asynchronous request queue to a service processor and invoking a service for the service request; receiving the service response from the invoked service to the asynchronous response queue and storing the service response until retrieval by the application; and querying the asynchronous response queue distributed over the server cluster for responses that match the durable client identifier in an event of failure and restart of said client application.
  • 16. A method according to claim 15, further comprising: executing the invoked service using an enterprise information system.
  • 17. A method according to claim 15, further comprising: deploying an additional service processor on each node of the cluster containing the service processor.
  • 18. A method according to claim 17, further comprising: listening to the asynchronous request queue using the service processor and any additional service processors.
  • 19. A method according to claim 15, further comprising: packaging the service processor as a message-driven Enterprise JavaBean.
  • 20. A method according to claim 15, wherein: the application view client is configured to send service requests and receive service responses on behalf of the client application.
  • 21. A method according to claim 20, further comprising: generating failure recovery semantics using the application view client.
  • 22. A method according to claim 15, wherein: the durable client identifier is assigned to the service request to be used in processing the service request and service response.
  • 23. A method according to claim 15, wherein: the step of receiving a service request includes passing a request document and response listener to the service processor.
  • 24. A system for high-availability processing of asynchronous requests in a single transaction, comprising: an asynchronous request queue distributed over a server cluster and configured for receiving and storing a service request; an application view client for sending the service request to the request queue on behalf of a client application; a service processor distributed over the server cluster and configured for pulling the service request from the request queue and invoking the service for the service request, the service processor further receiving a service response for the service request from the invoked service; and an asynchronous response queue distributed over the server cluster and configured for receiving the service response from the service processor and storing the service response until the service response is retrieved for the client application by the application view client.
  • 25. A system for high-availability processing of asynchronous requests in a single transaction, comprising: an application view client for generating a service request on behalf of a client application, the service request comprising a request document and a service response listener; a request queue distributed over a server cluster and configured for receiving the service request from the application view client and storing the service request; a service processor distributed over the server cluster and configured for pulling the service request from the request queue and invoking the service specified in the request document, the service processor further receiving a service response for the request document from the invoked service; and a response queue distributed over the server cluster and configured for receiving the service response from the service processor and storing the service response until the service response is retrieved for the client application by the application view client, the response listener notifying the application view client when the service response is received in the distributed response queue.
  • 26. A system for high-availability processing of asynchronous requests in a single transaction, comprising: means for maintaining an asynchronous request queue distributed over a server cluster for storing service requests from client applications; means for maintaining an asynchronous response queue distributed over the server cluster for storing service responses from invoked services; means for maintaining a service processor distributed over the server cluster for servicing service invocations; means for receiving a service request to the request queue from a client application; means for pulling the service request from the request queue to a service processor and invoking a service for the service request; and means for receiving the service response from the invoked service to the response queue and storing the service response until retrieval from a client application.
  • 27. A computer system for high-availability processing of asynchronous requests in a single transaction, comprising: a processor, a computer readable medium, and object code executed by said processor and embodied on said computer readable medium, said object code configured to: maintain an asynchronous request queue distributed over a server cluster for storing service requests from client applications; maintain an asynchronous response queue distributed over the server cluster for storing service responses from invoked services; maintain a service processor distributed over the server cluster for servicing service invocations; receive a service request to the request queue from a client application; pull the service request from the request queue to a service processor and invoke a service for the service request; and receive the service response from the invoked service to the response queue and store the service response until retrieval from a client application.
CLAIM OF PRIORITY

This application claims priority to U.S. Provisional Patent Application No. 60/377,332, filed May 2, 2002, entitled “HIGH AVAILABILITY FOR ASYNCHRONOUS REQUESTS,” which is hereby incorporated herein by reference.

Related Publications (1)
Number Date Country
20040015368 A1 Jan 2004 US
Provisional Applications (1)
Number Date Country
60377332 May 2002 US