High availability application view deployment

Information

  • Patent Grant
  • Patent Number
    7,526,519
  • Date Filed
    Wednesday, November 13, 2002
  • Date Issued
    Tuesday, April 28, 2009
Abstract
High availability is obtained for the deployment and undeployment of application views by placing a redundant JMX server on each server in a cluster of servers for an application integration system. Each redundant JMX server can manage deployment work for the cluster, and is capable of sending a JMX notification to every other server in the cluster relating to the deployment work, such as a deploy, undeploy, or processing notification. While an administration server can manage the other servers in the cluster, the redundant JMX servers are capable of managing deployment work for the cluster in the event of a failure of the administration server.
Description
COPYRIGHT NOTICE

A portion of the disclosure of this patent document contains material which is subject to copyright protection. The copyright owner has no objection to the facsimile reproduction by anyone of the patent document of the patent disclosure, as it appears in the Patent and Trademark Office patent file or records, but otherwise reserves all copyright rights whatsoever.


CROSS-REFERENCED CASES

The following applications are cross-referenced and incorporated herein by reference:


U.S. patent application Ser. No. 10/271,194 entitled “Application View Component for System Integration,” by Mitch Upton, filed Oct. 15, 2002, now U.S. Pat. No. 7,080,092 issued Jul. 18, 2006.


U.S. patent application Ser. No. 10/293,059 entitled “High Availability for Asynchronous Requests,” by Tim Potter et al., filed Nov. 13, 2002.


U.S. patent application Ser. No. 10/293,656 entitled “High Availability for Event Forwarding,” by Tim Potter et al., filed Nov. 13, 2002.


U.S. patent application Ser. No. 10/293,674 entitled “High Availability Event Topic,” by Tim Potter et al., filed Nov. 13, 2002.


FIELD OF THE INVENTION

The present invention relates to the deployment and undeployment of components such as application view components.


BACKGROUND

In present application integration (AI) systems, there can be several single points of failure. These single points of failure can include deployment or management facilities, event forwarding, event topics, remote clients, event subscriptions, response listeners, and response queues. Each of these features is tied to a single server within a server cluster. If that single server crashes, the entire AI application can be left in an unrecoverable state that requires a server reboot. For example, an entity in a present AI system can be pinned to the administration (“admin”) server for the cluster. If the admin server goes down, entity functions such as the deployment and undeployment of application views are unavailable for as long as the admin server remains down.


BRIEF SUMMARY

Systems and methods in accordance with the present invention can overcome deficiencies in prior art systems by changing the way in which work is processed. High-availability management of application views can be obtained for application integration by utilizing redundancy in a cluster of servers. A redundant JMX server can exist on each server in the cluster of servers. Each redundant JMX server is capable of managing deployment work for the cluster, such as the deployment and undeployment of application views. Each redundant JMX server can also send a JMX notification to every other server in the cluster relating to the deployment work, such as a deploy, undeploy, or processing notification.


An administration server in the cluster can be used to manage the other servers in the cluster. The redundant JMX servers can be capable of managing the deployment work for the cluster in the event of a failure of the administration server. The redundant JMX servers can notify the administration server when the deployment work is completed, or the administration server can be configured to check the redundant JMX servers periodically for deployment work. JMX MBeans can be used to represent the state of a deployed application view. These JMX MBeans can include deployment MBeans, runtime MBeans, and summary MBeans. The JMX MBeans can be generated for a user using a common management model framework, for example.


Other features, aspects, and objects of the invention can be obtained from a review of the specification, the figures, and the claims.





BRIEF DESCRIPTION OF THE DRAWINGS


FIG. 1 is a diagram of a system in accordance with one embodiment of the present invention.



FIG. 2 is a flowchart showing a method that can be used with the system of FIG. 1.





DETAILED DESCRIPTION

A system and method in accordance with one embodiment of the present invention overcomes deficiencies in prior art application integration systems by changing the way in which server functions are handled. In order to eliminate one single point of failure that exists in a clustering environment, each managed server can utilize a local Java Management Extension (JMX) server. Use of a redundant JMX server on each managed node or server in a cluster can provide for the high availability of services and functions handled by those servers. Major problems can be avoided, as the AI system does not rely on a single administration server. Each managed server can have the ability to manage deployment and undeployment work. When a managed server finishes any such work, the managed server can send a notification using the JMX framework to inform the other servers in the cluster that the work has been completed. Until such a notification is sent, managed servers can be ready to take over the work.


One of the advantages of using JMX is the ability to utilize JMX notification functionality. In some systems, the failure of an admin server prevents the processing of new deployments or undeployments. The failure will not, however, prevent the continued processing of existing tasks. This continued processing is possible in part because the JMX servers are redundant across the cluster. As shown in FIG. 1, there can be a JMX server 106, 114, 116, 118 on each managed server 108, 110, 112 in the server cluster 102, as well as on the admin server 104. A client application 100 or user can request the deployment of an application view, for example. That request can be handled by a server in the cluster 102, as directed by the admin server 104. If the JMX server 106 on the admin server 104 goes down, the cluster 102 still has the ability to process existing work using the redundant managed servers 108, 110, 112. If a new deployment is attempted, the deployment may be successful on the node that is contacted in the cluster. The other managed servers in the cluster, however, may not be aware of this new deployment or undeployment. If the admin server is down, a user can be advised to not attempt a new deployment or undeployment of application views.
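The notification functionality described above can be sketched with the standard `javax.management` API. The class name, the notification type string `ai.deploy.complete`, and the message format below are illustrative assumptions for this sketch, not the patent's implementation:

```java
import java.util.concurrent.atomic.AtomicInteger;
import javax.management.Notification;
import javax.management.NotificationBroadcasterSupport;
import javax.management.NotificationListener;

// Sketch: a managed server could emit a JMX notification when it finishes
// a deployment, so the other servers in the cluster can learn that the
// work has been completed. Names here are hypothetical.
public class DeployNotifier extends NotificationBroadcasterSupport {
    private final AtomicInteger seq = new AtomicInteger();

    public void deployComplete(String appViewName) {
        Notification n = new Notification(
            "ai.deploy.complete",            // hypothetical notification type
            this,                            // notification source
            seq.incrementAndGet(),           // sequence number
            "Deployed application view: " + appViewName);
        sendNotification(n);                 // delivered to registered listeners
    }

    public static void main(String[] args) {
        DeployNotifier notifier = new DeployNotifier();
        final String[] seen = new String[1];
        // A peer server registers a listener to observe deploy notifications.
        NotificationListener peer = (notification, handback) ->
            seen[0] = notification.getMessage();
        notifier.addNotificationListener(peer, null, null);
        notifier.deployComplete("CustomerView");
        System.out.println(seen[0]); // Deployed application view: CustomerView
    }
}
```

In a real cluster the listeners would live on remote nodes; the in-process listener here only stands in for that delivery path.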


This might not be a cause for concern in a production environment, as it may be rare to do a new application view deployment or undeployment. Entities such as application views are typically changed, updated, or removed in a maintenance window. It can be sufficient that work that has already been deployed and is running successfully will continue to be processed by other managed servers if a managed server goes down or becomes unavailable. A user may not be able to do any new deployment or undeployment, but the cluster will remain as it stood before the admin server went down and can continue to work.


A method using the system of FIG. 1 is shown in FIG. 2. In a cluster of servers for an AI application, an administration server is selected to manage other servers in the cluster 200. The existence of a JMX server on the administration server capable of managing deployment across the cluster is ensured, as well as the existence of at least one other redundant JMX server on a managed server in the cluster 202. Once the system is set up, a deployment request can be received from a user or client application, such as for the deployment of an application view component 204. The administration server can select a managed server to handle the deployment 206. The selected managed server can then handle the deployment request 208. In the event of a failure of the administration server and/or the selected managed server, the execution and/or management of steps 206 and 208 can be migrated to a redundant JMX server on a managed server in the cluster 210. When the deployment is complete, a notification can be sent to the other servers in the cluster that the deployment is complete 212.


In other systems, each managed server can be configured to send a message, such as by multicast, to the other servers in the cluster when a deployment or undeployment occurs. This allows the other managed servers in the cluster to be aware of the deployment or undeployment, even though the admin server is unavailable. When the admin server becomes available again, it will be unaware that the deployment or undeployment occurred. This can be handled in a number of ways. For instance, a notification can be sent using the JMX framework. The admin server also can be configured to periodically check for new deployments or undeployments. The managed server accepting the deployment or undeployment can periodically attempt to contact the admin server until the admin server is notified, or can send a multicast “heartbeat” periodically to the cluster. Also, it is possible to use an event queue that will store the notification for the admin server until the admin server is available to receive the notification. Other notification methods can be used as appropriate.
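The event-queue option above, in which notifications are held until the admin server is available, might be sketched as follows. All class and method names are hypothetical, and a real system would deliver the notices over the network rather than to an in-memory list:

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.List;
import java.util.Queue;

// Hypothetical sketch of queuing deployment notifications for an
// unavailable admin server and draining them once it returns.
public class AdminNotificationQueue {
    private final Queue<String> pending = new ArrayDeque<>();
    private final List<String> delivered = new ArrayList<>();
    private boolean adminAvailable = false;

    public synchronized void notifyDeployment(String appViewName) {
        if (adminAvailable) {
            deliver(appViewName);
        } else {
            pending.add(appViewName);   // hold until the admin server returns
        }
    }

    public synchronized void adminBecameAvailable() {
        adminAvailable = true;
        while (!pending.isEmpty()) {
            deliver(pending.poll());    // drain notices queued during the outage
        }
    }

    private void deliver(String appViewName) {
        delivered.add(appViewName);     // stand-in for contacting the admin server
    }

    public synchronized List<String> delivered() {
        return new ArrayList<>(delivered);
    }

    public static void main(String[] args) {
        AdminNotificationQueue q = new AdminNotificationQueue();
        q.notifyDeployment("OrderView");     // admin down: queued
        q.notifyDeployment("InvoiceView");   // admin down: queued
        q.adminBecameAvailable();            // both delivered in order
        System.out.println(q.delivered());   // [OrderView, InvoiceView]
    }
}
```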


High availability for application views is obtained with JMX, in one embodiment, by implementing special Management JavaBean components (MBeans) for the application views. A set of JMX MBeans can represent the state of an application view deployment within a server or server cluster. These MBeans can provide users of the AI component with the ability to see which application views have been deployed. The MBeans can also allow a user to modify properties of an application view deployment, such as pool sizes and log levels, as well as allowing the user to monitor the activity in the application view.


An application view MBean can provide a single integration point for deploy and undeploy operations, which can be managed by an application view deployer or server console. The MBean can also provide persistence. Once an MBean is deployed, the MBean can be redeployed automatically when the server restarts. The MBean can support collection of both per-server and per-cluster statistics. The MBean can be used to monitor the number of active clients, the number of events delivered of a given type, the number of times a service of a given type has been invoked, and the number of event delivery and service invocation errors encountered. The MBean can also support console integration for monitoring deploy and undeploy operations.


Deployment and management of application views can be achieved by creating an instance of custom MBeans. In one embodiment, there are three types of custom MBeans used with an application view. One such bean type is a deployment MBean. A deployment MBean can represent the deployment of the application view, as well as the static information created for the application view at design-time. Instances of this MBean can be persistent, can have cluster scope, and can be targeted at all instances in a cluster to allow for managed server independence. Instances can boot without an admin server to feed them MBeans. In single-instance or non-cluster servers, there can be a single deployment MBean.


A runtime MBean can be used to represent the runtime state of the application view within an active server. Instances of this MBean may not be persistent, but can have server-specific or local scope and can be targeted at all instances in a cluster. In single-instance or non-cluster servers, there can be a single runtime MBean.


A summary MBean can be used to aggregate statistics from the runtime MBeans in the instance servers for a cluster. Instances of this MBean may not be persistent, but can have cluster scope and can be targeted at all instances in a cluster. In single-instance or non-cluster servers, there can be a single runtime MBean, but there will still be a summary MBean to provide consistent access to statistics in both cluster and non-cluster environments.
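The three custom MBean types described above might be expressed in Java as in the following sketch. The interface names and methods are assumptions for illustration; the summary bean's role of searching runtime MBeans and returning aggregate statistics is shown over in-memory stand-ins:

```java
import java.util.List;

// Illustrative interfaces for the three MBean types: the deployment bean
// holds the persistent design-time descriptor, the runtime bean holds
// per-server statistics, and the summary bean aggregates across a cluster.
interface ApplicationViewDeploymentMBean {
    String getDescriptor();              // persisted design-time descriptor
}

interface ApplicationViewRuntimeMBean {
    int getEventCount(String eventType); // per-server statistic
}

public class ApplicationViewSummary {
    private final List<ApplicationViewRuntimeMBean> runtimes;

    public ApplicationViewSummary(List<ApplicationViewRuntimeMBean> runtimes) {
        this.runtimes = runtimes;
    }

    // Aggregate a statistic over every runtime MBean in the cluster,
    // mirroring the summary bean's "search and return aggregate" role.
    public int getEventCount(String eventType) {
        int total = 0;
        for (ApplicationViewRuntimeMBean r : runtimes) {
            total += r.getEventCount(eventType);
        }
        return total;
    }

    public static void main(String[] args) {
        ApplicationViewRuntimeMBean a = type -> 3;   // managed server 1
        ApplicationViewRuntimeMBean b = type -> 5;   // managed server 2
        ApplicationViewSummary summary =
            new ApplicationViewSummary(List.of(a, b));
        System.out.println(summary.getEventCount("order.created")); // 8
    }
}
```

This also illustrates why a summary MBean is still useful on a single-instance server: callers get one consistent statistics interface whether one runtime MBean exists or many.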


An application view deployment MBean can represent the atomic deployment of an application view. It can contain an attribute representing the application view descriptor. This descriptor attribute can be used to persistently deploy an application view at server start-up. This can remove the need for an integration startup deployer, used in current systems, as well as the interaction between the deployer and the admin deploy manager to retrieve persistently deployed application view names from an AI properties file. The deployment of these application views can also be facilitated.


High availability components can take advantage of a common management model (Commo) framework. In a Commo framework, a descriptor can be filled out and high-level metadata can be given about an object. This metadata can be run through an MBean generation tool, which can generate Java classes into which the specific implementation details can be added. The high-level interface to the MBean is defined in the descriptor. The descriptor and metadata can be run through the code generation tool, which generates “skeleton” Java code. Once the skeleton Java code is generated, the user, client, or application can fill in the MBean-specific details to produce a “typical” Java class.


An application view can utilize metadata that includes information such as the service name and associated system function. The metadata can also store at least some of the data needed to successfully invoke the system function. As a result, the service can require less request data from the client invoking the service, as the application view can augment the data passed by the client with the stored metadata. This can be a convenient way to hide the complexity of the underlying system function invocation from the client invoking a service.


An application view deployer can create a new application view deployment MBean instance when an application view is deployed. When the application view deployment MBean is created, a registration notification can be broadcast to all interested servers in a cluster. Interested servers can be indicated as part of a “.mdf” descriptor file for a Commo MBean. When the server receives the registration notification, it can retrieve the newly-registered application view descriptor and use the descriptor to update the appropriate application view deployment cache. At this point, the server can register for changes to the application view descriptor attribute. All subsequent attribute change notifications can allow any interested server to update its application view deployment cache. Each server in the cluster will not be dependent on JMS messages to keep its cache up-to-date.
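The registration notification described above corresponds to the standard `MBeanServerNotification` that the `MBeanServerDelegate` broadcasts for every MBean registration. The sketch below, with a hypothetical `Dummy` MBean and object name, shows how a server could observe registrations in order to refresh its application view deployment cache:

```java
import javax.management.MBeanServer;
import javax.management.MBeanServerDelegate;
import javax.management.MBeanServerFactory;
import javax.management.MBeanServerNotification;
import javax.management.NotificationListener;
import javax.management.ObjectName;

// Sketch: listen on the MBeanServerDelegate for registration notifications,
// the hook a server could use to update its deployment cache.
public class RegistrationWatcher {
    public interface DummyMBean { }                     // standard MBean interface
    public static class Dummy implements DummyMBean { } // trivial MBean

    // Registers a listener, then registers a trivial MBean and returns the
    // name reported by the resulting REGISTRATION_NOTIFICATION.
    public static String watchAndRegister() throws Exception {
        MBeanServer server = MBeanServerFactory.createMBeanServer();
        final String[] registered = new String[1];

        NotificationListener listener = (notification, handback) -> {
            if (notification instanceof MBeanServerNotification
                    && MBeanServerNotification.REGISTRATION_NOTIFICATION
                           .equals(notification.getType())) {
                // Here a real server would fetch the newly registered
                // application view descriptor and update its cache.
                registered[0] = ((MBeanServerNotification) notification)
                                    .getMBeanName().toString();
            }
        };
        server.addNotificationListener(
            MBeanServerDelegate.DELEGATE_NAME, listener, null, null);

        // In the system described above this would be an application view
        // deployment MBean; a trivial standard MBean stands in here.
        server.registerMBean(new Dummy(),
            new ObjectName("ai:type=AppViewDeployment,name=demo"));
        return registered[0];
    }

    public static void main(String[] args) throws Exception {
        System.out.println(watchAndRegister());
    }
}
```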


An application view runtime MBean can handle maintenance of runtime statistics and an application view deployment cache on a local server. When a new instance of the application view runtime MBean is created, it can add an entry to the application view deployment cache for use by an application view EJB on a local server.


When an application view is deployed through use of an application view deployer, certain MBeans can be created, such as one application view deployment MBean per server instance. There can also be one application view runtime MBean created per server instance and one application view summary MBean per server instance. The application view deployment MBean can contain an application view descriptor. The application view runtime MBean can have methods to get and update statistics. The application view summary MBean can have the same interface as the application view runtime MBean, but only for a task such as getting statistics. The implementation of the getter methods can search the instances in the cluster for application view runtime MBeans and return aggregate statistics. When an application view is undeployed, all MBeans deployed in the deploy phase can be deleted.


In order to track event statistics, an application view runtime MBean can have an attribute such as “EventCount” that tracks the total number of events of a given type delivered through the current application view, such as the total number delivered to all clients. This counter can be updated any time the event context sends an event.

    • public int getEventCount(String eventType);
    • public void incrementEventCount(String eventType);


The number of event delivery attempts that end in error can be tracked, such as with an “EventErrorCount” attribute. This attribute can be incremented any time an exception is thrown, such as from EventContext.postEvent( ).

    • public int getEventErrorCount(String eventType);
    • public void incrementEventErrorCount(String eventType);


In order to track service statistics, an application view runtime MBean can have an attribute such as “ServiceCount” that tracks the total number of invocations made on a given service for the current application view, such as the total number from all clients. This counter can be updated any time one of the “invokeService” methods is called.

    • public int getServiceCount(String serviceName);
    • public void incrementServiceCount(String serviceName);


The number of asynchronous invocations made on a given service can be tracked with an attribute such as “AsyncServiceCount”. This attribute can be incremented when one of the “invokeServiceAsync” methods is called.

    • public int getAsyncServiceCount(String serviceName);
    • public void incrementAsyncServiceCount(String serviceName);


The number of service invocations that end in error can be tracked with an attribute such as “ServiceErrorCount”. This attribute can be incremented any time an exception is thrown from invokeService methods.

    • public int getServiceErrorCount( );
    • public void setServiceErrorCount(int count);


Additional attributes that might be useful can track the minimum, maximum, and average service execution times, as well as event delivery rate.


In order to track client statistics, an application view runtime MBean can have an attribute such as “ClientCount” that can track the total number of application view clients that currently depend on this application view deployment. This counter can be updated any time a new application view object is constructed, and decremented any time the finalizer or “close” is called.

    • public int getClientCount( );
    • public void incrementClientCount( );
    • public void decrementClientCount( );
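A minimal, assumed implementation of the counter attributes listed above might look like the following; the class name is hypothetical, and thread-safe counters are used because several clients can update the same runtime MBean concurrently:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;

// Illustrative backing for the EventCount, ServiceCount, and ClientCount
// attributes of an application view runtime MBean.
public class AppViewRuntimeStats {
    private final Map<String, AtomicInteger> eventCounts = new ConcurrentHashMap<>();
    private final Map<String, AtomicInteger> serviceCounts = new ConcurrentHashMap<>();
    private final AtomicInteger clientCount = new AtomicInteger();

    public int getEventCount(String eventType) {
        AtomicInteger c = eventCounts.get(eventType);
        return c == null ? 0 : c.get();
    }

    public void incrementEventCount(String eventType) {
        // called any time the event context sends an event
        eventCounts.computeIfAbsent(eventType, k -> new AtomicInteger())
                   .incrementAndGet();
    }

    public int getServiceCount(String serviceName) {
        AtomicInteger c = serviceCounts.get(serviceName);
        return c == null ? 0 : c.get();
    }

    public void incrementServiceCount(String serviceName) {
        // called any time one of the "invokeService" methods is invoked
        serviceCounts.computeIfAbsent(serviceName, k -> new AtomicInteger())
                     .incrementAndGet();
    }

    public int getClientCount() { return clientCount.get(); }
    public void incrementClientCount() { clientCount.incrementAndGet(); }
    public void decrementClientCount() { clientCount.decrementAndGet(); }

    public static void main(String[] args) {
        AppViewRuntimeStats stats = new AppViewRuntimeStats();
        stats.incrementEventCount("order.created");
        stats.incrementEventCount("order.created");
        stats.incrementClientCount();
        System.out.println(stats.getEventCount("order.created")); // 2
        System.out.println(stats.getClientCount());               // 1
    }
}
```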


In the overall AI system, an AI application can continue delivering events from adapters running in available nodes if a cluster server or managed server crashes. Event generators or routers running in the failed node can restart when the failed node restarts. Users can be notified that in-flight transactions have been cancelled or rolled-back, and should be retried. Wherever possible the transaction can be retried after reestablishing connections, in order to make use of resources on another live server. One example of AI reestablishing a connection is the event context as used for sending events to AI from an event router.


In the event of an admin server failure, an AI application can do the tasks listed with respect to the crash of a cluster server. The AI application should still be able to boot and reboot successfully using the previous domain and server configuration.


The use of server clustering allows an AI component, such as an event-forwarding server, event queue, or JMS server, to be used in a scalable and highly available fashion. A highly available component does not have any single points of failure, and can migrate services from failed nodes to live nodes in a cluster. Any service offered by an AI component can be targeted to several nodes in a cluster. In the event of a node failure in the cluster, the services located on the failed node can be migrated to another live node in the cluster.


In the event of a crash of a cluster or managed server, the AI application can continue accepting new work. The acceptance of new work can include the deploying and undeploying of application views and connection factories, monitoring of old application views and connection factories, delivering events from adapters, and servicing both synchronous and asynchronous service invocations. An AI application can also support the manual migration of services on the failed node to a live node, such as a singleton MDB listening on a physical destination managed by a failed JMS server. Application integration can use a singleton MDB, such as if a customer needs ordered event processing.


The foregoing description of the preferred embodiments of the present invention has been provided for the purposes of illustration and description. It is not intended to be exhaustive or to limit the invention to the precise forms disclosed. Many modifications and variations will be apparent to the practitioner skilled in the art. Embodiments were chosen and described in order to best describe the principles of the invention and its practical application, thereby enabling others skilled in the art to understand the invention for various embodiments and with various modifications that are suited to the particular use contemplated. It is intended that the scope of the invention be defined by the following claims and their equivalents.

Claims
  • 1. A system for high-availability management of application view components for application integration, comprising: a cluster of server nodes; a redundant Management server on each node in the cluster, wherein the redundant management server is a Java Management Extension (JMX) server, wherein the redundant management server manages deployment of application view components that integrate applications across the cluster such that the redundant management server on a first node in the cluster transmits a notification to the redundant management server on a second node in the cluster upon completing the deployment of an application view component, and wherein a redundant management server on the second node takes over processing of said deployment of the application view component in case of failure of the redundant server on the first node; an administration server located on a node in the cluster having the redundant management server thereon, wherein the administration server receives a component deployment request from a client, selects a node from the cluster and instructs the selected node in the cluster to deploy the application view component; and a set of managed beans deployed on each redundant management server for representing state of application deployments on the cluster, said managed beans including: a deployment bean that represents the deployment of the application view; a runtime bean that represents runtime state of the application view component within a node in the cluster; and a summary bean that aggregates statistics from the runtime beans deployed in all the nodes in the cluster; and an application view deployment cache located on one or more of the nodes in said cluster, wherein an entry is added to the application view deployment cache upon instantiating the runtime bean; wherein the redundant management server located on another node in the cluster takes over processing deployment requests in event of a failure of the administration server; wherein the redundant management server processing deployment requests in the event of administration server failure periodically attempts to contact the administration server until the administration server is notified of new deployments that have occurred since the administration server became unavailable; and wherein the administration server is notified of new application view deployments which have occurred during said failure by utilizing the managed beans in the event that the administration server becomes available again after said failure.
  • 2. A system according to claim 1, wherein: each redundant management server further manages undeployment of application view components for the cluster.
  • 3. A system according to claim 1, wherein: the redundant management server multicasts the notification to every other redundant management server in the cluster relating to the deployment.
  • 4. A system according to claim 1, wherein: each redundant management server sends the notification selected from a group consisting of deploy notifications, undeploy notifications, and processing notifications.
  • 5. A system according to claim 1, wherein: a redundant management server on at least one of the nodes in the cluster deploys the application view component for the cluster in event of a failure of the management server handling the deployment request.
  • 6. A system according to claim 1, wherein: the administration server checks the redundant management server managing application deployment in the event of a failure of the administration server to determine whether the deployment is complete.
  • 7. A system according to claim 1, further comprising: a descriptor containing metadata that is used by a code generation tool to create skeleton Java classes for the set of managed beans.
  • 8. A system according to claim 1, wherein the set of managed beans is redeployed when the management server restarts.
  • 9. A system according to claim 8, wherein: said application view component is an application view integration component that services synchronous and asynchronous service invocations from clients and delivers events from adapters to the clients.
  • 10. A method for providing high-availability deployment of application view components, comprising: receiving a deployment request for an application view component from a client application to a cluster of servers, the cluster having one or more redundant Java Management Extension (JMX) servers distributed thereon, wherein the application view component integrates applications; selecting a managed server in the cluster of servers to handle the request, the selecting being done using a Java Management Extension (JMX) server on an administration server in the cluster; periodically checking each JMX server for deployment work using the administration server, wherein the administration server is configured to periodically check for new deployments and undeployments; handling the deployment request on the managed server selected by the administration server, the selected managed server containing a redundant JMX server that takes over processing deployment requests for the JMX server on the administration server during failure of the administration server, wherein the redundant JMX server hosts beans for representing the state of component deployment, said beans further including: a deployment bean that represents the deployment of the application view component; a runtime bean that represents runtime state of the application view component within a node in the cluster; and a summary bean that aggregates statistics from the runtime beans deployed in all the nodes in the cluster; maintaining an application view deployment cache on one or more nodes in the cluster; adding an entry to the application view deployment cache upon instantiating the runtime bean; sending a notification to the other servers in the cluster of servers when the selected managed server has completed the deployment request such that each redundant JMX server in the cluster is informed of the completed deployment; and determining components that have been deployed during a failure of the administration server, said determining being performed by the administration server utilizing the beans after the administration server becomes available.
  • 11. A method according to claim 10, further comprising: migrating the handling of the deployment to a second managed server in the cluster of servers, the second managed server containing a redundant JMX server.
  • 12. A method according to claim 10, further comprising: deploying a redundant JMX server on each managed server in the cluster of servers.
  • 13. A method according to claim 10, wherein: sending the notification is accomplished by multicasting.
  • 14. A method according to claim 10, wherein: sending the notification is accomplished by heartbeating the notification until it is received by each server in the cluster of servers.
  • 15. A method according to claim 10, further comprising: storing the notification in an event queue until the notification can be retrieved by the administration server.
  • 16. A method according to claim 10, further comprising: using a JMX MBean to allow a user to modify the deployment.
  • 17. A method for providing high-availability deployment of application view components, comprising: selecting an administration server in a cluster of servers, the administration server having a Java Management Extension (JMX) server that manages application view component deployment across the cluster, the application view component integrating one or more applications; ensuring that a redundant JMX server exists on at least one managed server in the cluster of servers, wherein the redundant JMX server takes over managing application view component deployment for the administration server's JMX server in the event of an administration server failure; receiving an application view component deployment request from a client by the administration server; selecting at least one server in the cluster by the administration server and instructing the selected server to handle the application view component deployment request; transmitting a notification by the selected server to other servers in the cluster upon having completed the application view component deployment request; deploying a set of beans to represent the state of the application view component deployment, said beans including: a deployment bean that represents the deployment of the application view component; a runtime bean that represents runtime state of the application view component within a node in the cluster; and a summary bean that aggregates statistics from the runtime beans deployed in all the nodes in the cluster; maintaining an application view deployment cache on one or more nodes in the cluster; adding an entry to the application view deployment cache upon instantiating the runtime bean; migrating management of the application view component deployment from the administration server to the redundant JMX server in the cluster in the event of an administration server failure; and notifying the administration server of the application view component deployment that has occurred during the administration server failure, said notifying being performed after the administration server has become available after said failure.
  • 18. A computer system comprising: a processor; object code executed by said processor, said object code configured to: select an administration server in a cluster of servers, the administration server having a Java Management Extension (JMX) server that manages application view component deployment across the cluster, the application view component integrating one or more applications; ensure that a redundant JMX server exists on at least one managed server in the cluster of servers, wherein the redundant JMX server takes over managing application view component deployment for the administration server's JMX server in the event of an administration server failure; receive an application view component deployment request from a client by the administration server; select at least one server in the cluster by the administration server and instruct the selected server to handle the application view component deployment request; transmit a notification by the selected server to other servers in the cluster upon having completed the application view component deployment request; deploy a set of beans to represent the state of the application view component deployment, said beans including: a deployment bean that represents the deployment of the application view component; a runtime bean that represents runtime state of the application view component within a node in the cluster; and a summary bean that aggregates statistics from the runtime beans deployed in all the nodes in the cluster; maintain an application view deployment cache on one or more nodes in the cluster; add an entry to the application view deployment cache upon instantiating the runtime bean; migrate management of the application view component deployment from the administration server to the redundant JMX server in the cluster in the event of an administration server failure; and notify the administration server of the application view component deployment that has occurred during the administration server failure, said notifying being performed after the administration server has become available after said failure.
  • 19. The system of claim 1, wherein the runtime bean further includes a service count attribute that tracks the total number of service invocations made on a given service for a current application view.
  • 20. The system of claim 1, wherein the runtime bean further includes an event count attribute that tracks the total number of events of a given type delivered through a current application view.
  • 21. The system of claim 1, wherein the runtime bean further includes a client count attribute that tracks the total number of application view clients that currently depend on the application view component.
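Claims 19 through 21 describe a per-node runtime bean that exposes service, event, and client counters for a deployed application view. The following is a minimal illustrative sketch, in the style of a JMX standard MBean, of how such counters could be exposed; all class, interface, and method names here are hypothetical and are not taken from the patented implementation.

```java
import java.util.concurrent.atomic.AtomicLong;

// Hypothetical JMX-style management interface exposing the three counter
// attributes described in claims 19-21.
interface ApplicationViewRuntimeMBean {
    long getServiceCount(); // total service invocations on the application view
    long getEventCount();   // total events delivered through the application view
    long getClientCount();  // clients currently depending on the application view
}

// Per-node runtime bean. In a deployment like the one claimed, one instance
// would be registered with each node's MBean server, and a cluster-wide
// summary bean would aggregate these counters across all nodes.
class ApplicationViewRuntime implements ApplicationViewRuntimeMBean {
    private final AtomicLong serviceCount = new AtomicLong();
    private final AtomicLong eventCount = new AtomicLong();
    private final AtomicLong clientCount = new AtomicLong();

    // Called by the application view on each service invocation.
    void recordServiceInvocation() { serviceCount.incrementAndGet(); }

    // Called by the application view on each event delivery.
    void recordEventDelivery() { eventCount.incrementAndGet(); }

    // Called when an application view client attaches or detaches.
    void clientAttached() { clientCount.incrementAndGet(); }
    void clientDetached() { clientCount.decrementAndGet(); }

    @Override public long getServiceCount() { return serviceCount.get(); }
    @Override public long getEventCount()   { return eventCount.get(); }
    @Override public long getClientCount()  { return clientCount.get(); }
}
```

Atomic counters are used so the bean can be updated from concurrent request threads while being read by a management client without additional locking.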
CLAIM OF PRIORITY

This application claims priority to U.S. Provisional Patent Application No. 60/376,958, filed May 1, 2002, entitled “HIGH AVAILABILITY APPLICATION VIEW DEPLOYMENT,” which is hereby incorporated herein by reference.

US Referenced Citations (262)
Number Name Date Kind
5283897 Georgiadis et al. Feb 1994 A
5321841 East et al. Jun 1994 A
5469562 Saether Nov 1995 A
5544318 Schmitz et al. Aug 1996 A
5604860 McLaughlin et al. Feb 1997 A
5630131 Palevich et al. May 1997 A
5748975 Van De Vanter May 1998 A
5801958 Dangelo et al. Sep 1998 A
5828847 Gehr et al. Oct 1998 A
5835769 Jervis et al. Nov 1998 A
5836014 Faiman, Jr. Nov 1998 A
5862327 Kwang et al. Jan 1999 A
5892913 Adiga et al. Apr 1999 A
5933838 Lomet Aug 1999 A
5944794 Okamoto et al. Aug 1999 A
5950010 Hesse et al. Sep 1999 A
5951694 Choquier et al. Sep 1999 A
5961593 Gabber et al. Oct 1999 A
5966535 Benedikt et al. Oct 1999 A
6012083 Savitzky et al. Jan 2000 A
6012094 Leymann et al. Jan 2000 A
6016495 McKeehan et al. Jan 2000 A
6018730 Nichols et al. Jan 2000 A
6021443 Bracho et al. Feb 2000 A
6023578 Birsan et al. Feb 2000 A
6023722 Colyer Feb 2000 A
6028997 Leymann et al. Feb 2000 A
6029000 Woolsey et al. Feb 2000 A
6044217 Brealey et al. Mar 2000 A
6061721 Ismael et al. May 2000 A
6067623 Blakley, III et al. May 2000 A
6070184 Blount et al. May 2000 A
6078943 Yu Jun 2000 A
6081840 Zhao Jun 2000 A
6085030 Whitehead et al. Jul 2000 A
6119143 Dias et al. Sep 2000 A
6119149 Notani Sep 2000 A
6128279 O'Neil et al. Oct 2000 A
6131118 Stupek, Jr. et al. Oct 2000 A
6141686 Jackowski et al. Oct 2000 A
6141701 Whitney Oct 2000 A
6148336 Thomas et al. Nov 2000 A
6154738 Call Nov 2000 A
6154769 Cherkasova et al. Nov 2000 A
6185734 Saboff et al. Feb 2001 B1
6189044 Thomson et al. Feb 2001 B1
6195680 Goldszmidt et al. Feb 2001 B1
6212546 Starkovich et al. Apr 2001 B1
6222533 Notani et al. Apr 2001 B1
6226666 Chang et al. May 2001 B1
6226675 Meltzer et al. May 2001 B1
6226788 Schoening et al. May 2001 B1
6230160 Chan et al. May 2001 B1
6230287 Pinard et al. May 2001 B1
6230309 Turner et al. May 2001 B1
6233607 Taylor et al. May 2001 B1
6237135 Timbol May 2001 B1
6243737 Flanagan et al. Jun 2001 B1
6253230 Couland et al. Jun 2001 B1
6269373 Apte et al. Jul 2001 B1
6282711 Halpern et al. Aug 2001 B1
6292830 Taylor et al. Sep 2001 B1
6292932 Baisley et al. Sep 2001 B1
6311327 O'Brien et al. Oct 2001 B1
6317786 Yamane et al. Nov 2001 B1
6324681 Sebesta et al. Nov 2001 B1
6330569 Baisley et al. Dec 2001 B1
6334114 Jacobs et al. Dec 2001 B1
6336122 Lee et al. Jan 2002 B1
6338064 Ault et al. Jan 2002 B1
6343265 Glebov et al. Jan 2002 B1
6345283 Anderson Feb 2002 B1
6348970 Marx Feb 2002 B1
6349408 Smith Feb 2002 B1
6353923 Bogel et al. Mar 2002 B1
6356906 Lippert et al. Mar 2002 B1
6360221 Gough et al. Mar 2002 B1
6360358 Elsbree et al. Mar 2002 B1
6367068 Vaidyanathan et al. Apr 2002 B1
6374297 Wolf et al. Apr 2002 B1
6377939 Young Apr 2002 B1
6393605 Loomans May 2002 B1
6408311 Baisley et al. Jun 2002 B1
6411698 Bauer et al. Jun 2002 B1
6438594 Bowman-Armuah Aug 2002 B1
6442565 Tyra et al. Aug 2002 B1
6442611 Navarre et al. Aug 2002 B1
6445711 Scheel et al. Sep 2002 B1
6463503 Jones et al. Oct 2002 B1
6470364 Prinzing Oct 2002 B1
6515967 Wei et al. Feb 2003 B1
6516322 Meredith Feb 2003 B1
6535908 Johnson et al. Mar 2003 B1
6549949 Bowman-Amuah Apr 2003 B1
6553425 Shah et al. Apr 2003 B1
6560636 Cohen et al. May 2003 B2
6560769 Moore et al. May 2003 B1
6584454 Hummel, Jr. et al. Jun 2003 B1
6594693 Borwankar Jul 2003 B1
6594700 Graham et al. Jul 2003 B1
6601113 Koistinen et al. Jul 2003 B1
6604198 Beckman et al. Aug 2003 B1
6609115 Mehring et al. Aug 2003 B1
6615258 Barry et al. Sep 2003 B1
6622168 Datta Sep 2003 B1
6636491 Kari et al. Oct 2003 B1
6637020 Hammond Oct 2003 B1
6643652 Helgeson et al. Nov 2003 B2
6654932 Bahrs et al. Nov 2003 B1
6662357 Bowman-Amuah Dec 2003 B1
6678518 Eerola Jan 2004 B2
6684387 Acker et al. Jan 2004 B1
6684388 Gupta et al. Jan 2004 B1
6687702 Vaitheeswaran et al. Feb 2004 B2
6687848 Najmi Feb 2004 B1
6697849 Carlson Feb 2004 B1
6721740 Skinner et al. Apr 2004 B1
6721747 Lipkin Apr 2004 B2
6721779 Maffeis Apr 2004 B1
6732237 Jacobs et al. May 2004 B1
6748420 Quatrano et al. Jun 2004 B1
6754884 Lucas et al. Jun 2004 B1
6782416 Cochran et al. Aug 2004 B2
6789054 Makhlouf Sep 2004 B1
6795967 Evans et al. Sep 2004 B1
6799718 Chan et al. Oct 2004 B2
6802000 Greene et al. Oct 2004 B1
6804686 Stone et al. Oct 2004 B1
6832238 Sharma et al. Dec 2004 B1
6836883 Abrams et al. Dec 2004 B1
6847981 Song et al. Jan 2005 B2
6850979 Saulpaugh et al. Feb 2005 B1
6857012 Sim et al. Feb 2005 B2
6859834 Arora et al. Feb 2005 B1
6874143 Murray et al. Mar 2005 B1
6889244 Gaither et al. May 2005 B1
6910041 Exton et al. Jun 2005 B2
6915519 Williamson et al. Jul 2005 B2
6918084 Slaughter et al. Jul 2005 B1
6922827 Vasilik et al. Jul 2005 B2
6925482 Gopal Aug 2005 B2
6925492 Shirriff Aug 2005 B2
6950825 Chang et al. Sep 2005 B2
6950872 Todd, II Sep 2005 B2
6963914 Breitbart et al. Nov 2005 B1
6970939 Sim Nov 2005 B2
6971096 Ankireddipally et al. Nov 2005 B1
6976086 Sadeghi et al. Dec 2005 B2
6983328 Beged-Dov et al. Jan 2006 B2
6993743 Crupi et al. Jan 2006 B2
7000219 Barrett et al. Feb 2006 B2
7017146 Dellarocas Mar 2006 B2
7051072 Stewart et al. May 2006 B2
7051316 Charisius et al. May 2006 B2
7054858 Sutherland May 2006 B2
7058637 Britton et al. Jun 2006 B2
7062718 Kodosky et al. Jun 2006 B2
7069507 Alcazar et al. Jun 2006 B1
7072934 Helgeson et al. Jul 2006 B2
7073167 Iwashita Jul 2006 B2
7080092 Upton Jul 2006 B2
7089568 Yoshida et al. Aug 2006 B2
7089584 Sharma Aug 2006 B1
7107578 Alpern Sep 2006 B1
7111243 Ballard et al. Sep 2006 B1
7117504 Smith et al. Oct 2006 B2
7127704 Van De Vanter et al. Oct 2006 B2
7143186 Stewart et al. Nov 2006 B2
7146422 Marlatt et al. Dec 2006 B1
7150015 Pace Dec 2006 B2
7155705 Hershberg et al. Dec 2006 B1
7159007 Stawikowski Jan 2007 B2
7165041 Guheen et al. Jan 2007 B1
7181731 Pace et al. Feb 2007 B2
7184967 Mital et al. Feb 2007 B1
7240331 Vion-Dury et al. Jul 2007 B2
20020004848 Sudarshan et al. Jan 2002 A1
20020010781 Tuatini Jan 2002 A1
20020010803 Oberstein et al. Jan 2002 A1
20020016759 Macready et al. Feb 2002 A1
20020026630 Schmidt et al. Feb 2002 A1
20020049788 Lipkin et al. Apr 2002 A1
20020078174 Sim et al. Jun 2002 A1
20020078365 Burnette et al. Jun 2002 A1
20020083075 Brummel et al. Jun 2002 A1
20020083118 Sim Jun 2002 A1
20020083187 Sim et al. Jun 2002 A1
20020111820 Massey Aug 2002 A1
20020111922 Young et al. Aug 2002 A1
20020112069 Sim Aug 2002 A1
20020116454 Dyla et al. Aug 2002 A1
20020120685 Srivastava et al. Aug 2002 A1
20020120786 Sehayek et al. Aug 2002 A1
20020133491 Sim et al. Sep 2002 A1
20020143960 Goren et al. Oct 2002 A1
20020152106 Stoxen et al. Oct 2002 A1
20020161826 Arteaga et al. Oct 2002 A1
20020165936 Alston et al. Nov 2002 A1
20020169644 Greene Nov 2002 A1
20020184145 Sijacic et al. Dec 2002 A1
20020184610 Chong et al. Dec 2002 A1
20020188486 Gil et al. Dec 2002 A1
20020194244 Raventos Dec 2002 A1
20020194267 Flesner et al. Dec 2002 A1
20020194495 Gladstone et al. Dec 2002 A1
20020198800 Shamrakov Dec 2002 A1
20030004746 Kheirolomoom et al. Jan 2003 A1
20030005181 Bau, III et al. Jan 2003 A1
20030014439 Boughannam Jan 2003 A1
20030018665 Dovin et al. Jan 2003 A1
20030018832 Amirisetty et al. Jan 2003 A1
20030018963 Ashworth et al. Jan 2003 A1
20030023957 Bau et al. Jan 2003 A1
20030026254 Sim Feb 2003 A1
20030028579 Kulkarni et al. Feb 2003 A1
20030031176 Sim Feb 2003 A1
20030033437 Fischer et al. Feb 2003 A1
20030043191 Tinsley et al. Mar 2003 A1
20030046266 Mullins et al. Mar 2003 A1
20030046369 Sim et al. Mar 2003 A1
20030046591 Asghari-Kamrani et al. Mar 2003 A1
20030055868 Fletcher et al. Mar 2003 A1
20030055878 Fletcher et al. Mar 2003 A1
20030061405 Fisher et al. Mar 2003 A1
20030074217 Beisiegel et al. Apr 2003 A1
20030074467 Oblak et al. Apr 2003 A1
20030079029 Garimella et al. Apr 2003 A1
20030093402 Upton May 2003 A1
20030093403 Upton May 2003 A1
20030093470 Upton May 2003 A1
20030093471 Upton May 2003 A1
20030097345 Upton May 2003 A1
20030097574 Upton May 2003 A1
20030105884 Upton Jun 2003 A1
20030110117 Saidenberg et al. Jun 2003 A1
20030110315 Upton Jun 2003 A1
20030110446 Nemer Jun 2003 A1
20030126136 Omoigui Jul 2003 A1
20030145047 Upton Jul 2003 A1
20030149791 Kane et al. Aug 2003 A1
20030167358 Marvin et al. Sep 2003 A1
20030182452 Upton Sep 2003 A1
20030196168 Hu Oct 2003 A1
20030212834 Potter Nov 2003 A1
20030220967 Potter Nov 2003 A1
20030233631 Curry Dec 2003 A1
20040015368 Potter et al. Jan 2004 A1
20040019645 Goodman et al. Jan 2004 A1
20040040011 Bosworth et al. Feb 2004 A1
20040068568 Griffin Apr 2004 A1
20040078373 Ghoneimy et al. Apr 2004 A1
20040078440 Potter Apr 2004 A1
20040133660 Junghuber et al. Jul 2004 A1
20040148336 Hubbard et al. Jul 2004 A1
20040204976 Oyama et al. Oct 2004 A1
20040216086 Bau Oct 2004 A1
20040225995 Marvin et al. Nov 2004 A1
20040260715 Mongeon et al. Dec 2004 A1
20050033663 Narin et al. Feb 2005 A1
20050223392 Cox et al. Oct 2005 A1
20060234678 Juitt et al. Oct 2006 A1
20070038500 Hammitt et al. Feb 2007 A1
Foreign Referenced Citations (5)
Number Date Country
2 248 634 Mar 2000 CA
1 006 443 Jun 2000 EP
1 061 445 Dec 2000 EP
0029924 May 2000 WO
WO 0190884 Nov 2001 WO
Related Publications (1)
Number Date Country
20030220967 A1 Nov 2003 US
Provisional Applications (1)
Number Date Country
60376958 May 2002 US