As computerized systems have increased in popularity, so has the complexity of the software and hardware employed within such systems. In general, the need for seemingly more complex software continues to grow, which in turn tends to be one of the forces driving greater development of hardware. For example, if application programs require too much of a given hardware system, the hardware system can operate inefficiently, or otherwise be unable to process the application program at all. Recent trends in application program development, however, have removed many of these types of hardware constraints, at least in part through the use of distributed application programs. In general, distributed application programs comprise components that are executed over several different hardware components, often on different computer systems in a tiered environment.
With distributed application programs, the different computer systems may communicate various processing results to each other over a network. Along these lines, an organization will often employ a distributed application server to manage several different distributed application programs over many different computer systems. For example, a user might employ one distributed application server to manage the operations of an e-commerce application program that is executed on one set of different computer systems. The user might also use the distributed application server to manage execution of customer management application programs on the same or even a different set of computer systems.
Of course, each corresponding distributed application managed through the distributed application server can, in turn, have several different modules and components that are executed on still other different computer systems. One can appreciate, therefore, that while this ability to combine processing power through several different computer systems can be an advantage, there are also disadvantages to such a wide distribution of application program modules. For example, organizations typically expect a distributed application server to run distributed applications optimally on the available resources, taking into account changing demand patterns and resource availability.
Unfortunately, conventional distributed application servers are typically ill-equipped (or not equipped at all) to automatically handle and manage all of the different problems that can occur for each given module of a distributed application program. For example, a user may have an online store application program that is routinely swamped with orders whenever there is a promotion, or during the same holidays each year. In some cases, the user might expect the distributed application server to analyze and anticipate these fluctuating demands on various components or modules of the given distributed application program.
In particular, the organization might expect the distributed application server to swap around various resources so that high-demand processes can be handled by software and hardware components on other systems that may be less busy. Such accommodations, however, can be difficult if not impossible to do with conventional distributed application server platforms. Specifically, most conventional distributed application server platforms are ill-equipped or otherwise unable to identify and properly manage different demand patterns between components of a distributed application program. This may be due at least partly to the complexity in managing application programs that can have many distributed components and subsystems, many of which are long-running workflows, and/or otherwise legacy or external systems.
In addition, conventional distributed application program servers are generally not configured for efficient scalability. For example, most distributed application servers are configured to manage precise instructions of the given distributed application program, such as precise reference and/or component addressing schemes. That is, there is often little or no “loose coupling” between components of an application program. Thus, when an administrator of the server desires to redeploy certain modules or components onto another server or set of computer systems, there is an enhanced potential for errors, particularly where a large number of different computer systems and/or modules may be involved. This potential for errors can be realized when some of the new module or component references are not passed onward everywhere they are needed, or when they are passed onward incorrectly.
One aspect of distributed application programs that can further enhance this potential for error is the notion that the distributed application server may be managing several different distributed application programs, each of which executes on a different platform. That is, the distributed application server may need to translate different instructions for each different platform before the corresponding distributed application program may be able to accept and implement the change. Due to these and other complications, distributed application programs tend to be fairly sensitive to demand spikes.
This sensitivity to demand spikes can mean that various distributed application program modules may continue to operate at a sub-optimum level for a long period of time before the error can be detected. In some cases, the administrator for the distributed application server may not even take corrective action since attempting to do so could result in an even greater number of errors. As a result, a distributed application program module could potentially become stuck in a pattern of inefficient operation, such as continually rebooting itself, without ever getting corrected during the lifetime of the distributed application program. Accordingly, there are a number of difficulties with management of current distributed application programs and distributed application program servers that can be addressed.
Implementations of the present invention provide systems, methods, and computer program products configured to automatically implement operations of distributed application programs through a distributed application program server. In at least one implementation, for example, a distributed application program server comprises a set of implementation means and a set of analytics means. Through a platform-specific driver for each given module of a distributed application program, the implementation means deploy sets of high-level instructions, or declarative models, to create a given distributed application program module on the respective platform, while the analytics means automatically monitor and adjust the declarative models, as needed. This loose coupling of server components to the distributed application program through the declarative models, combined with automatic monitoring and adjustment, can allow the server to better manage demand, resource, or usage spikes, and/or other fluctuations in distributed application program behavior.
Accordingly, a method of automatically implementing one or more sets of high-level instructions in a distributed application program during execution using declarative models can involve identifying one or more modifications to corresponding one or more declarative models in a repository. The one or more declarative models include high-level instructions regarding one or more operations of a distributed application program. The method can also involve refining the one or more declarative models to include contextual information regarding operations of the distributed application program. In addition, the method can involve translating the one or more refined declarative models into one or more commands to be implemented by one or more application containers of the distributed application program. Furthermore, the method can involve sending the translated commands to the one or more application containers. The translated commands are then received by the one or more application containers and used to determine and configure behavior of the distributed application program in those containers.
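By way of illustration only, the following is a minimal Python sketch of how such a method might be organized. The names used here (repository, refiner, driver, containers, and their methods) are assumptions introduced for illustration and do not represent an actual implementation of the claimed subject matter.

```python
# Hypothetical sketch of the identify -> refine -> translate -> send flow.
# All names (repository, refiner, driver, containers) are illustrative
# assumptions, not an actual implementation.

def implement_model_changes(repository, refiner, driver, containers):
    # 1. Identify modifications to declarative models in the repository.
    for model in repository.changed_models():
        # 2. Refine the model with contextual information about the
        #    distributed application program's operations.
        refined = refiner.refine(model)
        # 3. Translate the refined model into commands for the containers.
        commands = driver.translate(refined)
        # 4. Send the translated commands to the application containers,
        #    which use them to determine and configure the behavior of the
        #    distributed application program.
        for container in containers:
            container.execute(commands)
```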
An additional or alternative method of automatically implementing one or more sets of high-level instructions in a distributed application program during execution using declarative models can involve receiving a set of one or more new declarative models from a repository. The one or more new declarative models include high-level instructions regarding operations of a distributed application program. The method can also involve implementing the one or more new declarative models through an implementation means and one or more application containers. As a result, a first set of low-level commands is prepared and sent to the one or more application containers to be executed.
In addition, the method can involve identifying a change in the one or more new declarative models via one or more analytics means. The change reflects performance information for the distributed application program that is received from the one or more application containers. Furthermore, the method can involve implementing an updated version of the one or more declarative models through the implementation means and the one or more application containers. As such, a second set of low-level commands is prepared and sent to the one or more application containers to be executed based on the changes to the one or more new declarative models.
This Summary is provided to introduce a selection of concepts in a simplified form that are further described below in the Detailed Description. This Summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used as an aid in determining the scope of the claimed subject matter.
Additional features and advantages of the invention will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by the practice of the invention. The features and advantages of the invention may be realized and obtained by means of the instruments and combinations particularly pointed out in the appended claims. These and other features of the present invention will become more fully apparent from the following description and appended claims, or may be learned by the practice of the invention as set forth hereinafter.
In order to describe the manner in which the above-recited and other advantages and features of the invention can be obtained, a more particular description of the invention briefly described above will be rendered by reference to specific embodiments thereof which are illustrated in the appended drawings. Understanding that these drawings depict only typical embodiments of the invention and are not therefore to be considered to be limiting of its scope, the invention will be described and explained with additional specificity and detail through the use of the accompanying drawings.
Implementations of the present invention extend to systems, methods, and computer program products configured to automatically implement operations of distributed application programs through a distributed application program server. In at least one implementation, for example, a distributed application program server comprises a set of implementation means and a set of analytics means. Through a platform-specific driver for each given module of a distributed application program, the implementation means deploy sets of high-level instructions, or declarative models, to create a given distributed application program module on the respective platform, while the analytics means automatically monitor and adjust the declarative models, as needed. This loose coupling of server components to the distributed application program through the declarative models, combined with automatic monitoring and adjustment, can allow the server to better manage demand, resource, or usage spikes, and/or other fluctuations in distributed application program behavior.
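One will appreciate that these components could be organized in any number of ways. The following Python stubs are a non-limiting sketch of one such organization; the class and method names are assumptions introduced for illustration and are keyed to the reference numerals used in this description.

```python
# Illustrative component stubs only; the classes and methods are assumptions.

class Repository:                      # repository 120
    """Stores declarative models (153) and modifications to them."""
    def __init__(self):
        self.models = {}

class ExecutiveComponent:              # executive component 115
    """Identifies and refines declarative models for translation."""
    def __init__(self, repository, refining_component):
        self.repository = repository
        self.refining_component = refining_component

class PlatformSpecificDriver:          # platform-specific driver 130
    """Translates refined models into commands for one platform's containers."""
    def translate(self, refined_model):
        raise NotImplementedError      # one concrete driver per target platform

class AnalyticsMeans:                  # analytics means 110
    """Monitors event streams and adjusts declarative models as needed."""
    def ingest(self, event_stream):
        pass
```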
Accordingly, one will appreciate from the following specification and claims that implementations of the present invention can provide a number of different advantages to managing distributed application programs. This is at least partly due to the ease of implementing high-level instructions, such as those created by a program developer, as low-level instructions (e.g., executable commands) that can be executed by distributed application containers that configure and manage distributed application modules on a platform-specific basis. For example, implementations of the present invention provide mechanisms for writing a declarative model, detecting changes to a declarative model, and scheduling an appropriate model refinement process so that refined declarative model instructions can be translated.
Further implementations provide mechanisms for translating the refined model into instructions/commands that are ultimately executed. Accordingly, one will appreciate that these and other features can significantly ease and normalize management of a distributed application program server managing one or multiple different distributed application programs, potentially on several different platforms. In particular, the server administrator can easily configure a wide range of distributed application operations without necessarily needing to understand all the configuration particulars of the given run-time environments, and/or the specific implementation platforms of the given distributed application program.
In any event, and as previously mentioned, declarative models 153 include one or more sets of high-level instructions regarding operations of a particular distributed application program 107. These high-level instructions generally describe a particular intent for operation/behavior of one or more modules in the distributed application program, but do not necessarily describe steps required to implement the particular operations/behaviors. For example, a declarative model 153 can include such information as on what computer systems a particular module should run, as well as the characteristics of a computer system that should be allowed to run the particular module (e.g., processing speed, storage capacity, etc.).
Although the declarative model 153 could ultimately include such specific information as the Uniform Resource Identifier (URI) address of a particular endpoint, the initial creation of any declarative model (e.g., 153) will usually result in a document that includes more generalized information. Such generalized information might include a domain name where a module can be executed, the different permission sets that can be associated with execution of the module, whether or not certain components should connect at all, and so forth. For example, a declarative model 153 may describe the intent of having one web service connect to another web service.
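By way of example only, such generalized, intent-oriented information might be captured as plain data along the following lines. The field names used here (e.g., "run_on", "connect") are hypothetical and chosen purely for illustration.

```python
# A hypothetical declarative model expressed as plain data; all keys are
# invented for illustration.
order_service_model = {
    "module": "OrderService",
    "run_on": {
        "domain": "example-datacenter.local",  # where the module may execute
        "min_processor_ghz": 2.0,              # required machine characteristics
        "min_storage_gb": 100,
    },
    "permissions": ["read_catalog", "write_orders"],
    "connect": [
        # Intent only: one web service should connect to another; the
        # transport and exact endpoint are left to the model interpreter.
        {"from": "OrderService", "to": "PaymentService"},
    ],
}
```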
When ultimately interpreted and/or translated, these generalized intent instructions can result in very specific instructions/commands, depending on the platform or operating environment. For example, the declarative model 153 could include instructions so that, when interpreted, a web service deployed into one datacenter is configured to use a TCP transport if another web service is nearby. The instructions could also specify that the deployed web service alternatively use an Internet relay connection if the other web service is outside of the firewall (i.e., not nearby).
Although indicating a preference for connection of some sort, the declarative model (e.g., a “declarative application model”) (153) will typically leave the choice of connection protocol to a model interpreter. In particular, a declarative model creator (e.g., tools component 125) might indicate a preference for connections in the declarative model 153 generally, while the declarative model interpreter (e.g., executive component 115 and/or platform-specific driver 130) can be configured to select different communication transports depending on where specific modules are deployed. For example, the model interpreter (e.g., executive component 115 and/or platform-specific driver 130) may prepare more specific instructions to differentiate the connection between modules when on the same machine, in a cluster, or connected over the Internet.
Similarly, another declarative model (e.g., a “declarative policy model”) (153) may describe operational features based more on end use policies. For example, a declarative policy model used with a distributed financial application program may dictate that no more than 100 trade requests in a second may be sent over a connection to a brokerage firm. A policy model interpreter (e.g., executive component 115 and/or platform-specific driver 130), however, can be configured to choose an appropriate strategy, such as queuing excessive requests to implement the described intent.
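For illustration only, one such queuing strategy might resemble the following Python sketch, which assumes the declared limit of 100 requests per second and defers any excess requests. The class and method names are invented, and a production interpreter would likely also drain the queue on a timer rather than only on submission.

```python
import collections
import time

class ThrottledConnection:
    """Queues requests that would exceed a declared per-second limit."""

    def __init__(self, send_fn, max_per_second=100):
        self.send_fn = send_fn
        self.max_per_second = max_per_second
        self.sent_times = collections.deque()    # timestamps of recent sends
        self.pending = collections.deque()        # queued (excess) requests

    def submit(self, trade_request):
        self.pending.append(trade_request)
        self._drain()

    def _drain(self):
        now = time.monotonic()
        # Forget sends older than one second.
        while self.sent_times and now - self.sent_times[0] >= 1.0:
            self.sent_times.popleft()
        # Send queued requests while still under the declared limit.
        while self.pending and len(self.sent_times) < self.max_per_second:
            self.send_fn(self.pending.popleft())
            self.sent_times.append(time.monotonic())
```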
In either case, executive component 115 ultimately identifies, receives and refines the declarative models 153 (and/or changes thereto) in repository 120 so that they can be translated by the platform-specific driver 130. In general, “refining” a declarative model 153 includes adding or modifying any of the information contained in a declarative model so that the declarative model instructions are sufficiently complete for translation by platform-specific driver 130. Since the declarative models 153 can be written relatively loosely by a human user (i.e., containing generalized intent instructions or requests), there may be different degrees or extents to which an executive component will need to modify or supplement a declarative model.
Upon detecting any changes (whether new declarative models or updates thereto), executive component 115 then begins the process of progressive elaboration on any such identified declarative model (or modification). In general, progressive elaboration involves refining a particular declarative model 153 (i.e., adding or modifying data) until there are no ambiguities, and until details are sufficient for the platform-specific drivers 130 to consume/translate them. The executive component 115 performs progressive elaboration at least in part using refining component 119, which “refines” the declarative model 153 data.
In at least one implementation, executive component 115 implements this progressive elaboration or “refining” process as a workflow that uses a set of activities from a particular library (not shown). In one implementation, the executive component 115 also provides the library in advance, and specifically for the purposes of working on declarative models. Some example activities that might be used in this particular workflow can include “read model data,” “write model data,” “find driver,” “call driver,” or the like. The actions associated with these or other types of calls are described more fully below as implemented by the refining component 119 portion of executive component 115.
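A non-limiting sketch of such a workflow, using hypothetical Python functions named after the activities listed above, might look as follows. The data shapes and the "target_driver" tag are assumptions for illustration.

```python
# Hypothetical rendering of the progressive-elaboration workflow; the
# activity functions mirror the activity names described above.

def read_model_data(repository, model_id):
    return repository.models[model_id]

def write_model_data(repository, model_id, refined_model):
    repository.models[model_id] = refined_model

def find_driver(drivers, refined_model):
    # Select the platform-specific driver the refined model is tagged for.
    return drivers[refined_model["target_driver"]]

def call_driver(driver, refined_model, containers):
    commands = driver.translate(refined_model)
    for container in containers:
        container.execute(commands)

def progressively_elaborate(repository, drivers, containers, refiner, model_id):
    model = read_model_data(repository, model_id)
    refined = refiner.refine(model)            # add data until unambiguous
    write_model_data(repository, model_id, refined)
    driver = find_driver(drivers, refined)
    call_driver(driver, refined, containers)
```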
Specifically, in at least one implementation, the refining component 119 refines a declarative model 153 (or an update thereto). The refining component 119 typically refines a declarative model 153 by adding information based on knowledge of dependencies (and corresponding semantics) between elements in the declarative model 153 (e.g., one web service connected to another). The refining component 119 can also refine the declarative model 153 by adding some forms of contextual awareness, such as information about the available inventory of application containers 135 for deploying a distributed application program 107. In addition, the refining component 119 can be configured to fill in missing data regarding computer system assignments.
For example, refining component 119 might identify two different modules that will be used to implement a declarative model 153, where neither module has a requirement for specific computer system addresses or operating requirements. The refining component 119 might thus assign the distributed application program 107 modules to available computer systems arranged by appropriate distributed application program containers 135, and correspondingly record that machine information in the refined declarative model 153a (or a segment thereof). Along these lines, the refining component 119 can reason about the best way to fill in data in a refined declarative model 153. For example, as previously described, refining component 119 of executive component 115 may determine and decide which transport to use for an endpoint based on proximity of connection, or determine and decide how to allocate distributed application program modules based on factors appropriate for handling expected spikes in demand.
In additional or alternative implementations, the refining component 119 can compute dependent data in the declarative model 153. For example, the refining component 119 may compute dependent data based on an assignment of distributed application program modules to machines. Along these lines, the refining component 119 may also calculate URI addresses on the endpoints, and propagate the corresponding URI addresses from provider endpoints to consumer endpoints. In addition, the refining component 119 may evaluate constraints in the declarative model 153. For example, the refining component 119 can be configured to check to see if two distributed application program modules can actually be assigned to the same machine, and if not, the refining component 119 can refine the declarative model 153a to correct it.
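Purely by way of illustration, two of these refinement steps, propagating provider endpoint URIs to consumer endpoints and evaluating a co-location constraint, might be sketched as follows. The model keys and helper names are assumptions.

```python
# Hypothetical refinement helpers; model keys are invented for illustration.

def propagate_endpoint_uris(model):
    """Calculate provider URIs and copy them to the consumers that use them."""
    provider_uris = {
        ep["name"]: f"http://{ep['machine']}:{ep['port']}/{ep['name']}"
        for ep in model["provider_endpoints"]
    }
    for consumer in model["consumer_endpoints"]:
        consumer["target_uri"] = provider_uris[consumer["consumes"]]
    return model

def choose_other_machine(model, avoid):
    # Pick any available machine other than the one to avoid.
    return next(m for m in model["available_machines"] if m != avoid)

def check_colocation_constraints(model):
    """If two modules may not share a machine, reassign one of them."""
    for first, second in model.get("must_not_colocate", []):
        if model["assignments"][first] == model["assignments"][second]:
            model["assignments"][second] = choose_other_machine(
                model, avoid=model["assignments"][first]
            )
    return model
```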
After adding all appropriate data to (or otherwise modifying/refining) the given declarative model 153 (to create model 153a), the refining component 119 can finalize the refined declarative model 153a so that it can be translated by platform-specific drivers 130. To finalize or complete the refined declarative model 153a, refining component 119 might, for example, partition declarative model 153 into segments that can be targeted by any one or more platform-specific drivers 130. To this end, the refining component 119 might tag each declarative model 153a (or segment thereof) with its target driver (e.g., the address of platform-specific driver 130). Furthermore, the refining component 119 can verify that the declarative model 153a can actually be translated by the platform-specific drivers 130, and, if so, pass the refined declarative model 153a (or segment thereof) to the particular platform-specific driver 130 for translation.
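For example, and without limitation, this partition/tag/verify hand-off might be sketched as follows, where the segment layout, the driver registry, and the can_translate check are all assumptions for illustration.

```python
# Hypothetical finalization step: partition the refined model into per-driver
# segments, tag each segment with its target driver, and verify it can be
# translated before hand-off.

def finalize_refined_model(refined_model, drivers):
    segments = []
    for module in refined_model["modules"]:
        segment = dict(module)
        # Tag the segment with the address of its target platform driver.
        segment["target_driver"] = drivers[module["platform"]].address
        segments.append(segment)
    for segment in segments:
        driver = drivers[segment["platform"]]
        if not driver.can_translate(segment):   # verify before hand-off
            raise ValueError(f"no translation available for {segment['name']}")
    return segments
```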
Any actions performed by the translation component 131 will be tailored to the specific platform or operating environment. In particular, the platform-specific driver (e.g., via translation component 131) can translate the refined declarative models according to in-depth, platform-specific configuration knowledge of a given platform/operating environment corresponding to the one or more application containers 135 (e.g., the version of the operating system under which they run) and of the container implementation technologies. With respect to a MICROSOFT WINDOWS operating environment, for example, some container implementation technologies might include “IIS” (Internet Information Services), or a WINDOWS ACTIVATION SERVICE used to host a “WCF” (WINDOWS Communication Foundation) service module. (As previously mentioned, however, any specific reference to any WINDOWS or MICROSOFT components, modules, platforms, or programs is by way of example only.)
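A simplified, hypothetical translation step might resemble the following. The command tuples are invented placeholders and are not actual IIS or WINDOWS ACTIVATION SERVICE administration commands.

```python
# Hypothetical platform-specific translation; command names are placeholders.

class ExampleWindowsDriverTranslation:
    def translate(self, segment):
        commands = []
        for module in segment["modules"]:
            if module.get("host") == "web":
                # e.g., host the module as a web application in the container
                commands.append(("create_web_application",
                                 module["name"], module["machine"]))
            else:
                # e.g., host the module as an activated service module
                commands.append(("create_service_host",
                                 module["name"], module["machine"]))
        return commands
```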
As a result, the generalized or supplemented instructions placed into the declarative models by the tools component 125 and/or refining component 119 ultimately direct the operational reality of one or more distributed application programs 107 in one or more application containers 135. In particular, the one or more distributed application containers 135 execute the declarative models 153 by executing the instructions/commands 133 received from the platform-specific driver 130. To this end, the distributed application containers 135 might replace or update any prior modules that have been revised by a new declarative model 153. In addition, the distributed application containers 135 execute the most recent version of modules and/or components, as normally done, including those described in the new instructions/commands 133, and on any number of different computer systems.
In addition to the foregoing, the distributed application programs 107 can provide various operational information about execution and performance back through the implementation means 105. For example, implementations of the present invention provide for the distributed application program 107 to send one or more event streams 137 regarding various execution or performance indicators back through platform-specific driver 130. In one implementation, the distributed application program 107 may send out the event streams 137 on a continuous, ongoing basis, while, in other implementations, the distributed application program 107 sends the event streams on a scheduled basis (e.g., based on a scheduled request from driver 130). The platform-specific drivers 130, in turn, pass the one or more event streams 137 to analytics means 110 for analysis, tuning, and/or other appropriate modifications.
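By way of illustration only, an event stream record and the driver's pass-through to the analytics means might be sketched as follows; the field names and the ingest method are assumptions.

```python
import time

# Hypothetical event record and pass-through; names are illustrative only.

def make_event(module_name, indicator, value):
    return {
        "module": module_name,
        "indicator": indicator,      # e.g., "requests_per_second", "restarts"
        "value": value,
        "timestamp": time.time(),
    }

class DriverEventRelay:
    """Forwards container event streams onward for analysis and tuning."""
    def __init__(self, analytics_means):
        self.analytics_means = analytics_means

    def forward_events(self, event_stream):
        self.analytics_means.ingest(event_stream)
```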
In particular, and as will be understood more fully herein, the analytics means 110 aggregate, correlate, and otherwise filter the relevant data to identify interesting trends and behaviors of the various distributed application programs 107. The analytics means 110 can also modify corresponding declarative models 153 as appropriate for the identified trends. For example, the analytics means 110 may modify declarative models 153 to create a new or otherwise modified declarative model 153b that reflects a change in intent, such as to overcome a problem identified in event streams 137. In particular, the modified declarative model 153b might be configured so that a given module of a distributed application program can be redeployed on another machine if the currently assigned machine is rebooting too frequently.
The modified declarative model 153b is then passed back into repository 120. As previously mentioned, executive component 115 will identify the new declarative model 153b (or the modification to a prior declarative model 153) and begin the corresponding refining process. Specifically, executive component 115 will use refining component 119 to add any necessary data to modified declarative model 153b to create a refined, modified declarative model, as previously described. The newly refined, albeit modified, declarative model 153b is then passed to platform-specific driver 130, where it is translated and passed to the appropriate application containers 135 for processing.
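For illustration only, the reboot example above might be sketched as follows, with the restart threshold, model keys, and repository interface all assumed for purposes of the example.

```python
from collections import Counter

# Hypothetical analytics adjustment: if a module's machine restarts too often,
# record a changed placement intent and resubmit the modified model.

def adjust_model_for_reboots(events, model, repository, max_restarts=5):
    restarts = Counter()
    for event in events:
        if event["indicator"] == "restarts":
            restarts[event["module"]] += event["value"]

    modified = dict(model)
    for module, count in restarts.items():
        if count > max_restarts:
            # New intent: allow this module to be redeployed on another machine.
            modified.setdefault("redeploy", []).append(module)

    repository.submit(modified)   # the executive component refines it again
    return modified
```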
In addition to the foregoing, implementations of the present invention can also be described in terms of one or more flow charts of methods having a series of acts and/or steps for accomplishing a particular result.
The embodiments of the present invention may comprise a special purpose or general-purpose computer including various computer hardware, as discussed in greater detail below. Embodiments within the scope of the present invention also include computer-readable media for carrying or having computer-executable instructions or data structures stored thereon. Such computer-readable media can be any available media that can be accessed by a general purpose or special purpose computer.
By way of example, and not limitation, such computer-readable media can comprise RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to carry or store desired program code means in the form of computer-executable instructions or data structures and which can be accessed by a general purpose or special purpose computer. When information is transferred or provided over a network or another communications connection (either hardwired, wireless, or a combination of hardwired or wireless) to a computer, the computer properly views the connection as a computer-readable medium. Thus, any such connection is properly termed a computer-readable medium. Combinations of the above should also be included within the scope of computer-readable media.
Computer-executable instructions comprise, for example, instructions and data which cause a general purpose computer, special purpose computer, or special purpose processing device to perform a certain function or group of functions. Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are disclosed as example forms of implementing the claims.
The present invention may be embodied in other specific forms without departing from its spirit or essential characteristics. The described embodiments are to be considered in all respects only as illustrative and not restrictive. The scope of the invention is, therefore, indicated by the appended claims rather than by the foregoing description. All changes which come within the meaning and range of equivalency of the claims are to be embraced within their scope.