The embodiments relate to application performance monitoring and management.
Application performance management relates to technologies and systems for monitoring and managing the performance of applications. For example, application performance management is commonly used to monitor and manage transactions performed by an application running on a server on behalf of a client.
With the advent of new technologies, the complexity of an enterprise information technology (IT) environment has been increasing. Frequent hardware and software upgrades and changes in service demands add further uncertainty to business application performance. However, effectively gauging user satisfaction with business software applications and clearly communicating system and application performance results between business application users and IT professionals remain challenging tasks.
Unfortunately, typical application performance measurement data rarely provides a clear and simple picture of how well applications are performing. Reporting several different kinds of data, such as application response time, often fails to clarify how an application is performing. Accordingly, it would be desirable to provide an improved way to assess the performance of an application.
The invention is explained in further detail, and by way of example, with reference to the accompanying drawings.
Throughout the drawings, the same reference numerals indicate similar or corresponding features or functions. The drawings are included for illustrative purposes and are not intended to limit the scope of the invention.
The embodiments provide a way for assessing application performance and user satisfaction. In one embodiment, application performance is scored with an Operational Index (hereinafter “OPdex”). An OPdex is an index that quantifies user satisfaction for a range of one or more application performance metrics. The OPdex provides a way to quantify subjective user satisfaction between complete satisfaction and dissatisfaction.
In one embodiment, the user satisfaction is quantified on a scale, such as 0 to 1, 0 to 100, etc. To assist human and machine interaction, the OPdex may also be mapped continuously along its scale to a color code, such as from red to green or vice versa.
The OPdex gauges ranges of application performance metrics using what can be referred to as a hard threshold, Z, and a soft threshold, T. The hard threshold, Z, is a threshold above which values of the application metric result in unsatisfactory OPdex scores. The soft threshold, T, is a threshold above which values of the application metric result in OPdex scores that decline from satisfactory.
The embodiments can examine the factors that affect the selection of a hard threshold Z from a system capacity planning perspective. In addition, after a performance goal is achieved, the OPdex may indicate that further improvement would require a substantial commitment of resources that is not proportional to the gain in OPdex score. The analytical model helps the user determine whether it is cost effective to do so. The OPdex can also indicate when the relationship between the OPdex value and a required mean application response time is not linear.
In the embodiments, the OPdex employs various value functions to quantify user satisfaction for application performance metrics ranging between the soft and hard thresholds. Application performance metrics may include application response time, workload response time, transaction response time, processor utilization, memory utilization (such as the amount of free or used memory), disk activity (such as data reads and data writes), disk utilization, network traffic metrics, link utilization, and the like.
An OPdex may be calculated for various application performance metrics alone or in combination. For example, an individual OPdex may be reported for each application performance metric. Alternatively, OPdex scores for an individual metric may be combined with one or more other OPdex scores for other metrics and provided as an aggregated or weighted OPdex.
In one embodiment, users may enter weights for different OPdex scores for aggregation. For example, assume OPdex scores pi, with 0 ≤ pi ≤ 1. A weight wi, with wi > 0, i = 1, 2, . . . , n, may then be assigned to each OPdex. A single aggregated OPdex score, p, may then be calculated, for example, using a weighted sum: p = (w1p1 + w2p2 + . . . + wnpn)/(w1 + w2 + . . . + wn).
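By way of illustration only, the following short Python sketch computes such a weighted aggregation; the function name and the example scores and weights are illustrative and not part of any particular embodiment.

```python
def aggregate_opdex(scores, weights):
    """Combine per-metric OPdex scores (each in [0, 1]) into a single score
    using a normalized weighted sum."""
    if not scores or len(scores) != len(weights):
        raise ValueError("scores and weights must be non-empty and equal in length")
    if any(w <= 0 for w in weights):
        raise ValueError("weights must be positive")
    return sum(w * p for w, p in zip(weights, scores)) / sum(weights)

# Example: response time weighted twice as heavily as processor and memory scores.
print(aggregate_opdex([0.94, 0.85, 0.70], [2.0, 1.0, 1.0]))  # 0.8575
```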
For purposes of illustration, one embodiment of the OPdex is described with reference to application response time and user satisfaction levels for the application based on response time, r. Application response time may refer to the perceived time between a user (or software) request and the results for the request. Response time may also be known as round-trip time, reaction time, delay, or latency. In one embodiment, the application response time is from the users' perspective, which includes wait times and service times of software processes/threads at different hardware and software components supporting the application. The OPdex thus provides a single metric to reflect the application responsiveness and user experience across applications. As noted above, an OPdex can be used to quantify user satisfaction relative to performance metrics other than application response time.
In one embodiment, the OPdex quantifies user satisfaction based on application response time. The application response time, r, can be measured against a user's satisfaction level and the satisfaction level can be defined by service level objectives (SLOs). More specifically, users can define the soft threshold, T, below which users are satisfied and beyond which the users' satisfaction level decreases. User service level objectives can also be specified by an additional (optional) hard threshold, Z, beyond which the application response is too long (too late) and the application has zero value to users.
Accordingly, the measured application response time data can be divided into three mutually exclusive sets:
{x|x≦T}: a set of response times less than or equal to the soft threshold T. Let n1=|{x|x≦T}| be the cardinality of the set.
{y|T<y≦Z}: a set of response times greater than the soft threshold T and less than or equal to the hard threshold Z. Let n2=|{y|T<y≦Z}| be the cardinality of the set.
{z|Z<z}: a set of response times greater than the hard threshold Z. Let n3=|{z|Z<z}| be the cardinality of the set.
In this embodiment, the OPdex assigns a value of “1” (i.e., a full value) to the response times less than or equal to the soft threshold, T, and assigns a value of “0” (i.e., zero value) to the response times greater than the hard threshold, Z. For response times between the soft and hard thresholds, the OPdex may employ value functions to quantify the effect of application response time on user satisfaction.
Accordingly, the OPdex can be expressed as OPdex(T, Z) = [n1·1 + Σ f(yi) + n3·0]/(n1 + n2 + n3), where the sum runs over the n2 response times yi between the thresholds; n1, n2, and n3 are the numbers of response times that are less than or equal to the soft threshold, T, between the soft and hard thresholds, and greater than the hard threshold, Z, respectively; and f(.) is a value function whose value is less than 1.
As noted above, the OPdex contribution is equal to “1” for response times that are less than or equal to the soft threshold and equal to “0” (zero) for response times that are greater than the hard threshold. The contribution is equal to a value function, f(yi), for a response time yi that is between the soft and hard thresholds.
In one embodiment, f(yi) is defined as f(yi) = C·((Z − yi)/(Z − T))^n, where C (0 ≤ C ≤ 1) and n are constants that users can define for the desired sensitivity of the value function. With this particular embodiment, the OPdex can be defined as OPdex(T, Z, n, C) = [n1 + Σ C·((Z − yi)/(Z − T))^n]/(n1 + n2 + n3), with the sum again taken over the n2 response times yi between the thresholds.
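For illustration, a short Python sketch of this sample-based computation is given below; it assumes the value-function form given above, and the sample values and thresholds are illustrative.

```python
def value_function(y, T, Z, C=1.0, n=1):
    """Partial credit for a response time y with T < y <= Z,
    using f(y) = C * ((Z - y) / (Z - T)) ** n."""
    return C * ((Z - y) / (Z - T)) ** n

def opdex(response_times, T, Z, C=1.0, n=1):
    """Sample-based OPdex: full credit at or below the soft threshold T,
    partial credit between T and the hard threshold Z, zero credit above Z."""
    if not response_times:
        raise ValueError("at least one sample is required")
    total = 0.0
    for r in response_times:
        if r <= T:
            total += 1.0
        elif r <= Z:
            total += value_function(r, T, Z, C, n)
        # response times above Z contribute zero
    return total / len(response_times)

# Example with a 1-second soft threshold and a 4-second hard threshold.
samples = [0.4, 0.8, 1.5, 2.0, 6.0]
print(opdex(samples, T=1.0, Z=4.0, C=1.0, n=1))  # approximately 0.7
```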
The foregoing merely illustrates the principles of the invention. It will thus be appreciated that those skilled in the art will be able to devise various arrangements which, although not explicitly described or shown herein, embody the principles of the invention and are thus within its spirit and scope. Other system configuration and optimization features will be evident to one of ordinary skill in the art in view of this disclosure, and are included within the scope of the following claims.
In the following description, for purposes of explanation rather than limitation, specific details are set forth such as the particular architecture, interfaces, techniques, etc., in order to provide an understanding of the concepts of the invention. However, it will be apparent to those skilled in the art that the present invention may be practiced in other embodiments, which depart from these specific details.
Certain embodiments of the inventions will now be described. These embodiments are presented by way of example only, and are not intended to limit the scope of the inventions. Indeed, the novel methods and systems described herein may be embodied in a variety of other forms. Furthermore, various omissions, substitutions and changes in the form of the methods and systems described herein may be made without departing from the spirit of the inventions. For example, for purposes of simplicity and clarity, detailed descriptions of well-known components, such as circuits, are omitted so as not to obscure the description of the present invention with unnecessary detail. To illustrate some of the embodiments, reference will now be made to the figures.
Clients 102 refer to any devices requesting and accessing services of applications provided by system 100. Clients 102 may be implemented using known hardware and software. For example, clients 102 may be implemented on a personal computer, a laptop computer, a tablet computer, a smart phone, and the like. Such devices are well known to those skilled in the art and may be employed in the embodiments.
The clients 102 may access various applications based on client software running or installed on the clients 102. The clients 102 may execute a thick client, a thin client, or a hybrid client. For example, the clients 102 may access applications via a thin client, such as a browser application like Internet Explorer, Firefox, etc. Programming for these thin clients may include, for example, JavaScript/AJAX, JSP, ASP, PHP, Flash, Silverlight, and others. Such browsers and programming code are known to those skilled in the art.
Alternatively, the clients 102 may execute a thick client, such as a stand-alone application, installed on the clients 102. Programming for thick clients may be based on the .NET framework, Java, Visual Studio, etc.
Web server 104 provides content for the applications of system 100 over a network, such as network 124. Web server 104 may be implemented using known hardware and software to deliver application content. For example, web server 104 may deliver content via HTML pages and employ various IP protocols, such as HTTP.
Application servers 106 provide a hardware and software environment on which the applications of system 100 may execute. In some embodiments, application servers 106 may be implemented as Java Application Servers, Windows Servers implementing a .NET framework, or LINUX, UNIX, WebSphere, etc., running on known hardware platforms. Application servers 106 may be implemented on the same hardware platform as the web server 104 or, as shown in the accompanying drawings, on separate hardware.
In the embodiments, application servers 106 may provide various applications, such as mail, word processors, spreadsheets, point-of-sale, multimedia, etc. Application servers 106 may perform various transactions related to requests by the clients 102. In addition, application servers 106 may interface with the database server 108 and database 110 on behalf of clients 102, implement business logic for the applications, and perform other functions known to those skilled in the art.
Database server 108 provides database services and access to database 110 for transactions and queries requested by clients 102. Database server 108 may be implemented using known hardware and software. For example, database server 108 may be implemented based on Oracle, DB2, Ingres, SQL Server, MySQL, etc. software running on a server.
Database 110 represents the storage infrastructure for data and information requested by clients 102. Database 110 may be implemented using known hardware and software. For example, database 110 may be implemented as a relational database based on known database management systems, such as SQL, MySQL, etc. Database 110 may also comprise other types of databases, such as, object oriented databases, XML databases, and so forth.
Application performance management system 112 represents the hardware and software used for monitoring and managing the applications provided by system 100. As shown, application performance management system 112 may comprise a collector 114, a monitoring server 116, a monitoring database 118, a monitoring client 120, and agents 122. These components will now be further described.
Collector 114 collects application performance information from the components of system 100. For example, collector 114 may receive information from clients 102, web server 104, application servers 106, database server 108, and network 124. The application performance information may comprise a variety of information, such as trace files, system logs, etc. Collector 114 may be implemented using known hardware and software. For example, collector 114 may be implemented as software running on a general-purpose server. Alternatively, collector 114 may be implemented as an appliance or virtual machine running on a server.
Monitoring server 116 hosts the application performance management system. Monitoring server 116 may be implemented using known hardware and software. Monitoring server 116 may be implemented as software running on a general purpose server. Alternatively, monitoring server 116 may be implemented as an appliance or virtual machine running on a server.
Monitoring database 118 provides a storage infrastructure for storing the application performance information processed by the monitoring server 116. Monitoring database 118 may be implemented using known hardware and software.
Monitoring client 120 serves as an interface for accessing monitoring server 116. For example, monitoring client 120 may be implemented as a personal computer running an application or web browser that accesses the monitoring server 116. As shown, in one embodiment, monitoring client 120 is configured to provide information indicating an OPdex score for the one or more applications running on system 100.
For example, Table 1 below illustrates a set of OPdex scores for a plurality of applications that may be running on system 100. In one embodiment, Table 1 may be displayed in the form of a web page or other type of interactive display.
As shown in the example of Table 1, the applications 1-5 may be assigned individual OPdex scores that range from 0 to 1. In addition, the OPdex scores for the applications may be combined, for example, averaged, to provide a combined or aggregate OPdex score.
The monitoring server 116 may determine the most appropriate value function for a particular metric for a particular application. The monitoring server 116 may determine a value function based on analyzing the various application metrics and the corresponding user satisfaction reported to the monitoring server 116. This process may be performed periodically, upon request, or as part of some other automated process.
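One illustrative way such a determination could be made is a simple grid search over candidate (n, C) pairs of the value-function form given above; the embodiments do not prescribe this particular method, and the observation data in the sketch below are hypothetical.

```python
def fit_value_function(observations, T, Z, candidate_n=(0, 1, 2, 3, 4), steps=100):
    """Choose (n, C) minimizing the squared error between the assumed value
    function C * ((Z - y) / (Z - T)) ** n and observed satisfaction levels
    for response times y between T and Z."""
    best = None
    for n in candidate_n:
        for i in range(1, steps + 1):
            C = i / steps
            err = sum((C * ((Z - y) / (Z - T)) ** n - s) ** 2
                      for y, s in observations)
            if best is None or err < best[0]:
                best = (err, n, C)
    return best[1], best[2]

# Hypothetical reported satisfaction levels for response times between T=1 and Z=4.
observations = [(1.5, 0.80), (2.0, 0.65), (3.0, 0.35), (3.8, 0.07)]
print(fit_value_function(observations, T=1.0, Z=4.0))
```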
In one embodiment, a user at monitoring client 120 may navigate through a hierarchy of OPdex scores into successive levels of detail. Based on the information provided at these levels of detail, a user or administrator may thus identify which of the applications is suffering from poor performance. For example, as shown above, application #4 has an OPdex score of 0.2, which may indicate a performance issue with this application. Accordingly, a user or administrator may perform various troubleshooting processes on application #4.
In one embodiment, one or more OPdex scores may be used to troubleshoot an application performance problem. In particular, the collector 114 may collect information about a variety of metrics, such as latency, processor utilization, memory utilization, etc. Monitoring server 116 may then calculate respective OPdex scores for these metrics on the same scale, such as 0-1. Alternatively, monitoring server 116 may be configured to calculate OPdex scores for metrics using different scales, such as 0-1, 0-10, 0-100, and the like. In addition, the OPdex scores for the metrics may be combined into a single score or provided as a set of scores in a report via monitoring client 120.
In one embodiment, as shown, the OPdex scores for the various metrics may be ranked according to a severity or significance of their impact on the OPdex score. A user may thus navigate into successive levels of detail to investigate the underlying cause of poor application performance.
Agents 122 serve as instrumentation for the application performance management system. As shown, the agents 122 may be distributed and running on the various components of system 100. Agents 122 may be implemented as software running on the components or as a hardware device coupled to a component. For example, agents 122 may implement monitoring instrumentation for Java and .NET framework applications. In one embodiment, the agents 122 implement, among other things, tracing of method calls for various transactions. In particular, in some embodiments, agents 122 may interface with known tracing configurations provided by Java and the .NET framework to enable tracing continuously and to modulate the level of detail of the tracing. The agents 122 may also comprise various monitoring probes for other application metrics and data. For example, agents 122 may record network traffic, for example, to permit packet tracing. Any form of application information and performance data may be collected by system 100 and agents 122.
Network 124 serves as a communications infrastructure for the system 100. Network 124 may comprise various known network elements, such as routers, firewalls, hubs, switches, etc. In the embodiments, network 124 may support various communications protocols, such as TCP/IP. Network 124 may refer to any scale of network, such as a local area network, a metropolitan area network, a wide area network, the Internet, etc.
A user or monitoring server 116 may use the parameters n and C to set targeted performance values at points after the soft threshold is exceeded and before the hard threshold is reached. For example, a user can specify that when the metric value exceeds the soft threshold by one-third (1/3) of the distance between the soft and hard thresholds (Z−T), the performance value drops to one-fourth (1/4). In addition, the user can also make another performance value specification: e.g., when the metric value exceeds the soft threshold by two-thirds (2/3) of the distance between the soft and hard thresholds (Z−T), the performance value drops to one-thirty-second (1/32). In this example, the following simultaneous equations result: C·(1 − 1/3)^n = 1/4 and C·(1 − 2/3)^n = 1/32.
The monitoring server 116 may be configured to solve the above simultaneous equations and determine the values for n and C of OPdex(T,Z,n,C). In this particular example, the solution is n=3 and C=27/32.
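For illustration, two such specifications can be solved in closed form under the value-function form C·(1 − d)^n of the fractional distance d past the soft threshold; the following Python sketch reproduces the solution n = 3 and C = 27/32 for the example above.

```python
import math

def solve_n_and_C(d1, v1, d2, v2):
    """Solve C * (1 - d1) ** n = v1 and C * (1 - d2) ** n = v2 for n and C,
    where d1 and d2 are fractions of the distance (Z - T) past the soft threshold."""
    n = math.log(v1 / v2) / math.log((1 - d1) / (1 - d2))
    C = v1 / (1 - d1) ** n
    return n, C

# The example from the text: value 1/4 at one third of (Z - T), 1/32 at two thirds.
print(solve_n_and_C(1 / 3, 1 / 4, 2 / 3, 1 / 32))  # approximately (3.0, 0.84375), i.e., n = 3, C = 27/32
```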
As another example, a user may specify that the performance value of the OPdex drops to one-half (½) for each of two specified response times between the soft and hard thresholds, i.e., between T and Z.
These parameters thus result in the following simultaneous equations: C·(1 − d1)^n = ½ and C·(1 − d2)^n = ½, where d1 and d2 denote the two specified fractional distances past the soft threshold.
By solving the above simultaneous equations, the monitoring server 116 can determine the values for n and C of OPdex(T,Z,n,C). In this example, the solution is n=0 and C=½. Therefore, the OPdex can be expressed as OPdex(T, Z) = (n1 + ½·n2)/(n1 + n2 + n3).
The OPdex can have other forms of representation depending on the user's focus and interest in performance modeling. For example, under the exponential response-time model described below and the constant value function (n = 0, C = ½) of the preceding example, the mean response time, r̄, can be expressed in terms of the soft threshold and the OPdex value as r̄ = T/ln(1/X), X > 0, where X is the positive real root of 2(1 − OPdex) = X + X^m, assuming the hard threshold Z = mT, where m is an integer.
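For illustration, the root X and the corresponding mean response time can be found numerically, for example by bisection, as in the following Python sketch; it assumes the relationships just stated, and the example inputs are illustrative.

```python
import math

def mean_response_time(opdex_value, T, m, tol=1e-12):
    """Find the root X in (0, 1) of 2 * (1 - opdex_value) = X + X ** m by bisection,
    then return the mean response time T / ln(1/X), assuming X = exp(-T / mean)
    and a hard threshold Z = m * T."""
    target = 2.0 * (1.0 - opdex_value)
    lo, hi = 1e-15, 1.0 - 1e-15
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if mid + mid ** m < target:
            lo = mid
        else:
            hi = mid
    X = (lo + hi) / 2.0
    return T / math.log(1.0 / X)

# Example: soft threshold of 10 seconds, Z = 4 * T, target OPdex of 0.94.
print(mean_response_time(0.94, T=10.0, m=4))  # roughly 4.71 seconds
```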
As another example, the hard threshold Z can be expressed as Z = −r̄·ln[2(1 − OPdex) − e^(−T/r̄)], under the condition (1 − OPdex) < e^(−T/r̄) < 2(1 − OPdex). That is, the mean application response time r̄ must lie between a lower bound of T/ln[1/(1 − OPdex)] and an upper bound of T/ln[1/(2(1 − OPdex))].
The upper bound of the mean response time is a numerical value beyond which the desired OPdex goal cannot be achieved regardless of how large the hard threshold Z is defined. The lower bound for the mean response time is also a numerical value. If the mean response time is smaller than the lower bound, the hard threshold Z could be defined to be smaller than the soft threshold T, which is not possible in a real application or system. Therefore, given an application response time soft threshold T and an OPdex value, the embodiments may indicate how to provision the system so that the average application response time is plausible. For example, it can be seen that when the mean response time approaches the upper bound, the required hard threshold grows without limit, i.e., Z → ∞.
As shown above, when the mean response time approaches the upper bound for a given soft threshold T and OPdex value, the hard threshold Z becomes unbounded. In other words, after a certain point, the relationship between the mean response time and the OPdex value becomes insensitive to the value of the hard threshold. Thus, it does not make practical sense to select a large hard threshold to achieve a desired OPdex value.
Similarly, it does not make sense to make the mean response time smaller than the lower bound. In that case, the hard threshold Z will be the same as or smaller than the soft threshold T:
when the mean response time approaches the lower bound, the hard threshold approaches the soft threshold, i.e., Z → T−.
Accordingly, a lower average response time means a faster system with higher costs. The analytical results presented by the embodiments could thus help IT professionals achieve a desired application performance goal without over-provisioning.
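For illustration, the bounds and the hard-threshold expression above can be evaluated as in the following Python sketch, which assumes the constant value function (n = 0, C = ½) and exponentially distributed response times as described below; the example inputs are illustrative.

```python
import math

def mean_rt_bounds(opdex_value, T):
    """Bounds on the mean response time between which the given OPdex value (> 0.5)
    can be met exactly with some hard threshold Z > T, assuming n = 0 and C = 1/2."""
    lower = T / math.log(1.0 / (1.0 - opdex_value))
    upper = T / math.log(1.0 / (2.0 * (1.0 - opdex_value)))
    return lower, upper

def required_hard_threshold(opdex_value, T, mean_rt):
    """Hard threshold Z needed to reach the OPdex value at a mean response time
    lying between the bounds above."""
    return -mean_rt * math.log(2.0 * (1.0 - opdex_value) - math.exp(-T / mean_rt))

print(mean_rt_bounds(0.94, T=10.0))              # roughly (3.554, 4.716)
print(required_hard_threshold(0.94, 10.0, 4.0))  # Z needed for a 4-second mean, about 13.1
```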
In this embodiment, the OPdex model can be used for comparing different value functions used by OPdex and can assist in troubleshooting a system or application under test. For example, various troubleshooting functions can be performed based on the relationship between the OPdex value and application/system performance metrics, such as application response times, soft and hard thresholds, service rate, and utilization of the system that supports the applications.
In one embodiment, the system or application under test is assumed to have an exponential service time distribution, and the transaction inter-arrival times are also assumed to be exponentially distributed. Thus, the application response time is a continuous random variable and can be measured. In steady state, the probability density function (pdf) of the response time can be modeled as:
μ(1−ρ)e^(−μ(1−ρ)y), where μ is the system service rate (i.e., 1/μ is the mean service time), ρ=λ/μ is the system utilization, and λ is the throughput of the system. For an unsaturated system, the throughput is equal to the arrival rate.
The OPdex is assigned a value of “1” (i.e., the maximum value) for response times that are less than or equal to the soft threshold T. Therefore, the expected value with a weight of 1 can be computed as 1(r ≤ T) = ∫_0^T 1 × μ(1−ρ)e^(−μ(1−ρ)y) dy = 1 − e^(−μ(1−ρ)T).
Similarly, the expected value with a weight f(y) for the response times greater than the soft threshold T and less than or equal to the hard threshold Z can be computed as f(T < r ≤ Z) = ∫_T^Z f(y) × μ(1−ρ)e^(−μ(1−ρ)y) dy, where f(y) = C·((Z − y)/(Z − T))^n is the value function for the response times between the soft and hard thresholds, i.e., T < y ≤ Z. Since 0 ≤ C ≤ 1 and T < y ≤ Z, 0 ≤ f(y) < 1.
Finally, by the OPdex definition, the application has zero value when the response time exceeds the hard threshold Z, which results in the following:
0(r > Z) = ∫_Z^∞ 0 × μ(1−ρ)e^(−μ(1−ρ)y) dy = 0. Therefore, by putting these three sets of expected values together, the OPdex can be modeled as OPdex(T, Z, n, C) = 1 − e^(−T/r̄) + ∫_T^Z C·((Z − y)/(Z − T))^n (1/r̄)e^(−y/r̄) dy, where r̄ = 1/(μ(1−ρ)) is the average response time of a system with exponential distributions for service and inter-arrival times.
Thus, the above OPdex model expression establishes the relationships between the OPdex value and mean response time, application throughput, and system utilization. It also provides a mechanism for examining the quantitative influence of soft and hard thresholds on the OPdex value.
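For illustration, the model can be evaluated numerically as in the following Python sketch; the value-function form and parameterization follow the assumptions stated above, and the example inputs are illustrative.

```python
import math

def model_opdex(T, Z, C, n, mean_rt, steps=20000):
    """Expected OPdex under an exponential response-time distribution with mean
    mean_rt = 1 / (mu * (1 - rho)); the middle term is integrated numerically."""
    a = 1.0 / mean_rt                       # a = mu * (1 - rho)
    full = 1.0 - math.exp(-a * T)           # weight-1 contribution for r <= T
    partial = 0.0                           # value-function contribution for T < r <= Z
    h = (Z - T) / steps
    for i in range(steps):
        y = T + (i + 0.5) * h               # midpoint rule
        partial += C * ((Z - y) / (Z - T)) ** n * a * math.exp(-a * y) * h
    return full + partial                   # response times above Z contribute zero

# Constant value function (n = 0, C = 1/2) with T = 1 s, Z = 4 s, 0.8 s mean response time.
print(model_opdex(T=1.0, Z=4.0, C=0.5, n=0, mean_rt=0.8))  # close to 0.853
```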
As a special case of the model, for n = 1 and C = 1, the model becomes OPdex(T, Z) = 1 − r̄·(e^(−T/r̄) − e^(−Z/r̄))/(Z − T). This model has a value function that decreases linearly from 1 at the soft threshold to 0 at the hard threshold, as shown in the accompanying drawings.
As another special case of the model, for n = 0, the value function becomes a constant, C, for response times between T and Z, and the following representation for the OPdex results: OPdex(T, Z, C) = 1 − (1 − C)e^(−T/r̄) − C·e^(−Z/r̄).
More specifically, if C=½, the model becomes OPdex(T, Z) = 1 − ½e^(−T/r̄) − ½e^(−Z/r̄).
The soft and hard thresholds for response time depend on user expectations, the nature of the applications, and service level objectives. For end-user-oriented transaction processing applications, for instance, it is commonly accepted that 0.1 seconds is about the limit for the user to feel that the system is reacting instantaneously; 1 second is about the limit for the user's flow of thought to stay uninterrupted, even though the user will notice the delay; and 10 seconds is about the limit for keeping the user's attention focused on the dialogue. For longer delays, users will want to perform other tasks while waiting for the computer to finish.
In this case, a user or monitoring server 116 can select T=1 second and Z=4 seconds or Z=10 seconds. In order to achieve a desired OPdex score, the system is assumed to be provisioned with enough computing power to produce a mean response time that is compatible with the soft threshold. For example, if T=1 second is the soft threshold beyond which the user's satisfaction starts to drop, it is a mismatch to have a system that produces an average response time of 1 minute, i.e., a mean response time of 60 seconds, far above the soft threshold.
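To make the mismatch concrete, the following Python sketch evaluates the modeled OPdex (again assuming the constant value function with C = ½) for several mean response times against T = 1 second and Z = 4 seconds; a 60-second mean yields an OPdex near zero.

```python
import math

def opdex_constant_half(T, Z, mean_rt):
    """Modeled OPdex with the constant value function C = 1/2 and an
    exponentially distributed response time with the given mean."""
    return 1.0 - 0.5 * math.exp(-T / mean_rt) - 0.5 * math.exp(-Z / mean_rt)

# Soft threshold of 1 second and hard threshold of 4 seconds, as in the text.
for mean_rt in (0.5, 1.0, 2.0, 60.0):
    print(mean_rt, round(opdex_constant_half(1.0, 4.0, mean_rt), 3))
```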
In phase 204, the collector 114 may collect application data, such as response times, from the system under test, as illustrated in the accompanying drawings.
In phase 206, the monitoring server 116 may compute the OPdex based on the response times and user input. The monitoring server 116 may store these OPdex values, for example, in monitoring database 118 for future reference, display, or for availability to other software components or tools.
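For illustration only, the following Python sketch shows one way phase 206 could persist computed OPdex values; SQLite is used purely as a stand-in for monitoring database 118, which the embodiments do not limit to any particular database technology, and the names are illustrative.

```python
import sqlite3
import time

def store_opdex(db_path, application, score):
    """Persist a computed OPdex score so it can later be displayed or consumed
    by other tools; SQLite stands in for monitoring database 118."""
    conn = sqlite3.connect(db_path)
    conn.execute("CREATE TABLE IF NOT EXISTS opdex_scores "
                 "(application TEXT, score REAL, recorded_at REAL)")
    conn.execute("INSERT INTO opdex_scores VALUES (?, ?, ?)",
                 (application, score, time.time()))
    conn.commit()
    conn.close()

store_opdex("monitoring.db", "application_1", 0.94)
```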
The following three examples illustrate how the above OPdex analytical representations can be used in practice.
In a first example, it is assumed that the application response time soft threshold, T, is 10 seconds. To obtain an OPdex value/score of 0.94, the average response time, r̄, must fall between the lower and upper bounds described above.
That is, 3.554404602 < r̄ < 4.7164 (approximately), in seconds.
If the average response time is within the above range, then the hard threshold Z can be defined using the expression given above, Z = −r̄·ln[2(1 − OPdex) − e^(−T/r̄)].
Of note, the hard threshold value Z is very sensitive to the required mean response time values and this implies that the Z value beyond certain bounds makes neither practical nor theoretical sense.
Table 3 shows that, for a given OPdex value of 0.94, as the mean response time r̄ increases toward its upper bound, the required hard threshold Z increases rapidly.
As another example, for a lower OPdex value of 0.85, it can again be assumed that the application target time, T, is 10 seconds. To obtain an OPdex value of 0.85, the average response time, r̄, must again fall between the lower and upper bounds described above.
That is, 5.271147887 < r̄ < 8.3058 (approximately), in seconds.
If the average response time is within the above range, the hard threshold, Z, can again be defined using the expression given above.
Accordingly, Table 4 shows that, for a given OPdex value of 0.85, as the mean response time r̄ increases toward its upper bound, the required hard threshold Z increases rapidly.
Both Tables 3 and 4 also show that, to achieve OPdex values of 0.94 or 0.85, the mean response time has to be smaller than the soft threshold (T=10), regardless of the hard threshold Z value. Therefore, if the mean response time is too large, i.e., greater than the soft threshold T, picking a large hard threshold will not help achieve a desired performance goal.
Comparing the mean response time values required for OPdex values of 0.94 and 0.85 shows that the lower OPdex target can be met with substantially larger mean response times.
In this example, it is again assumed that the application response time soft threshold, T, is 10 seconds. To obtain an OPdex value of 0.70, Table 5 shows that as the mean response time r̄ increases toward its upper bound, the required hard threshold Z increases rapidly.
Given an OPdex value of 0.70 and the response time soft threshold T=10, the table shows the maximum mean response time values, r̄, for various hard threshold values Z.
The three examples show that, as the OPdex value is lowered from 0.94 to 0.70, the demand for a low mean response time is reduced significantly. Capacity planning tools of the embodiments can thus help choose a system that delivers the required mean response time for a predefined OPdex score/value. The above three examples all assume, for purposes of illustration, that the hard threshold is four times as large as the soft threshold, Z = 4T. Other ratios or relationships may be used in the embodiments.
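For illustration, the following Python sketch evaluates the mean response time bounds for the three example OPdex targets, assuming the bound formulas and constant value function (C = ½) discussed above; the computed lower bounds are close to the values quoted in the examples.

```python
import math

T = 10.0
for target in (0.94, 0.85, 0.70):
    lower = T / math.log(1.0 / (1.0 - target))          # below this mean, an exact-target Z would fall at or below T
    upper = T / math.log(1.0 / (2.0 * (1.0 - target)))  # above this mean, the target cannot be met for any finite Z
    print(target, round(lower, 4), round(upper, 4))
# Lower bounds of roughly 3.5544, 5.2711, and 8.3058 seconds, respectively.
```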
Various examples of the OPdex and its uses will now be described with reference to the accompanying drawings.
In phase 212, the monitoring server 116 may then calculate respective OPdex scores for these metrics on the same scale, such as 0-1, or on different scales for each metric, such as 0-1, 0-10, 0-100, and the like.
In phase 214, the monitoring client 120 may provide the OPdex scores for the metrics. As shown below in Table 2, the OPdex scores for the metrics may be combined into a single score or provided as a set of scores in a report via monitoring client 120. In addition, the OPdex scores for the various metrics may be ranked according to a severity or significance of their impact on the OPdex score.
Table 2 below shows an exemplary set of OPdex scores for different metrics. As shown below, Table 2 indicates a combined OPdex which is based on respective OPdex values for response time, processor utilization, memory, and link utilization. In the example shown, the OPdex values range on a common scale of 0 to 1 and the combined OPdex is calculated as an average of these OPdex scores. In other embodiments, the combined OPdex score may be calculated in various ways to accommodate any number of sub-OPdex scores, such as a weighted average, a weighted sum, etc., regardless of whether each sub-OPdex score varies on the same range.
Table 2 also illustrates that sub-OPdex scores that are part of a combined OPdex may be ranked according to their respective severity or influence on the combined OPdex. The extent of the severity may be manually or automatically determined based on the history of metric data stored in monitoring database 118, for example, by monitoring server 116. For example, application response time and memory may have a significant impact or severity on the combined OPdex. A user may thus navigate into successive levels of detail to investigate the underlying cause of poor application performance. For example, as shown above, the memory metric has an OPdex score of 0.985, which may indicate a performance issue with this metric. Accordingly, a user or administrator may perform various troubleshooting processes that focus on this metric, such as, querying monitoring database 118.
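By way of illustration, the following Python sketch combines per-metric OPdex scores into a single score and ranks the metrics by severity, i.e., by how far each falls below the full score of 1; the metric names, weights, and scores are hypothetical and do not reproduce Table 2.

```python
def opdex_report(metric_scores, weights=None):
    """Combine per-metric OPdex scores (each on a 0-1 scale) into a single score
    and rank the metrics by severity, i.e., by how far each falls below 1."""
    if weights is None:
        weights = {name: 1.0 for name in metric_scores}
    combined = (sum(weights[name] * score for name, score in metric_scores.items())
                / sum(weights[name] for name in metric_scores))
    ranked = sorted(metric_scores.items(), key=lambda item: item[1])  # worst first
    return combined, ranked

scores = {"response_time": 0.62, "processor": 0.91, "memory": 0.55, "link": 0.97}
combined, ranked = opdex_report(scores)
print(round(combined, 4), ranked)
```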
Of note, the OPdex representation with a quadratic value function is illustrated in the accompanying drawings.
In other embodiments, the OPdex can be used to examine the sensitivity of the soft and hard thresholds under different value functions, including, as a special case, the value function with C=½.
As can be seen, for a high OPdex value, i.e., an OPdex value close to 1, any further increase in the OPdex value may require a substantial reduction in the mean response time.
The features and attributes of the specific embodiments disclosed above may be combined in different ways to form additional embodiments, all of which fall within the scope of the present disclosure. Although the present disclosure provides certain embodiments and applications, other embodiments that are apparent to those of ordinary skill in the art, including embodiments that do not provide all of the features and advantages set forth herein, are also within the scope of this disclosure. Accordingly, the scope of the present disclosure is intended to be defined only by reference to the appended claims.
This application claims the benefit of U.S. Provisional Patent Application 61/474,488, entitled “Assessing Application Performance with an Operational Index,” filed Apr. 12, 2011, which is expressly incorporated by reference herein in its entirety.