Software modification by group to minimize breakage

Information

  • Patent Grant
  • 8352930
  • Patent Number
    8,352,930
  • Date Filed
    Monday, April 24, 2006
  • Date Issued
    Tuesday, January 8, 2013
Abstract
A method is employed to group computers to facilitate application of a software modification to the computers. The method includes identifying a global set of computers to which it is desired to apply the software modification. Based on characteristics of software configurations of the computers of the identified global set, the computers of the identified global set are grouped into a plurality of clusters. Grouping the computers into a plurality of clusters includes processing syntactic information about the computers to identify the plurality of clusters. The software modification is then applied to the computers of the clusters, with an adjustment for each cluster in an attempt to avoid software breakage of the computers of that cluster.
Description
CROSS REFERENCE TO RELATED APPLICATIONS

This application is related to the following, all of which are incorporated herein by reference in their entirety:

  • co-pending U.S. patent application Ser. No. 10/651,591, entitled “Method And System For Containment of Networked Application Client Software By Explicit Human Input” and filed on Aug. 29, 2003;
  • U.S. patent application Ser. No. 10/651,588, entitled “Damage Containment By Translation” and filed on Aug. 29, 2003 (issued Dec. 9, 2008 as U.S. Pat. No. 7,464,408);
  • U.S. patent application Ser. No. 10/806,578, entitled “Containment Of Network Communication” and filed on Mar. 22, 2004 (issued Aug. 24, 2010 as U.S. Pat. No. 7,783,735);
  • U.S. patent application Ser. No. 10/935,772, entitled “Solidifying the Executable Software Set of a Computer” and filed on Sep. 7, 2004 (issued Jan. 18, 2011 as U.S. Pat. No. 7,873,955);
  • U.S. patent application Ser. No. 10/739,230, entitled “Method And System For Containment Of Usage Of Language Interfaces” and filed on Dec. 17, 2003 (issued Nov. 23, 2010 as U.S. Pat. No. 7,840,968);
  • U.S. patent application Ser. No. 11/060,683, entitled “Distribution and Installation of Solidified Software on a Computer” and filed on Feb. 16, 2005;
  • U.S. patent application Ser. No. 11/122,872, entitled “Piracy Prevention Using Unique Module Translation” and filed on May 4, 2005 (issued Oct. 13, 2009 as U.S. Pat. No. 7,603,552);
  • U.S. patent application Ser. No. 11/182,320, entitled “Classification of Software on Networked Systems” and filed on Jul. 14, 2005 (issued Dec. 21, 2010 as U.S. Pat. No. 7,856,661);
  • U.S. patent application Ser. No. 11/346,741, entitled “Enforcing Alignment of Approved Changes and Deployed Changes in the Software Change Life-Cycle” by Rahul Roy-Chowdhury, E. John Sebes and Jay Vaishnav, filed on Feb. 2, 2006 (issued Jul. 13, 2010 as U.S. Pat. No. 7,757,269);
  • U.S. patent application Ser. No. 11/277,596, entitled “Execution Environment File Inventory” by Rishi Bhargava and E. John Sebes, filed on Mar. 27, 2006 (issued Feb. 22, 2011 as U.S. Pat. No. 7,895,573); and
  • U.S. patent application Ser. No. 11/400,085, entitled “Program-Based Authorization” by Rishi Bhargava and E. John Sebes, filed on Apr. 7, 2006 (issued Jan. 11, 2011 as U.S. Pat. No. 7,870,387).


BACKGROUND

It is well-known that maintenance of software configurations on computers in an enterprise can be difficult. Many times, when modifications are made to existing software configurations, the existing software ceases to operate properly. Furthermore, different computers may have different operating systems (OS's), OS versions and/or OS patch sets. In addition, different computers may have different software applications, application versions and/or application patch sets.


As a result, when it is desired or necessary to perform what is nominally a global software modification (e.g., installing a new application), it becomes necessary to variously adjust the modification procedure to accommodate the various configurations and thereby minimize or avoid breakage.


SUMMARY OF THE INVENTION

A method is employed to group computers to facilitate application of a software modification to the computers. The method includes identifying a global set of computers to which it is desired to apply the software modification. Based on characteristics of software configurations of the computers of the identified global set, the computers of the identified global set are grouped into a plurality of clusters.


Grouping the computers into a plurality of clusters includes processing syntactic information about the computers to identify the plurality of clusters. The software modification is then applied to the computers of the clusters, with an adjustment for each cluster in an attempt to avoid software breakage of the computers of that cluster.





BRIEF DESCRIPTION OF THE DRAWINGS


FIG. 1 is a flowchart illustrating a broad aspect of a method in which similar hosts are grouped, such that the same adjustment to a baseline software modification procedure can be applied to all of the computers in each group.



FIG. 2 illustrates a “mini-loop” in which the baseline software modification procedure for a subset of computers in a group is adjusted, until the hosts of the subset can be validated as successfully modified.



FIG. 3 is a flowchart illustrating an example of selecting the “similar host group”.



FIGS. 4a to 4c graphically illustrate example output at various stages of the FIG. 3 processing.





DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS

As discussed in the Background, due to the disparate configurations of the various computers in an enterprise, software configurations tend to break in different ways when modifications are attempted. Thus, it is desirable to group the computers such that the same adjustment applied to a baseline software modification procedure relative to one computer in a group can be applied to other computers in the same group.



FIG. 1 is a flowchart illustrating a broad aspect of a method in which similar computers (more broadly, “hosts”) are grouped, such that the same adjustment to a baseline software modification procedure can be applied to all of the computers in each group. Referring now to FIG. 1, at step 102, the set of all hosts is determined. That is, it is determined to which hosts the software modification procedure is to be applied. At step 104, a first “similar host group” is selected from the determined set of all hosts. We discuss later some examples of selecting a “similar host group.”


At step 106, a subset of the first “similar host group” is processed. In particular, the hosts of the subset are modified using the baseline software modification procedure. As necessary, the baseline software modification procedure is adjusted, until the hosts of the subset can be validated as successfully modified. Step 106 may be performed in a “mini-loop” such as that shown in FIG. 2, and discussed below.


Continuing on, at step 108, the remainder of the first “similar host group” is modified using the baseline software modification procedure, nominally with the adjustment determined in step 106, but further adjusted as appropriate, so that the remainder of the hosts of the first “similar host group” can be validated. Step 108, also, may be performed in a “mini-loop” such as that shown in FIG. 2. It is not strictly necessary to modify every host in a similar host group, although that would represent the most thorough processing of the hosts. At any time, we can skip the host group that is currently being modified and move on to another group. Optionally, processing of different host groups can be parallelized as well.


At step 110, it is determined if all the hosts of the set (determined in step 102) have been modified (not counting any hosts that were intentionally left unmodified). If not, then, at step 112, a next “similar host group” is selected among the remaining unmodified hosts. For example, the next “similar host group” may be the set of remaining computers that is most similar to the first set, or the largest set of remaining computers that are most similar to each other, or the set that is most dissimilar to the first set. Steps 106, 108 and 110 are repeated for this selected next “similar host group.” If it is determined at step 110 that all the hosts of the set have been modified, processing is complete at step 114.
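By way of illustration only (the patent discloses no code), the following Python sketch walks the FIG. 1 outer loop of steps 102 through 114; every function, field, and value in it is a hypothetical stand-in for the corresponding abstract step.

```python
# Illustrative sketch of the FIG. 1 outer loop (steps 102-114).
# All helpers here are hypothetical stand-ins for the patent's abstract steps.

def determine_all_hosts():
    """Step 102: identify every host the modification should reach."""
    return [{"name": f"host{i}", "os": "OS-A" if i % 2 else "OS-B"} for i in range(6)]

def select_similar_host_group(hosts):
    """Steps 104/112: pick a group of hosts thought to break the same way."""
    key = hosts[0]["os"]                      # simplest possible similarity key
    return [h for h in hosts if h["os"] == key]

def tune_on_subset(subset, adjustment):
    """Step 106: adjust the baseline procedure until the subset validates."""
    # In practice this is the FIG. 2 mini-loop; here we pretend one pass works.
    return adjustment + ["fix-for-" + subset[0]["os"]]

def apply_with_adjustment(hosts, adjustment):
    """Step 108: modify the remaining group members with the tuned procedure."""
    for h in hosts:
        h["modified"] = True

remaining = determine_all_hosts()             # step 102
adjustment = []                               # initially a null adjustment
while remaining:                              # step 110: all hosts modified?
    group = select_similar_host_group(remaining)
    subset, rest = group[:1], group[1:]       # small pilot subset
    adjustment = tune_on_subset(subset, adjustment)   # step 106
    apply_with_adjustment(subset + rest, adjustment)  # step 108
    remaining = [h for h in remaining if h not in group]
print("all hosts modified")                   # step 114
```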


In the above flow, if at any point we have modified a number of hosts such that either the current grouping is not useful for selecting the next hosts to be modified or a fresh grouping would possibly be more useful, we may optionally restart the grouping on the subset of the original set of hosts that are still unmodified. Furthermore, when a set of hosts is grouped, the grouping may be done multiple times according to different sets of parameters in order to see whether one particular choice of parameters produces a better grouping. As an example, such parameters may include: choosing to include or exclude particular types of semantic info; assigning different weights to different types of semantic or syntactic info; etc.


As a result of the FIG. 1 processing, wherein a software modification procedure is applied to similar groups of hosts, the process of adjusting the baseline software modification procedure, to accommodate differences in the hosts, is eased.


As mentioned above, FIG. 2 illustrates a “mini-loop” in which the baseline software modification procedure is adjusted, until the hosts of a subset of a similar host group can be validated as successfully modified. Referring to FIG. 2, at step 202, the hosts are modified and an attempt is made to validate the modification. At step 204, it is determined if the modification was successfully validated. If it is determined that the modification was not successfully validated, then at step 206, the modification is adjusted (e.g., taking into account the reason the modification was not successfully validated) and the validation is attempted again.
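A minimal sketch of this FIG. 2 mini-loop follows; modify_hosts, validate, and adjust are hypothetical placeholders for whatever install, test, and remediation machinery an implementation would actually use.

```python
# Illustrative FIG. 2 mini-loop: modify, validate, adjust, repeat.

def modify_hosts(hosts, procedure):
    """Step 202 (first half): apply the current modification procedure."""
    for h in hosts:
        h["patch_level"] = len(procedure)     # stand-in for a real install

def validate(hosts):
    """Step 202 (second half): hypothetical validation test suite."""
    return all(h["patch_level"] >= 2 for h in hosts)

def adjust(procedure, hosts):
    """Step 206: refine the procedure, e.g., using the failure reason."""
    return procedure + ["workaround"]

procedure = ["baseline-install"]
hosts = [{"name": "host0"}, {"name": "host1"}]
modify_hosts(hosts, procedure)
while not validate(hosts):                    # step 204
    procedure = adjust(procedure, hosts)      # step 206
    modify_hosts(hosts, procedure)            # re-attempt and re-validate
print("validated with procedure:", procedure)
```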


Now, with reference to FIG. 3, we discuss an example of selecting the “similar host group.” In the FIG. 3 example, once the set of all hosts is determined (step 302), a semantic analysis is applied (step 304) to segment hosts, followed by a syntactic analysis (step 312) on a chosen one of the segments (step 310) resulting from the semantic analysis.


With respect to step 302, this step is typically performed manually. Step 302 may also include providing meta-data for each computer that can be used in the segmenting step (step 308, discussed below). The meta-data may include, for example, the function/mission/line of business in which the computer is used or other attributes. The information is typically provided by people. Other meta-data may include an indication of priority/importance—i.e., an indication of which computers, or which applications on which computers, are more important than others.


Other meta-data may include an indication of “validatability,” which refers to the existence of validation test procedures for the software packages on a computer. For example, a computer that is not very validatable is one for which reliable validation test procedures are not known for its primary software packages and which, therefore, cannot be thoroughly tested. By contrast, a very validatable host is one that has software packages for which validation testing has already been done and for which, therefore, it is known how to thoroughly validate and/or apply breakage workarounds. Intermediately, there are hosts for which there is some confidence that validation testing can be properly performed, but for which not all the validation testing has already been performed.


Generally, the semantic analysis (step 304) clusters the hosts based on properties of the hosts that are known or thought to be significant differentiators with regard to the software modification to be attempted. For example, a significant differentiator may be the operating system and/or software packages installed on each host, including the versions/patch-sets associated with the operating system and/or software packages.


In general, then, a way is determined to clearly define the state of a computer, and then to define a notion of similarity between such states. One way to do this in a specific implementation is by considering the state of each computer to be a vector of atoms. An atom can represent one or more of, for example, installed OS version/patch; installed SW applications/versions/patch-sets; a computer group, of which the computer is a member (e.g., “accounting” or “web services”); a primary line-of-business of the computer; a flag indicating whether the computer performs a mission-critical task; and a ranking of the “validatability” of the host (an aggregate of “validatability” ranking of applicable atoms).


Other possible attributes of an atom include, for example: a weight indicating the level of importance of that atom; an indication of whether what the atom indicates is of primary or secondary importance to the mission of the host; an indication of variance based on total software inventory; an identification of validation test suites and/or test procedures for the software application that the atom indicates (e.g., provided by the vendor or developed for customer use during software maintenance cycles); and a ranking of the “validatability” of the software application that the atom indicates.
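One plausible encoding of this host state as a vector of atoms is sketched below; the patent prescribes no data layout, so the field names and the min-based aggregation of “validatability” are assumptions chosen for illustration.

```python
# Hypothetical encoding of a host state as a vector of atoms.
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Atom:
    kind: str                 # e.g., "os", "app", "group", "line_of_business"
    value: str                # e.g., "OS-A 5.1 SP2" or "accounting"
    weight: float = 1.0       # importance of this atom when comparing hosts
    primary: bool = True      # primary vs. secondary to the host's mission
    validatability: int = 0   # 0 = unknown procedures .. 2 = fully tested

@dataclass
class HostState:
    name: str
    atoms: list = field(default_factory=list)

    def validatability(self):
        """Aggregate 'validatability' ranking over the applicable atoms."""
        ranked = [a.validatability for a in self.atoms]
        return min(ranked) if ranked else 0   # one assumed aggregation choice

host = HostState("host0", [
    Atom("os", "OS-A 5.1 SP2", weight=2.0, validatability=2),
    Atom("app", "dbserver 9.0 patch3", validatability=1),
    Atom("group", "accounting"),
])
print(host.validatability())                  # -> 0 (the "group" atom is unranked)
```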


Referring still to FIG. 3, step 306 (within step 304) is a semantic inspection of the hosts. The information about the operating system and software packages, and the versions/patch-sets (e.g., the atom information) can generally be gathered automatically. The information may be obtainable by inspecting each software package itself or by querying the OS. In some examples, human input can be utilized as well, e.g., humans can indicate the names of the most important/critical software packages. Furthermore, it should be noted that the versions/patch-sets may not have been linearly applied.


Step 308 (within step 304) is a segmentation step. The operation of the segmenting step 308 is based at least on information gathered during the semantic inspection step 306. In particular, the population of hosts is segmented according to at least some of the semantic information. For example, the segmenting may be by operating system or by application or application groups. A segment may also be further segmented—e.g., first by operating system and then by application groups. The intent is to start on a segment that is the best starting point for the modification, and this starting segment can be chosen based on size, degree of similarity with the other segments, etc.
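As a sketch of how step 308 might operate under these choices, the segmenting below keys hosts by one attribute and then re-keys each segment by a second; the helper and attribute names are assumptions, not taken from the patent.

```python
# Illustrative two-level segmentation: first by OS, then by application group.
from collections import defaultdict

def segment_by(hosts, key):
    """Partition hosts into segments sharing the same value of `key`."""
    segments = defaultdict(list)
    for h in hosts:
        segments[h[key]].append(h)
    return dict(segments)

hosts = [
    {"name": "h1", "os": "OS-A", "app_group": "web"},
    {"name": "h2", "os": "OS-A", "app_group": "db"},
    {"name": "h3", "os": "OS-B", "app_group": "web"},
]
by_os = segment_by(hosts, "os")
nested = {os_name: segment_by(seg, "app_group") for os_name, seg in by_os.items()}
print(nested["OS-A"]["db"])   # -> [{'name': 'h2', 'os': 'OS-A', 'app_group': 'db'}]
```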


In a simple example, the segmenting step may be performed via a simple selection, based on individual computers in the population having or not having some attribute. For example, an operating system of a computer may be characterized by a micro-version. An OS micro-version is a naming representation of the OS name, version, and applied patch-sets (including service packs and similar upgrades) and optionally representing the install order of the patch-sets. An application micro-version is defined similarly.


Micro-versions can provide a fast and accurate similarity measure for a set of computers, and test-equivalence classes can be realized simply by grouping computers with the same OS micro-versions together, and optionally also sub-grouping by application micro-versions. This approach is thought to work particularly well for a larger host population having some micro-versions that occur frequently, such as in a centrally managed IT department where operating systems, applications and patch-sets are applied in the same order to many computers.
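A minimal sketch of micro-versions used as an exact-match grouping key appears below; the delimiter-joined string format is an assumption, since the patent requires only that the name, version, and ordered patch-sets be represented.

```python
# Hypothetical micro-version strings used as an exact-match grouping key.
from collections import defaultdict

def micro_version(name, version, patches):
    """Encode name, version, and patch-sets in install order as one string."""
    return "|".join([name, version] + list(patches))

def group_by_micro_version(hosts):
    """Test-equivalence classes: hosts with identical OS micro-versions."""
    groups = defaultdict(list)
    for h in hosts:
        groups[h["os_mv"]].append(h["name"])
    return dict(groups)

hosts = [
    {"name": "h1", "os_mv": micro_version("OS-A", "5.1", ["SP1", "SP2"])},
    {"name": "h2", "os_mv": micro_version("OS-A", "5.1", ["SP1", "SP2"])},
    {"name": "h3", "os_mv": micro_version("OS-A", "5.1", ["SP2", "SP1"])},  # order differs
]
print(group_by_micro_version(hosts))
# -> {'OS-A|5.1|SP1|SP2': ['h1', 'h2'], 'OS-A|5.1|SP2|SP1': ['h3']}
```

Note that h3 lands in its own class: because the encoding captures install order, the same patch-sets applied in a different sequence yield a different micro-version, consistent with the observation above that patch-sets may not have been linearly applied.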


Turning again to a more general discussion of the segmenting step 308, a segment may be determined relatively simply by using one attribute value (e.g., a specific OS “micro-version”), such that every computer having that value of the attribute falls within a particular segment, with all other computers falling outside that particular segment. On the other hand, a segment may be determined in a more complex fashion by providing many attribute values to a cluster-analysis algorithm, with the inputs being the OS/SW micro-versions or other attributes, and the outputs being the segments (also known as “clusters” with respect to cluster-analysis algorithms). Generally, the segmentation of step 308 does not use information gathered in the syntactic inspection step 314.
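The sketch below illustrates the more complex, multi-attribute case with a simple greedy, threshold-based clustering over weighted attribute agreement; this particular algorithm (and its weights and threshold) is chosen for brevity and is not specified by the patent.

```python
# Illustrative multi-attribute clustering: greedy assignment by similarity.

def similarity(a, b, weights):
    """Weighted fraction of attributes on which two hosts agree."""
    total = sum(weights.values())
    same = sum(w for attr, w in weights.items() if a.get(attr) == b.get(attr))
    return same / total

def cluster(hosts, weights, threshold=0.75):
    """Assign each host to the first cluster whose seed is similar enough."""
    clusters = []                     # each cluster is a list of hosts
    for h in hosts:
        for c in clusters:
            if similarity(h, c[0], weights) >= threshold:
                c.append(h)
                break
        else:
            clusters.append([h])      # no similar seed: start a new cluster
    return clusters

weights = {"os_mv": 3.0, "app_mv": 2.0, "line_of_business": 1.0}
hosts = [
    {"name": "h1", "os_mv": "OS-A|5.1|SP2", "app_mv": "db|9.0", "line_of_business": "acct"},
    {"name": "h2", "os_mv": "OS-A|5.1|SP2", "app_mv": "db|9.0", "line_of_business": "web"},
    {"name": "h3", "os_mv": "OS-B|2.0",     "app_mv": "db|9.0", "line_of_business": "acct"},
]
for c in cluster(hosts, weights):
    print([h["name"] for h in c])     # -> ['h1', 'h2'] then ['h3']
```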


The result of the segmenting step 308 is one or more segments, and possibly some outliers, which are a subset of “all hosts” (determined in step 302) and represent similar hosts. The value of the segmentation step is that it can minimize or prevent mis-clustering of hosts (e.g., based on the gathered syntactic information from step 314), where it is known from the semantic information that the hosts are in fact very dissimilar and are advantageously not clustered together. An example of this is two computers that are 90% similar, but where the 10% difference actually represents different operating systems and, hence, the computers are fundamentally dissimilar for the purposes of applying the software modification.


As mentioned above, the syntactic analysis (312) is performed on one segment at a time—the segment chosen in step 310. (In some cases, for example, where no segments were identified, then the syntactic analysis (312) is performed on all of the hosts.) Step 314 is an information gathering step, to obtain syntactic information about the hosts of the segment on which the syntactic analysis 312 is being performed. The syntactic information may be present, for example, in an inventory of information about files (and, more generally, containers) accessible by each host. For example, application Ser. No. 11/277,596 describes a method to maintain an inventory of such information for one or more hosts.


In the inventory-comparison analysis step 316 of the syntactic analysis, an indication of a set of clusters is determined. The clusters are “test-equivalence classes” representing hosts deemed to be syntactically similar, based on the syntactic information about the hosts. There may also be some outliers that were determined to not fit into any of the clusters.
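One natural similarity measure for the inventory comparison of step 316 is Jaccard similarity (size of intersection over size of union) of each host's file set; the greedy clustering below, which leaves singleton clusters as outliers, is an assumed approach for illustration rather than the patent's own method.

```python
# Illustrative syntactic clustering: Jaccard similarity over file inventories.

def jaccard(a, b):
    """Size of intersection over size of union for two sets of files."""
    return len(a & b) / len(a | b) if (a | b) else 1.0

def cluster_inventories(inventories, threshold=0.8):
    """Greedy test-equivalence classes plus a bucket of outliers."""
    clusters, outliers = [], []
    for host, files in inventories.items():
        for c in clusters:
            if jaccard(files, c["files"]) >= threshold:
                c["hosts"].append(host)
                break
        else:
            # A brand-new singleton; it becomes an outlier if nothing joins it.
            clusters.append({"hosts": [host], "files": files})
    for c in clusters[:]:
        if len(c["hosts"]) == 1:
            outliers.append(c["hosts"][0])
            clusters.remove(c)
    return clusters, outliers

inventories = {
    "h1": {"/bin/a", "/bin/b", "/lib/x"},
    "h2": {"/bin/a", "/bin/b", "/lib/x"},
    "h3": {"/bin/a", "/opt/z", "/opt/q", "/opt/r"},
}
clusters, outliers = cluster_inventories(inventories)
print([c["hosts"] for c in clusters], outliers)   # -> [['h1', 'h2']] ['h3']
```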


In some examples, the semantic analysis step 304 is not performed, with reliance for clustering being entirely on syntactic information in the syntactic analysis step 312. For example, the set of operating system files (or, as appropriate, other files or characteristics that are determined to affect what adjustments to make to the baseline software modification procedure) can be used as the primary “clustering key” in the inventory comparison analysis step 316 of the syntactic analysis step 312. However, it is difficult in many instances to determine which files are operating system files and which files are “other” files. For example, with the Windows operating system, it can be difficult to distinguish between where the operating system ends and the applications start.


On the other hand, the reader may be wondering why the syntactic analysis step 312 is employed at all, after the segmentation output of the semantic analysis step 304. As discussed in the previous paragraph, the information that most affects the adjustments to make to the baseline software modification procedure is the syntactic information. The semantic analysis step 304 is nevertheless helpful, for example, to minimize the occurrence of mis-clustering of computers that are in fact very dissimilar. Finally, at step 318, a set of one or more similar host groups is provided.



FIGS. 4a, 4b and 4c graphically illustrate example output at various stages of the FIG. 3 processing to determine one or more “similar host groups.” Specifically, FIGS. 4a, 4b and 4c graphically illustrate, respectively, an example output of step 302 (determine all hosts) in FIG. 3, of step 304 (semantic analysis), and of step 312 (syntactic analysis). Referring to FIG. 4a, this figure illustrates a set 402 of all hosts. FIG. 4b illustrates the set 402 of all hosts divided into segments (e.g., including segments 404a, 404b and 404c, as well as other segments). FIG. 4c illustrates the hosts of some segments being clustered into “similar host groups” as a result of the syntactic analysis step. For example, segment 404a includes groups 406a and 406b; and segment 404b includes group 406c.


It is noted, with reference to FIGS. 4a to 4c, that the semantic analysis step 304 may be entirely performed before starting the syntactic analysis step 312 but, in some examples, portions of the semantic analysis step 304 and the syntactic analysis step 312 are interleaved.


In summary, then, by intelligently grouping computers in a manner that is likely to correlate to the manner in which a baseline software modification procedure may be adjusted to avoid breakage, administration of a software modification to computers of an enterprise may be simplified.

Claims
  • 1. A method implemented by executing instructions stored in a memory of a computer, the method comprising: identifying a global set of hosts, comprising computers, to which it is desired to apply a software modification; segmenting the hosts based on a semantic analysis of attributes selected from an attribute group including operating system name, operating system version, operating system patch-sets, and installed software applications, such that hosts in each segment share at least one attribute; selecting a segment to syntactically analyze; grouping the hosts of the selected segment into a plurality of clusters, wherein grouping the hosts into a plurality of clusters includes processing syntactic information about the hosts in the selected segment to identify the plurality of clusters, wherein the syntactic information is obtained from an inventory of information about files accessible by each host, and wherein the syntactic information is different from the attributes; repeating the selecting the segment and the grouping until substantially all hosts in the global set of hosts have been grouped; selecting a cluster of hosts; and applying the software modification to the selected cluster of hosts using a baseline software modification procedure, wherein the applying comprises: selecting a subset of hosts in the selected cluster of hosts; attempting to modify the hosts in the subset using the baseline software modification procedure; if the attempt is not successful, adjusting the baseline software modification procedure until the modification is successful, wherein the adjusting includes: modifying at least one host of the selected cluster with an adjustment to the baseline software modification procedure, wherein the adjustment is determined by: applying a nominal version of the software modification to the at least one host according to the baseline software modification procedure; testing to determine whether the software modification is successful; adjusting the baseline software modification procedure; and repeating the testing and adjusting step until it is determined that the software modification is successfully applied on the at least one host; determining whether the modified host operates according to desired characteristics; based on determining that the modified host does not operate according to the desired characteristics, updating the adjustment; and repeating the modifying, determining and adjustment updating step on the at least one host; and repeating the selecting, attempting, and adjusting, until all hosts in the selected cluster of hosts have been modified with the software modification.
  • 2. The method of claim 1, further comprising: obtaining the semantic information.
  • 3. The method of claim 2, wherein: obtaining the semantic information includes obtaining the semantic information based on human input.
  • 4. The method of claim 3, wherein: the semantic information based on human input includes characterization of how the hosts are used.
  • 5. The method of claim 4, wherein: the characterization of how the hosts are used includes a characterization of importance of the hosts or applications on the hosts.
  • 6. The method of claim 3, wherein: the semantic information based on human input includes a characterization of validatability.
  • 7. The method of claim 1, wherein: the steps of segmenting the hosts and grouping the hosts into a plurality of clusters are at least partially interleaved.
  • 8. The method of claim 1, wherein: adjusting the baseline software modification procedure further comprises repeating the modifying, determining and adjustment updating step for other hosts of that cluster, other than the at least one host of that cluster.
  • 9. The method of claim 1, wherein: the adjustment is initially a null adjustment.
  • 10. The method of claim 1, wherein: the semantic information about each host includes information about types of software present on that host.
  • 11. The method of claim 10, wherein: the information about types of software present on that host include information about the operating system and applications present on that host.
  • 12. The method of claim 11, wherein: the information about operating system and applications present on that host includes an indication of a version, including patch sets.
  • 13. The method of claim 1, further comprising: repeating the selecting the cluster of hosts and applying the software modification until all hosts in the global set of hosts have been modified with the software modification.
US Referenced Citations (165)
Number Name Date Kind
4688169 Joshi Aug 1987 A
4982430 Frezza et al. Jan 1991 A
5155847 Kirouac et al. Oct 1992 A
5222134 Waite et al. Jun 1993 A
5390314 Swanson Feb 1995 A
5521849 Adelson et al. May 1996 A
5560008 Johnson et al. Sep 1996 A
5699513 Feigen et al. Dec 1997 A
5778349 Okonogi Jul 1998 A
5787427 Benantar et al. Jul 1998 A
5842017 Hookway et al. Nov 1998 A
5907709 Cantey et al. May 1999 A
5974149 Leppek Oct 1999 A
5987610 Franczek et al. Nov 1999 A
5987611 Freund Nov 1999 A
5991881 Conklin et al. Nov 1999 A
6073142 Geiger et al. Jun 2000 A
6192401 Modiri et al. Feb 2001 B1
6192475 Wallace Feb 2001 B1
6256773 Bowman-Amuah Jul 2001 B1
6275938 Bond et al. Aug 2001 B1
6356957 Sanchez, II et al. Mar 2002 B2
6393465 Leeds May 2002 B2
6442686 McArdle et al. Aug 2002 B1
6449040 Fujita Sep 2002 B1
6453468 D'Souza Sep 2002 B1
6460050 Pace et al. Oct 2002 B1
6662219 Nishanov et al. Dec 2003 B1
6748534 Gryaznov et al. Jun 2004 B1
6769008 Kumar et al. Jul 2004 B1
6769115 Oldman Jul 2004 B1
6795966 Lim et al. Sep 2004 B1
6832227 Seki et al. Dec 2004 B2
6834301 Hanchett Dec 2004 B1
6847993 Novaes et al. Jan 2005 B1
6907600 Neiger et al. Jun 2005 B2
6930985 Rathi et al. Aug 2005 B1
6934755 Saulpaugh et al. Aug 2005 B1
6988101 Ham et al. Jan 2006 B2
7010796 Strom et al. Mar 2006 B1
7024548 O'Toole, Jr. Apr 2006 B1
7039949 Cartmell et al. May 2006 B2
7069330 McArdle et al. Jun 2006 B1
7082456 Mani-Meitav et al. Jul 2006 B2
7093239 van der Made Aug 2006 B1
7139916 Billingsley et al. Nov 2006 B2
7152148 Williams et al. Dec 2006 B2
7251655 Kaler et al. Jul 2007 B2
7290266 Gladstone et al. Oct 2007 B2
7346781 Cowle et al. Mar 2008 B2
7350204 Lambert et al. Mar 2008 B2
7353501 Tang et al. Apr 2008 B2
7363022 Whelan et al. Apr 2008 B2
7370360 van der Made May 2008 B2
7441265 Staamann et al. Oct 2008 B2
7464408 Shah et al. Dec 2008 B1
7506155 Stewart et al. Mar 2009 B1
7506170 Finnegan Mar 2009 B2
7546333 Alon et al. Jun 2009 B2
7546594 McGuire et al. Jun 2009 B2
7607170 Chesla Oct 2009 B2
7657599 Smith Feb 2010 B2
7669195 Qumei Feb 2010 B1
7685635 Vega et al. Mar 2010 B2
7698744 Fanton et al. Apr 2010 B2
7703090 Napier et al. Apr 2010 B2
7757269 Roy-Chowdhury et al. Jul 2010 B1
7765538 Zweifel et al. Jul 2010 B2
7809704 Surendran et al. Oct 2010 B2
7823148 Deshpande et al. Oct 2010 B2
7836504 Ray et al. Nov 2010 B2
7908653 Brickell et al. Mar 2011 B2
7937455 Saha et al. May 2011 B2
8015563 Araujo et al. Sep 2011 B2
20020002704 Davis et al. Jan 2002 A1
20020069367 Tindal et al. Jun 2002 A1
20020083175 Afek et al. Jun 2002 A1
20020099671 Crosbie et al. Jul 2002 A1
20030014667 Kolichtchak Jan 2003 A1
20030023736 Abkemeier Jan 2003 A1
20030033510 Dice Feb 2003 A1
20030037094 Douceur et al. Feb 2003 A1
20030073894 Chiang et al. Apr 2003 A1
20030074552 Olkin et al. Apr 2003 A1
20030110280 Hinchliffe et al. Jun 2003 A1
20030120601 Ouye et al. Jun 2003 A1
20030120811 Hanson et al. Jun 2003 A1
20030120935 Teal et al. Jun 2003 A1
20030145232 Poletto et al. Jul 2003 A1
20030163718 Johnson et al. Aug 2003 A1
20030167399 Audebert et al. Sep 2003 A1
20030221190 Deshpande et al. Nov 2003 A1
20040003258 Billingsley et al. Jan 2004 A1
20040015554 Wilson Jan 2004 A1
20040051736 Daniell Mar 2004 A1
20040054928 Hall Mar 2004 A1
20040143749 Tajalli et al. Jul 2004 A1
20040167906 Smith et al. Aug 2004 A1
20040230963 Rothman et al. Nov 2004 A1
20040243678 Smith et al. Dec 2004 A1
20040255161 Cavanaugh Dec 2004 A1
20050018651 Yan et al. Jan 2005 A1
20050086047 Uchimoto et al. Apr 2005 A1
20050097147 Hunt et al. May 2005 A1
20050108516 Balzer et al. May 2005 A1
20050108562 Khazan et al. May 2005 A1
20050114672 Duncan et al. May 2005 A1
20050132346 Tsantilis Jun 2005 A1
20050228990 Kato et al. Oct 2005 A1
20050235360 Pearson Oct 2005 A1
20050257207 Blumfield et al. Nov 2005 A1
20050260996 Groenendaal Nov 2005 A1
20050262558 Usov Nov 2005 A1
20050273858 Zadok et al. Dec 2005 A1
20050283823 Okajo et al. Dec 2005 A1
20050289071 Goin et al. Dec 2005 A1
20050289538 Black-Ziegelbein et al. Dec 2005 A1
20060004875 Baron et al. Jan 2006 A1
20060015501 Sanamrad et al. Jan 2006 A1
20060036591 Gerasoulis et al. Feb 2006 A1
20060037016 Saha et al. Feb 2006 A1
20060080656 Cain et al. Apr 2006 A1
20060085785 Garrett Apr 2006 A1
20060101277 Meenan et al. May 2006 A1
20060133223 Nakamura et al. Jun 2006 A1
20060136910 Brickell et al. Jun 2006 A1
20060136911 Robinson et al. Jun 2006 A1
20060195906 Jin et al. Aug 2006 A1
20060236398 Trakic et al. Oct 2006 A1
20070011746 Malpani et al. Jan 2007 A1
20070039049 Kupferman et al. Feb 2007 A1
20070050764 Traut Mar 2007 A1
20070074199 Schoenberg Mar 2007 A1
20070083522 Nord et al. Apr 2007 A1
20070101435 Konanka et al. May 2007 A1
20070136579 Levy et al. Jun 2007 A1
20070169079 Keller et al. Jul 2007 A1
20070192329 Croft et al. Aug 2007 A1
20070220061 Tirosh et al. Sep 2007 A1
20070220507 Back et al. Sep 2007 A1
20070253430 Minami et al. Nov 2007 A1
20070271561 Winner et al. Nov 2007 A1
20070300215 Bardsley Dec 2007 A1
20080005737 Saha et al. Jan 2008 A1
20080005798 Ross Jan 2008 A1
20080010304 Vempala et al. Jan 2008 A1
20080034416 Kumar et al. Feb 2008 A1
20080052468 Speirs et al. Feb 2008 A1
20080082977 Araujo et al. Apr 2008 A1
20080120499 Zimmer et al. May 2008 A1
20080163207 Reumann et al. Jul 2008 A1
20080163210 Bowman et al. Jul 2008 A1
20080184373 Traut et al. Jul 2008 A1
20080294703 Craft et al. Nov 2008 A1
20080301770 Kinder Dec 2008 A1
20090038017 Durham et al. Feb 2009 A1
20090043993 Ford et al. Feb 2009 A1
20090144300 Chatley et al. Jun 2009 A1
20090150639 Ohata Jun 2009 A1
20090249438 Litvin et al. Oct 2009 A1
20100071035 Budko et al. Mar 2010 A1
20100100970 Chowdhury et al. Apr 2010 A1
20100114825 Siddegowda May 2010 A1
20100281133 Brendel Nov 2010 A1
20110035423 Kobayashi et al. Feb 2011 A1
Foreign Referenced Citations (10)
Number Date Country
1 482 394 Dec 2004 EP
2 037 657 Mar 2009 EP
WO 9844404 Oct 1998 WO
WO 0184285 Nov 2001 WO
WO 2006012197 Feb 2006 WO
WO 2006124832 Nov 2006 WO
WO 2008054997 May 2008 WO
WO 2011059877 May 2011 WO
WO 2012015485 Feb 2012 WO
WO 2012154890 Feb 2012 WO