The present invention relates to platform matching. In particular, the present invention relates to systems and methods to profile applications and benchmark platforms such that the applications may be matched to a most suitable computing platform.
Businesses and other entities are becoming increasingly dependent upon custom or semi-custom software applications to perform a variety of processing. For example, financial services businesses develop and utilize a wide range of applications to perform important activities, such as trade processing, decisioning, settlement, and the like. Each application may have different processing characteristics. For example, one application may be particularly dependent on database operations and may require use of a computing platform that has efficient memory and disk storage operation. Another application may be computation intensive, and require a computing platform that is suited for performing efficient floating point operations. As a result, different applications may perform differently on different hardware or computing platforms.
Advances in computing hardware and software are continuing at a rapid pace. This rapid advancement has provided a wide range of choices in computing platforms, with different operating systems, processors, storage devices, and memory configurations. A business or other entity may run custom or other software applications on a variety of computing platforms. Unfortunately, however, there is no “one size fits all” computing platform. An application requiring efficient floating point operations may not perform as well on a computing platform that is designed for efficient memory and disk storage applications. It is desirable to provide systems and methods to allow a business or other entity to select the computing platform (from among more than one available platform) that is the best fit for a particular software application. It is desirable to provide systems and methods to match the processing requirements of applications to the performance results of one or more computing platforms to determine the best platform for an application. It is further desirable to monitor applications during operation and automatically generate application resource usage data for use in further matching each application to a most desirable computing platform.
Applicants have recognized a need for an ability to match an application with the best fit computing platform from among more than one available computing platform. Pursuant to some embodiments, a matching platform is provided to manage and administer this matching.
Some embodiments described herein are associated with platform matching systems and methods. As used herein, the phrase “software application” or “application” may refer to a software program (or related set of programs) coded (in any of a number of programming languages or techniques) to perform a desired set of services, tasks or operations on behalf of one or more users. The phrase “computing platform” may refer to computing hardware (including one or more processors, memory storage devices, input and output devices, etc.) and operating system software (such as Linux®, Unix, Windows®, or the like) packaged or integrated together so that software applications may be installed and operated on the computing platform.
As used herein, the term “services taxonomy” will refer to the description or characterization of each “service” performed by or utilized by an application. Put another way, as used herein, a “taxonomy” defines terminology associated with an application, and provides a coherent description of the components and conceptual structure of the computing platform requirements of the application. Each application has one or more “services”, and each “service” has a service nature or taxonomy. Each service nature or taxonomy describes or categorizes the primary characteristics of a service. For example, a software application may be described as having one or more concrete service types, such as a database service and a data communication service. Each service may have characteristics other than hardware performance that may be considered. For example, characteristics such as high availability (HA) capabilities, power consumption, failure rates, licensing costs, support costs, operational costs, and the like may also be considered and included in a given services taxonomy.
Further details of how these service descriptions are used pursuant to some embodiments will be described below. Those skilled in the art will appreciate that a number of services taxonomies may be followed or used in conjunction with some embodiments, including the “TOGAF Service Categories” available from The Open Group at http://www.opengroup.org. For example, an application taxonomy for a particular application may be shown as follows:
That is, the application taxonomy shown above is used to describe an application that is most heavily dependent upon database management services, and is particularly disk intensive in operation. Pursuant to some embodiments, this type of a description of the services profile of an application is used by the matching platform to analyze different computing platforms to identify the most appropriate platform for use with the application. Those skilled in the art will appreciate that other taxonomies and usage profile breakdowns may be used.
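For purposes of illustration only, a services profile of this kind can be represented as a simple weighted structure. The following Python sketch is hypothetical; the category names and percentages are examples and are not taken from the taxonomy referred to above.

```python
# Hypothetical services taxonomy for a database-heavy, disk-intensive application.
# Category names and weights are illustrative only; any suitable taxonomy (for
# example, the TOGAF Service Categories) could be substituted.
services_taxonomy = {
    "database_management": 0.40,          # disk-intensive query and storage services
    "data_interchange": 0.20,
    "network_services": 0.15,
    "transaction_processing": 0.15,
    "system_and_network_management": 0.10,
}

# The weights describe the whole application, so they are expected to sum to 1.
assert abs(sum(services_taxonomy.values()) - 1.0) < 1e-9
```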
In general, pursuant to some embodiments, a platform matching process and system are provided in which an application is matched with a best (or most cost effective, or most desirable) computing platform by first generating a baseline performance dataset by testing the application using a known or benchmark computing platform. A resource usage profile is generated using a desired services taxonomy, and suitable benchmark unit tests are selected to evaluate each of the services in the taxonomy. In some embodiments, generating an accurate resource usage profile may require runtime analysis of an application's component services with software tools that can capture resource usage metrics from the baseline platform during testing. In some embodiments, for example if resource usage measurement tools are not available, then a resource usage profile can be generated manually based on expert knowledge of application behavior.
Each of the benchmark unit tests is then run on one or more target platforms to arrive at a benchmark result dataset. The matching platform then evaluates the benchmark result dataset by comparing the dataset to the resource usage profile to identify the computing platform that is the “best fit” (or most cost effective, or most desirable) computing platform. The result is a systemized, repeatable and efficient process and system for evaluating a plurality of computing platforms to select a computing platform that will produce the most desirable results when used with a particular software application. In this manner, embodiments allow businesses and other organizations to select the best computing platform (from a set of platforms under evaluation) for use with each software application, resulting in better application performance.
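For illustration, the overall flow just described can be sketched in a few lines of Python. The function names, service names, and numbers below are placeholders rather than part of any described implementation; each step is discussed in more detail later in this description.

```python
# Simplified, hypothetical outline of the matching flow described above. All names
# and numbers are placeholders; each step is described in more detail below.

def build_usage_profile(baseline_seconds):
    """Express baseline measurements as a percentage weighting per taxonomy service."""
    total = sum(baseline_seconds.values())
    return {service: seconds / total for service, seconds in baseline_seconds.items()}

def rank_platforms(placement_scores, usage_profile):
    """Weight each platform's per-service placement scores by the usage profile."""
    totals = {
        platform: sum(usage_profile[s] * scores[s] for s in usage_profile)
        for platform, scores in placement_scores.items()
    }
    # Lower weighted placement totals are better (1 = first place in a unit test).
    return sorted(totals, key=totals.get)

# Hypothetical baseline measurements (seconds spent per service on the benchmark
# platform) and per-service placement scores (1 = best) for two target platforms.
baseline_seconds = {"memory": 600, "integer": 200, "networking": 200}
placement_scores = {
    "Platform-1": {"memory": 1, "integer": 2, "networking": 2},
    "Platform-2": {"memory": 2, "integer": 1, "networking": 1},
}

profile = build_usage_profile(baseline_seconds)
print(rank_platforms(placement_scores, profile))  # ['Platform-1', 'Platform-2']
```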
Features of some embodiments will now be described by first referring to
Matching platform 102 might comprise, for example, one or more personal computers, servers or the like. Although a single matching platform 102 is illustrated in
Matching platform 102 is in communication with one or more client devices 106 which may comprise, for example, networked computers in communication with matching platform 102 and used by end users who interact with matching platform 102 through each client device 106. For example, client device 106 may be operated by a technician who wishes to evaluate a number of target platforms 104 for use with a software application. The technician may interact with the matching platform 102 through the client device 106. For example, the technician may use the client device 106 to specify a services taxonomy for the software application, and then use the client device 106 to control or establish a benchmark test sequence performed on a benchmark platform. The technician may then use the client device 106 to select a set of benchmark unit tests to be run on each of the target platforms 104, and then use the device to manage or administer the running of each of the benchmark unit tests on each of the target platforms 104. After all of the benchmark unit tests have been performed, the technician may use the client device 106 to run a “best fit” analysis of each of the target platforms to identify the target platform 104 that is the best or most appropriate platform to use with the software application under evaluation. Further details of this process will be provided below.
As shown in
To illustrate features of some embodiments, an example will now be provided. This illustrative example will be referenced throughout the remainder of this description. Those skilled in the art will appreciate that this example is illustrative but not limiting—other specific applications, platforms, and tests may be used with, and are within the scope of, embodiments of the present invention. In the illustrative example, a technician is tasked with the responsibility of evaluating two potential target platforms for use as the “production” or live computing platform to run or operate a financial services software application. The technician is responsible for selecting which of the two target platforms is best suited for use with the software application. The two target platforms are:
In the illustrative example, the software application is referred to as the “ABC application”. The ABC application is a financial services software application that is primarily a database application and that will be used by a number of users on a network. The processing that may occur, pursuant to some embodiments, to match an application (such as the ABC application, in the example) to a most desirable, or most suitable (or “best fit”) computing platform will now be described by first referring to
Processing of
Processing continues at 204 where one or more target platforms are analyzed to create data identifying their performance characteristics. Processing at 204 may be performed separately from the processing at 202 so that a separate data store of platform data may be available for matching operations. Platform data may be created by performing a variety of benchmark unit tests on each platform and storing the results in a data store for access by the platform matching system.
Processing continues at 206 where a matching process (described further below) is performed to compare the application resource usage profile data with performance data from one or more available target platforms to identify which platform(s) are most suited to the application. In this way, applications may be matched to the best (or most desirable) computing platform, resulting in increased performance, cost savings or efficiency. Pursuant to some embodiments, processing at 206 is performed using matching platform 102 of
Further details of the processing to match a target platform to an application will now be described by reference to
According to some embodiments, some or all of the steps of
Pursuant to some embodiments, the user begins the process of
Processing begins at 302, where the user generates a baseline set of data to begin the matching process. In some embodiments, the baseline set of data is obtained by running the software application on a known computing platform, and monitoring the resource usage characteristics of the software application. For example, a test hardware configuration may be used that is in communication with the matching platform. The baseline set of data may, for example, include the capture and storage of resource usage characteristics such as: database usage characteristics, data communication characteristics, system and network management characteristics, and the like. These characteristics may be measured using a set of standard benchmark tests designed to test each of the various characteristics.
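For purposes of illustration only, the following sketch shows one possible way to capture such resource usage characteristics while the application runs on the known platform. The psutil library and the workload command are assumptions used for the example; the description does not name specific measurement tools.

```python
# One possible way to capture system-wide resource usage characteristics while the
# application runs on the known (baseline) platform. psutil is used here purely as
# an example of a measurement tool, and the workload command is a placeholder.
import subprocess
import psutil

def snapshot():
    disk = psutil.disk_io_counters()
    net = psutil.net_io_counters()
    cpu = psutil.cpu_times()
    return {
        "disk_read_bytes": disk.read_bytes,
        "disk_write_bytes": disk.write_bytes,
        "net_bytes_sent": net.bytes_sent,
        "net_bytes_recv": net.bytes_recv,
        "cpu_user_seconds": cpu.user,
        "cpu_system_seconds": cpu.system,
    }

before = snapshot()
subprocess.run(["./run_abc_application_workload.sh"], check=True)  # placeholder command
after = snapshot()

# Deltas over the run form part of the baseline set of data for the application.
baseline_characteristics = {key: after[key] - before[key] for key in before}
print(baseline_characteristics)
```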
Processing continues at 304 where a user interacts with the matching system to identify resource usage profile data and select a set of appropriate benchmark unit tests to evaluate the identified resources. In some embodiments, an objective is to measure the application resource usage of various system resources and express the application behavior profile as a breakdown by percentage of resources utilized. Applicants have recognized that one challenge is to translate the recorded resource metrics (which are often expressed as a consumption rate, such as a disk write expressed as a bits per second throughput metric) into a form that can be expressed as a percentage of overall application resource consumption, which is composed of disparate types of resources, each with its own unit of measurement. It is not enough to know the throughput of a resource; instead, embodiments correlate the throughput to the amount of time the application spends accessing the resource. For example, an application may be doing 50k bps writes, but if the disk write portion of a 10 minute application runtime is only 10 seconds long, then the disk write operation is only a small percentage of the application resource usage profile.
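For illustration, this time-weighting idea can be expressed in a few lines. In the sketch below, only the 10 second disk-write portion of a 10 minute run comes from the example above; the other durations are assumed.

```python
# A minimal sketch of the time-weighting described above: raw consumption rates
# (bits per second, MFlops, and so on) are not directly comparable, so each resource
# is weighted by how long the application actually spends using it. Only the
# "10 seconds of disk writes in a 10 minute run" figure comes from the example in
# the text; the other durations are assumed for illustration.

busy_seconds = {
    "disk_write": 10,   # e.g., 50k bps writes, but only 10 seconds of the run
    "memory": 330,      # assumed
    "cpu": 200,         # assumed
    "network": 60,      # assumed
}

total_busy = sum(busy_seconds.values())
usage_profile = {resource: seconds / total_busy for resource, seconds in busy_seconds.items()}

for resource, share in usage_profile.items():
    print(f"{resource}: {share:.1%} of measured resource usage")
# disk_write works out to under 2% here, even though its raw write rate looks large.
```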
For example, in the illustrative example, the technician may identify that the ABC application has the following baseline resource usage profile, and with the following canonical form weightings:
That is, the technician, based on baseline testing of the application on a known platform, has identified that the ABC application is composed of services that are heavily dependent upon memory operations (with 50% of the canonical form weightings going toward how a platform performs in memory intensive operations), and is less dependent upon multithreaded and multiprocess services and floating point operation services.
This baseline resource usage profile will be used by the matching platform to analyze different computing platforms to identify the most appropriate platform for use with the ABC application. Those skilled in the art will appreciate that other taxonomies and usage profile breakdowns may be used. Pursuant to some embodiments, processing at 304 includes storing the baseline resource usage profile in a datastore accessible by the matching platform (e.g., such as in datastore 108 in
Those skilled in the art will appreciate that this baseline resource usage profile may be expressed in a number of ways. In one example embodiment, the baseline resource usage profile is expressed quantitatively so that it can fit into a ranking system (which will be described further below). In the example, based on the baseline resource usage profile established by the technician, the ABC and DEF computing platforms will be tested and evaluated and then ranked with the memory intensive service test results receiving a weight of 50%, the integer intensive service test results receiving a weight of 25%, etc.
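For illustration, the baseline resource usage profile could be stored in the datastore as a small XML document. In the sketch below, only the 50% memory intensive and 25% integer intensive weightings come from the example above; the element names and the remaining weightings are assumptions.

```python
# A minimal sketch of storing the baseline resource usage profile as XML, one of the
# storage options mentioned in this description. The element and attribute names are
# hypothetical; only the 50% memory and 25% integer weightings come from the example
# in the text, and the remaining weightings are assumed for illustration.
import xml.etree.ElementTree as ET

weights = {
    "memory_intensive": 50,            # from the ABC example above
    "integer_intensive": 25,           # from the ABC example above
    "multithreaded_multiprocess": 10,  # assumed
    "floating_point": 5,               # assumed
    "networking": 10,                  # assumed
}

profile = ET.Element("resource_usage_profile", application="ABC")
for service, weight in weights.items():
    ET.SubElement(profile, "service", name=service, weight=str(weight))

# The serialized form could then be written to a datastore such as datastore 108.
xml_bytes = ET.tostring(profile, encoding="utf-8")
print(xml_bytes.decode("utf-8"))
```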
Processing at 304 further includes the selection of individual benchmark unit tests to test each of the services identified as important to the analysis of the application. Those skilled in the art will appreciate that a number of benchmark unit test procedures are available to test different resource usage characteristics. In some embodiments, the matching system may access or use data from a datastore (such as datastore 110 of
In one example embodiment, data may be stored in a structured manner, for example in a relational database or as XML files in an XQuery capable database. In this manner, data from individual unit tests and matching processes may easily be stored, accessed, and manipulated by the matching platform.
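For purposes of illustration only, the selection of candidate unit tests from such a datastore might resemble the following sketch. The pairing of particular unit tests with particular services shown here is an assumption made for the example; the description does not specify which benchmark exercises which service.

```python
# Hypothetical benchmark-test catalog, keyed by service type, as it might be held in
# a structured datastore (such as datastore 110). The service-to-test pairing below
# is an assumption for illustration only.
benchmark_catalog = {
    "memory_intensive": ["STREAMS"],
    "networking": ["ACE_TCP", "ACE_QBW"],
    "multithreaded_multiprocess": ["MPB"],
    "integer_intensive": ["PB"],
    "floating_point": ["PB"],
}

def select_unit_tests(profile_services):
    """Return the candidate unit tests covering the services in a usage profile."""
    selected = set()
    for service in profile_services:
        selected.update(benchmark_catalog.get(service, []))
    return sorted(selected)

print(select_unit_tests(["memory_intensive", "integer_intensive", "networking"]))
```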
Continuing the illustrative example introduced above, in the example, processing at 304 may include the technician selecting baseline unit tests that are appropriate for testing a platform's multithreaded/multiprocess, floating point, memory intensive, integer intensive, and networking services. The technician may select one or more unit test procedures to test each characteristic. As a specific illustrative example, the technician has selected the following unit test procedures to perform on each target computing platform:
Those skilled in the art will appreciate that a wide range of different unit test procedures are available, and that selection of a desired unit test procedure to test a particular service is within the skill of the person of ordinary skill in the art. The above unit test procedures are identified for illustrative purposes only.
Once the technician has selected a set of unit test procedures to test each of the target platforms, processing continues at 306, where each of the selected unit test procedures is run on each of the target platforms.
Processing continues at 308 where the result data for each of the unit test procedures for each of the target platforms is captured and stored. For example, the unit test data may be stored in datastore 110 of
Processing at 306 and 308 is repeated until all of the selected unit tests have been performed on each of the target platforms and all of the result data has been stored in a data store for analysis by the matching platform.
For example, continuing the illustrative example introduced above, processing at 306 and 308 includes performing the ACE_QBW, ACE_TCP, MPB, PB and STREAMS unit test procedures on each of the ABC and DEF platforms. Each of the unit tests is repeated for each Y-value, until a complete dataset representing the full test results for each platform is obtained. In the illustrative example, the unit test results are stored as XML files in an XQuery capable database. The XML may be stored or accessed using the following general format:
<benchmark test name>|<system configuration tested>|<y-value>
As a result, at the completion of the processing at 306 and 308, in the illustrative example, a data array or table is constructed which has data identifying each unit test for each tested platform, and at each Y-value (or sub test). Those skilled in the art will recognize that other data storage and formatting techniques may be used so long as the data for each test and each platform are readily accessible.
Processing continues at 310 where the matching platform (such as matching platform 102 of
As a specific illustrative example, the matching platform stores PHP code which, when executed, performs one or more of the steps of process 400. Process 400 begins at 402 where the matching platform accesses the test result data, for each platform, each unit test, and each subtest. As discussed above, in some embodiments, the data is stored as an XML file in an XSD database. In such an embodiment, the processing at 402 includes retrieving the data by an XQuery. Processing at 402 may include creating an object having an array of the result data. For example, the array for the testing done in the illustrative example introduced above may include an array[0]-array[x] having array data including the unit test name (such as “ACE_QBW”), the system configuration tested (such as “ABC”), and a number of Y-values, representing the unit test results for each Y-value or subtest.
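For illustration, the array construction at 402 might resemble the following sketch, which assumes the unit test results have already been retrieved from the datastore (for example, via an XQuery). The record layout, field names, and numeric values are hypothetical and do not reproduce the example's test results.

```python
# A minimal sketch of the array construction described for step 402, assuming the
# unit test results have already been retrieved from the datastore. Record layout,
# field names, and numeric values are illustrative only.

# Each record: (benchmark test name, system configuration tested, [Y-values...]).
retrieved_records = [
    ("STREAMS", "ABC", [5200.0, 5150.0, 4980.0, 5010.0]),
    ("STREAMS", "DEF", [4700.0, 4690.0, 4500.0, 4550.0]),
    ("ACE_TCP", "ABC", [910.0, 905.0]),
    ("ACE_TCP", "DEF", [930.0, 940.0]),
]

# Build the array-of-results object: array[0]..array[x], each entry holding the unit
# test name, the system configuration tested, and its Y-values (subtest results).
result_array = [
    {"test": test, "system": system, "y_values": y_values}
    for test, system, y_values in retrieved_records
]

print(result_array[0]["test"], result_array[0]["system"], result_array[0]["y_values"])
```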
In some embodiments, processing continues at 404 where the object (including the XQuery results) is then re-ordered into a tree structure that associates the Y-values to the system configurations tested, grouped within the particular unit benchmark test. This allows the matching system to then numerically sort within the Y-value data to determine a score for each platform within each subtest. In some embodiments, the resulting score within each subtest will be in ascending order, while in other embodiments the score will be in descending order (depending upon the particular benchmark, and whether higher results are better, as with throughput, or lower results are better, as with latency). For example, again continuing the illustrative example, the result of processing at 404 may be an array that ranks the ABC and DEF platforms by their performance in each subunit test within each unit test. Example result data is shown below in TABLE 1 to facilitate understanding of some embodiments:
As shown, in the STREAMS unit tests, the DEF system performed better than the ABC system in the subunit tests labeled Y1-Y4 (as STREAMS is a throughput test, higher values are better). Those skilled in the art will appreciate that for some benchmark unit tests, a large number of Y-values or subtests may be performed. Processing pursuant to some embodiments involves sorting each of the Y-value results for each unit test so that a ranking may be determined by the matching platform. Similar sorts are performed for each of the unit tests so that a resulting data structure is created that has unit test data with all Y-values and Y-value results for each of the platforms sorted within each Y-value.
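For purposes of illustration only, the re-ordering and sorting at 404 might resemble the following sketch. The numeric values are invented and are not the results shown in TABLE 1.

```python
# A minimal sketch of the re-ordering described for step 404: results are grouped by
# unit test, then by Y-value (subtest), and the platforms are sorted within each
# subtest. The numeric values are invented for illustration only.

raw_results = [
    # (unit test, system configuration, [Y-value results])
    ("STREAMS", "ABC", [5200.0, 5150.0, 4980.0, 5010.0]),
    ("STREAMS", "DEF", [4700.0, 4690.0, 4500.0, 4550.0]),
]

def build_sorted_tree(records, higher_is_better=True):
    """Group Y-values by unit test and sort the platforms within each subtest."""
    tree = {}
    for test, system, y_values in records:
        subtests = tree.setdefault(test, {})
        for index, value in enumerate(y_values):
            subtests.setdefault(f"Y{index + 1}", []).append((system, value))
    # Sort each subtest so the best-performing platform appears first. For a
    # throughput-style benchmark, higher values are better; for a latency-style
    # benchmark, the sort direction would be reversed.
    for subtests in tree.values():
        for entries in subtests.values():
            entries.sort(key=lambda item: item[1], reverse=higher_is_better)
    return tree

tree = build_sorted_tree(raw_results)
print(tree["STREAMS"]["Y1"])  # e.g., [('ABC', 5200.0), ('DEF', 4700.0)]
```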
Processing continues at 406 where the matching platform operates on the data structures created at 404 to generate platform placement scores within each benchmark unit test. In some embodiments these scores are calculated by having the post-sorted array index values (created in 404) represent the finishing order of a system within a test. For example, in the STREAMS array illustrated above, the ABC system had the top ranking Y-value results for subtests Y1-Y4, and would have an overall platform placement score of “4” (assuming, for simplicity, that there were no other Y-values). Similarly, the DEF system would have an overall platform placement score of “8” because it finished in second index position in each of the four unit subtests. In some embodiments, Y-values for an identical test between two or more platforms may be considered of equal performance if they do not differ by more than a pre-specified margin of error. This optional margin of error value can be specified by the user at a global level (wherein it would apply to all test comparisons), or on a per test basis.
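For illustration, the placement scoring at 406, including the optional margin of error for ties, might resemble the following sketch. The Y-values are invented, and the margin of error is expressed here as a fraction of the compared result, which is an assumption, since the description does not specify how the margin is expressed.

```python
# A minimal sketch of the placement scoring described for step 406: within each
# subtest, a platform's score is its finishing position (1 for first, 2 for second,
# and so on), and the positions are summed across the subtests of a unit test.
# Results within a user-specified margin of error are treated as tied.

sorted_subtests = {  # output of the sorting step: best-performing platform first
    "Y1": [("ABC", 5200.0), ("DEF", 4700.0)],
    "Y2": [("ABC", 5150.0), ("DEF", 4690.0)],
    "Y3": [("ABC", 4980.0), ("DEF", 4500.0)],
    "Y4": [("ABC", 5010.0), ("DEF", 4550.0)],
}

def placement_scores(subtests, margin_of_error=0.0):
    """Sum each platform's finishing positions across the subtests of one unit test."""
    totals = {}
    for entries in subtests.values():
        position = 1
        previous_value = None
        for index, (system, value) in enumerate(entries):
            # Treat results within the margin of error as tied with the prior result.
            if previous_value is None or abs(previous_value - value) > margin_of_error * abs(previous_value):
                position = index + 1
            totals[system] = totals.get(system, 0) + position
            previous_value = value
    return totals

print(placement_scores(sorted_subtests))
# With one platform first in all four subtests, the totals are {'ABC': 4, 'DEF': 8},
# as in the example platform placement scores discussed above.
```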
Processing continues at 408 where, in some embodiments, the data structure from 406 is transformed into a results data structure representing the scores for each benchmark unit test, by platform. For example, the results data structure for the illustrative example may generally appear as shown below in TABLE 2:
The results data structure is generated, for example, by taking the post-sorted subunit test results generated in 406, and creating a data structure containing all of the results.
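For illustration, such a results data structure might resemble the following sketch; the scores shown are invented and are not the values of TABLE 2.

```python
# Hypothetical results data structure: platform placement score totals (from step 406)
# collected by benchmark unit test and by platform. Values are invented for illustration.
results_by_test = {
    "STREAMS": {"ABC": 4, "DEF": 8},
    "ACE_TCP": {"ABC": 7, "DEF": 5},
    "PB": {"ABC": 6, "DEF": 6},
}

for test, scores in results_by_test.items():
    print(test, scores)
```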
Processing continues at 410 where the platform placement scores are normalized within each benchmark unit test. For example, this may be performed to assign a familiar (or human-readable) placement score, such as “1” for “first place”, “2” for “second place”, etc. A “tie” may be signified by two equal normalized results. These normalized scores may also be stored in a data structure for later analysis and viewing. An illustrative data structure is shown below in TABLE 3:
Processing continues at 412 where the matching platform operates on the data generated above to arrive at a “best fit” or best match platform ranking. For example, the matching platform may multiply the value that represents the percentage of relevance (the weighting) of each service in the taxonomy by the normalized placement scores (generated at 410). These may be stored in a further data structure. For example, continuing the illustrative example, at the beginning of the matching process the technician determined that the application profile of the ABC application and the relative weightings of its services were as follows:
Pursuant to some embodiments, different systems can be readily compared even when a variety of different benchmark tests are performed which have different meanings (e.g., a value of a Floating Point test is expressed in MFlops, and a value of a Network test may be expressed in terms of throughput or latency). The result is a system that allows different computing platforms to be compared in a meaningful way to identify the platform (or platforms) that are best suited for use in conjunction with a particular software application. Further, pursuant to some embodiments, the comparison of the platforms can be performed using an automated, or partially automated, matching platform, allowing rapid and accurate comparisons.
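For purposes of illustration only, the normalization at 410 and the weighting at 412 can be combined into a short sketch. Only the 50% memory intensive and 25% integer intensive weightings come from the example in this description; the remaining weighting, the service-to-test pairing, and the placement totals are assumptions.

```python
# Normalization (step 410) and weighted ranking (step 412) in one short sketch.
# Only the 50% memory intensive and 25% integer intensive weightings come from the
# example; all other weightings, the service-to-test pairing, and the placement
# totals below are assumptions for illustration.

placement_totals = {  # per unit test (keyed here by the service it is assumed to exercise)
    "memory_intensive": {"ABC": 4, "DEF": 8},
    "integer_intensive": {"ABC": 7, "DEF": 5},
    "networking": {"ABC": 6, "DEF": 6},
}

weights = {  # application resource usage profile (canonical form weightings)
    "memory_intensive": 0.50,   # from the example in the text
    "integer_intensive": 0.25,  # from the example in the text
    "networking": 0.25,         # assumed
}

def normalize(scores):
    """Convert raw placement totals into 1 for first place, 2 for second, ties equal."""
    ranking = sorted(set(scores.values()))
    return {system: ranking.index(total) + 1 for system, total in scores.items()}

weighted = {}
for service, scores in placement_totals.items():
    for system, place in normalize(scores).items():
        weighted[system] = weighted.get(system, 0.0) + weights[service] * place

best_fit = min(weighted, key=weighted.get)  # lowest weighted placement total wins
print(weighted, "best fit:", best_fit)
```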
The following illustrates various additional embodiments of the present invention. These do not constitute a definition of all possible embodiments, and those skilled in the art will understand that the present invention is applicable to many other embodiments. Further, although the following embodiments are briefly described for clarity, those skilled in the art will understand how to make any changes, if necessary, to the above-described apparatus and methods to accommodate these and other embodiments and applications.
Pursuant to some embodiments, a matching platform (such as the platform 102 of
For example, referring to
Users and administrators may interact with matching platform 102 of
A second portion of the illustrative user interface includes a region for selecting one or more target platforms to analyze the suitability of a match between the selected application and the selected target platforms. Details of each of the selected target platforms may be shown, and a user with sufficient permissions may be able to upload an XML file (or other suitable file format) having details of a new or updated target platform.
A third portion of the illustrative user interface includes a region for viewing data showing how well each of the selected target platforms fit the selected application. A number of options may be selected or deselected by a user to view various charts or descriptions relating to the match of the selected platforms to the application. In this way, a user operating a client device may efficiently and easily interact with matching platform 102 to view, analyze, update and edit information associated with an application and one or more platforms. The result is an ability to easily select the platform or platforms that are best suited to a given application.
The present invention has been described in terms of several embodiments solely for the purpose of illustration. Persons skilled in the art will recognize from this description that the invention is not limited to the embodiments described, but may be practiced with modifications and alterations limited only by the spirit and scope of the appended claims. For example, while some embodiments have been described in which an application's resource utilization profile is created by a technician who has in-depth knowledge of an application's behavior, other embodiments create a resource utilization profile using a model derived from measurements obtained via performance analysis tools. Further, while an application's resource utilization profile has been described in terms of technical characteristics (such as whether/how much an application is disk intensive, memory intensive, network intensive, etc.), those skilled in the art will now appreciate that an application's profile may be described in terms of other characteristics such as system redundancy features, operational costs, etc.
Further, while the use of XML schemas to represent both an application's resource utilization profile and a platform's description and test results has been described, those skilled in the art will recognize that other schemas may be used.