Software component recommendation based on multiple trace runs

Information

  • Patent Grant
  • Patent Number
    10,346,292
  • Date Filed
    Thursday, March 27, 2014
  • Date Issued
    Tuesday, July 9, 2019
Abstract
Recommendations may be generated while calculating performance metrics from multiple uses of a software component. A tracing service may collect trace data from multiple uses of a software component, where each use may be done under different conditions. The performance metric analysis may identify various factors that may affect the performance of a software component, then present those factors to a user through different delivery mechanisms. In one such mechanism, a recommended set of hardware and software configurations may be generated as part of an operational analysis of a software component.
Description
BACKGROUND

Many computer programming languages have a vast trove of reusable software components, many of which may be open source. These components can range in quality from very poor to excellent, with an equal range of performance characteristics. In many languages, there may be hundreds of thousands or even millions of different components. This poses a difficult issue for a developer: how does one select a component from a vast library?


SUMMARY

Recommendations may be generated while calculating performance metrics from multiple uses of a software component. A tracing service may collect trace data from multiple uses of a software component, where each use may be done under different conditions. The performance metric analysis may identify various factors that may affect the performance of a software component, then present those factors to a user through different delivery mechanisms. In one such mechanism, a recommended set of hardware and software configurations may be generated as part of an operational analysis of a software component.


A recommendation system may identify compatible and incompatible software components, as well as other recommendations, by analyzing a graph of module usage across multiple applications that may use various modules. The graph may identify a module relationship that may be classified as a ‘hard’ relationship defined by being called or incorporated in another module, as well as ‘soft’ relationships that may be identified by being incorporated into an application with another module. The graph may further identify potentially mutually exclusive modules that may be identified when a module is removed and replaced with a second module. The graph may be used to recommend related modules or sets of modules for a given use case, among other uses.


A usage recommendation system may suggest hardware and software configurations as well as other compatible or useful modules based on information provided by a user. While architecting a software application or browsing modules, a user may be presented with modules that may be compatible in terms of their performance on similar hardware platforms or under similar loads, as well as modules related through a graph of module relationships that may be gathered by analyzing many different uses of various modules.


This Summary is provided to introduce a selection of concepts in a simplified form that are further described below in the Detailed Description. This Summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used to limit the scope of the claimed subject matter.





BRIEF DESCRIPTION OF THE DRAWINGS

In the drawings,



FIG. 1 is a diagram illustration of an embodiment showing a system for software component recommendations.



FIG. 2 is a diagram illustration of an embodiment showing a network environment with devices that may generate component recommendations.



FIG. 3 is a diagram illustration of an embodiment showing an example component graph with relationships.



FIG. 4 is a diagram illustration of an embodiment showing recommendations with both relationships and performance data.



FIG. 5 is a flowchart illustration of an embodiment showing a method for parametric analysis of trace data.



FIG. 6 is a diagram illustration of an embodiment showing data sources for constructing a component graph.



FIG. 7 is a flowchart illustration of an embodiment showing a method for building a component graph from an application repository.



FIG. 8 is a flowchart illustration of an embodiment showing a method for building a component graph from a component repository.



FIG. 9 is a flowchart illustration of an embodiment showing a method for building a component graph from tracer data.



FIG. 10 is a flowchart illustration of an embodiment showing a method for identifying mutually exclusive relationships.



FIG. 11 is a flowchart illustration of an embodiment showing a method for generating suggestions.



FIG. 12 is a flowchart illustration of an embodiment showing a method for analyzing an existing application prior to suggestion analysis.





DETAILED DESCRIPTION

Module Recommendation System Based on Multiple Trace Runs


A module recommendation system may analyze trace runs from multiple uses of a software component. The analysis may identify factors under which the software component performs well or poorly, and these factors may be used in a recommendation system for software components. The factors may include hardware and software configurations, input parameters, general usage parameters, and other factors.


The factors may be generated by comparing different trace datasets to each other and determining the dominant factors that help define the differences between the trace datasets. The dominant factors may help identify the conditions that may be favorable or unfavorable for the operation of various software components. These conditions may be used in several different manners to recommend software components and conditions for executing software components.


The trace datasets may be any type of data that may be collected while an application executes. In many cases, the trace datasets may be time series sequences of trace data that may include performance and operational information about the application. Such sequences may represent how an application and its various components may perform during execution and under load. In many cases, the trace datasets may include information about the load experienced by the application, and in some cases, load or other information may be inferred from analysis of the trace data or other data.


The factors contributing to a software component's favorable or unfavorable operation may be presented to a user as part of a software component statistics user interface. A software component statistics listing may identify which factors are dominant in fully utilizing the software component, as well as the factors to avoid when deploying the software component. Such information may be helpful for a developer who may be searching for a software component to perform a certain function.


The factors may be implemented as a predictive model for a component's behavior. Such a predictive model may include the dominant factors that may affect a performance or other metric for the component. In a simple example, a predictive model may estimate a component's response time for handling a request given a set of hardware, software, and usage parameters that the component may experience.
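

To make the idea concrete, the following is a minimal sketch of such a predictive model, assuming hypothetical factor names (processor count, memory, request rate) and invented sample data; an ordinary least-squares fit stands in for whatever modeling technique an embodiment might use.

    # Minimal sketch of a predictive model for a component's response time.
    # The factor names and sample data are hypothetical; any regression
    # technique could replace the least-squares fit shown here.
    import numpy as np

    # Each row: [processor_count, memory_gb, requests_per_sec], one per trace run.
    factors = np.array([
        [1, 4.0, 100.0],
        [4, 8.0, 500.0],
        [8, 16.0, 1000.0],
        [2, 4.0, 250.0],
    ])
    response_ms = np.array([12.0, 30.0, 55.0, 18.0])  # observed metric per run

    # Fit response time as a linear function of the factors plus an intercept.
    X = np.column_stack([factors, np.ones(len(factors))])
    coef, *_ = np.linalg.lstsq(X, response_ms, rcond=None)

    def predict_response_ms(processors, memory_gb, req_per_sec):
        """Estimate response time for a proposed deployment configuration."""
        return float(coef @ np.array([processors, memory_gb, req_per_sec, 1.0]))

    print(predict_response_ms(4, 8.0, 750.0))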


Relationship Graph for Software Component Recommendations


A relationship graph for software components may identify different types of relationships between reusable software components. A ‘hard’ relationship may exist where one component calls or includes a second component, while a ‘soft’ relationship may exist where a developer uses two components in the same application. In some cases, a mutually exclusive relationship may be identified when one component is replaced by another in new revisions of an application.


The relationship graph may be created from many different data sources. In some cases, analyses may be performed from data in a repository containing many different applications or components. By analyzing applications, relationships between commonly used components may be identified. In some cases, an analysis of different versions of applications may identify situations where one component may be removed and another one added, thereby indicating a possible mutually exclusive relationship.


The relationship graph may be gathered in part from analyzing tracer data from multiple applications. The tracer data may include performance and operational data for components used within an application, and both ‘hard’ and ‘soft’ relationships may be identified. In some cases, a relationship graph may be generated from multiple sources, including data from multiple repositories as well as tracer data gathered by tracing multiple applications.


The relationships between modules may be used in many different manners. In one example, a component statistics display may include links to other components for which various relationships are known.


Component Usage Recommendation System with Relationship and Performance Matching


A component usage recommendation system may use both performance matching and component relationships to recommend various components or identify components for replacement. For various components, a set of influencing factors may be identified that increase or decrease a component's effectiveness when executed. Further, relationships between components may be identified through a relationship graph. The influencing factors and relationships may be used in several different scenarios to evaluate components and assist users.


In one use scenario, an analysis may be performed of an application in its intended execution environment and anticipated execution conditions. The analysis may result in a suitability rating or other metric in some cases. Some systems may identify certain components that may be unsuitable for a specific execution environment or conditions, and may further recommend different components for the application.


In another use scenario, a user may define a set of deployment conditions, including hardware, software, loads, and other parameters. From the given conditions, components may be searched, sorted, ranked, or otherwise recommended that may match the intended deployment conditions.


Throughout this specification and claims, the term “component” is used to define a group of reusable code that may be incorporated into an application. A component may be known as a ‘module’, ‘library’, ‘subroutine’, or by some other name. For the purposes of this specification and claims, these terms are considered synonymous.


The “component” may be code that is arranged in a way that multiple applications may access the code, even though the applications may have no connection with each other. In general, a “component” may be code that is configured to be reused. In some cases, a component may be reused within the scope of a large application, while in other cases, the component may be shared with other application developers who may use the component in disparate and unconnected applications.


Many programming languages and paradigms have a notion of a “component” or library, where the component may have a defined interface through which an application may invoke and use the component. Some paradigms may allow a programmer to incorporate a component in a static manner, such that the component code does not further change after the application is written and deployed. Some paradigms may allow for dynamic libraries, which may be loaded and invoked at runtime or even after execution has begun. The dynamic libraries may be updated and changed after the application may have been distributed, yet the manner of invoking the libraries or components may remain the same.


Components may be distributed in source code, intermediate code, executable code, or in some other form. In some cases, components may be services that may be invoked through an application programming interface.


Throughout this specification and claims, the term “component” may be applied to a single reusable function. Such a function may be distributed as part of a library, module, or other set of code, and may reflect the smallest element of reusable code that may be distributed. A single “component” as referenced in this specification and claims may be an individual application programming interface call or callable subroutine or function, as well as a module, library, or other aggregation of multiple callable functions, application programming interface calls, or other smaller elements.


Throughout this specification and claims, the terms “profiler”, “tracer”, and “instrumentation” are used interchangeably. These terms refer to any mechanism that may collect data when an application is executed. In a classic definition, “instrumentation” may refer to stubs, hooks, or other data collection mechanisms that may be inserted into executable code and thereby change the executable code, whereas “profiler” or “tracer” may classically refer to data collection mechanisms that may not change the executable code. The use of any of these terms and their derivatives may implicate or imply the other. For example, data collection using a “tracer” may be performed using non-contact data collection in the classic sense of a “tracer” as well as data collection using the classic definition of “instrumentation” where the executable code may be changed. Similarly, data collected through “instrumentation” may include data collection using non-contact data collection mechanisms.


Further, data collected through “profiling”, “tracing”, and “instrumentation” may include any type of data that may be collected, including performance related data such as processing times, throughput, performance counters, and the like. The collected data may include function names, parameters passed, memory object names and contents, messages passed, message contents, registry settings, register contents, error flags, interrupts, or any other parameter or other collectable data regarding an application being traced. The collected data may also include cache misses, garbage collection operations, memory allocation calls, page misses, and other parameters.
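

In the classic non-contact sense, a tracer can be as simple as a wrapper that records timing and call data without altering the wrapped code's logic. The sketch below, in Python, is one minimal illustration and not the tracer of any particular embodiment; a production tracer would also stream its records to an intake service and capture the richer data described above.

    # Minimal sketch of a tracer that records per-call timing and parameters.
    import functools
    import time

    trace_records = []  # a real tracer would stream these to an intake engine

    def traced(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            start = time.perf_counter()
            result = func(*args, **kwargs)
            trace_records.append({
                "function": func.__name__,
                "args_count": len(args) + len(kwargs),
                "elapsed_s": time.perf_counter() - start,
            })
            return result
        return wrapper

    @traced
    def handle_request(payload):
        return payload.upper()

    handle_request("hello")
    print(trace_records)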


Throughout this specification and claims, the term “execution environment” may be used to refer to any type of supporting software used to execute an application. An example of an execution environment is an operating system. In some illustrations, an “execution environment” may be shown separately from an operating system. This may be to illustrate a virtual machine, such as a process virtual machine, that provides various support functions for an application. In other embodiments, a virtual machine may be a system virtual machine that may include its own internal operating system and may simulate an entire computer system. Throughout this specification and claims, the term “execution environment” includes operating systems and other systems that may or may not have readily identifiable “virtual machines” or other supporting software.


Throughout this specification and claims, the term “application” is used to refer to any combination of software and hardware products that may perform a desired function. In some cases, an application may be a single software program that operates with a hardware platform. Some applications may use multiple software components, each of which may be written in a different language or may execute within different hardware or software execution environments. In some cases, such applications may be dispersed across multiple devices and may use software and hardware components that may be connected by a network or other communications system.


Throughout this specification, like reference numbers signify the same elements throughout the description of the figures.


In the specification and claims, references to “a processor” include multiple processors. In some cases, a process that may be performed by “a processor” may be actually performed by multiple processors on the same device or on different devices. For the purposes of this specification and claims, any reference to “a processor” shall include multiple processors, which may be on the same device or different devices, unless expressly specified otherwise.


When elements are referred to as being “connected” or “coupled,” the elements can be directly connected or coupled together or one or more intervening elements may also be present. In contrast, when elements are referred to as being “directly connected” or “directly coupled,” there are no intervening elements present.


The subject matter may be embodied as devices, systems, methods, and/or computer program products. Accordingly, some or all of the subject matter may be embodied in hardware and/or in software (including firmware, resident software, micro-code, state machines, gate arrays, etc.). Furthermore, the subject matter may take the form of a computer program product on a computer-usable or computer-readable storage medium having computer-usable or computer-readable program code embodied in the medium for use by or in connection with an instruction execution system. In the context of this document, a computer-usable or computer-readable medium may be any medium that can contain, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device.


The computer-usable or computer-readable medium may be, for example but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, device, or propagation medium. By way of example, and not limitation, computer readable media may comprise computer storage media and communication media.


Computer storage media includes volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer readable instructions, data structures, program modules or other data. Computer storage media includes, but is not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile disks (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store the desired information and which can be accessed by an instruction execution system. Note that the computer-usable or computer-readable medium could be paper or another suitable medium upon which the program is printed, as the program can be electronically captured, via, for instance, optical scanning of the paper or other medium, then compiled, interpreted, or otherwise processed in a suitable manner, if necessary, and then stored in a computer memory.


When the subject matter is embodied in the general context of computer-executable instructions, the embodiment may comprise program modules, executed by one or more systems, computers, or other devices. Generally, program modules include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types. Typically, the functionality of the program modules may be combined or distributed as desired in various embodiments.



FIG. 1 is an illustration of an embodiment 100 showing a system for providing recommendations for software components. The recommendations may use performance data and/or a component relationship graph to identify and suggest different components that may meet an anticipated usage and architecture for an application.


The recommendation system may use data that may be collected from multiple instances of a software component. The software component may be a module, library, subroutine, or other component that may be used in many instances of the same application or in many different applications. When data from multiple instances are analyzed, those factors that cause the component to behave in certain ways may be identified. The causal factors may be very useful to developers when selecting components that may operate optimally in the anticipated deployment conditions.


In the example of embodiment 100, a software component may be called within three different applications and executed under three different conditions. Different hardware components may be used, as well as different software platforms. The software platforms may include operating systems, execution environments, drivers, other applications, or any other software variable.


Tracer data may be collected when executing the application under the various conditions. A comparative analysis of the different tracer datasets may reveal which environmental and deployment factors are the dominant factors in affecting performance or other desired metrics. In a simple example, such an analysis may reveal that a certain component may operate very effectively when deployed on a single processor, but performance suffers when deployed on many processors. In another example, a component may be determined to operate optimally under certain types of loads but not under other loads.


A graph of relationships between components may be generated from tracer data as well as other sources. The graph may identify components that have express, implied, mutually exclusive, and other types of relationships.


An express relationship may be identified when one component calls another component. In such a situation, the first component includes the second component. While the second component may be used separately from the first, the first cannot be used without the second.


An implied relationship may be identified when two components may be used in the same application but without calling each other. Such a situation may occur when an application developer selects both components and uses both in the same application. An implied relationship may indicate that two components are complementary to each other. When making recommendations to a developer, an implied relationship may help identify components that the developer may be likely to consider when building an application.


A mutually exclusive relationship may indicate that one component may replace another. Such components may rarely be used in the same application, and may be identified when an application developer removes one component and replaces the component with another component. Such a situation may be observed by analyzing different versions of an application, tracking when a component is removed and when another component is added. While such an analysis may not be conclusive that a mutually exclusive relationship exists, such an analysis may be one indicator that such a relationship may be present.


A mutually exclusive relationship between components may be useful to recommend components that may be candidates to replace a current set of components in an application. A recommendation system may use a mutually exclusive relationship to suggest changes to an application. When coupled with performance data analyses, such a recommendation may have performance or other data to support such a change.


The devices 102, 104, and 106 illustrate three different deployments of a software component. The devices may operate on three different hardware platforms 108, 110, and 112 and may have three different software platforms 114, 116, and 118, respectively. The hardware platforms may have different processor speeds, numbers of processors, memory, storage, network interfaces, peripherals, or other parameters. Similarly, the software platforms may have different operating systems, execution environments, drivers, applications, or other software variations.


The applications 120, 122, and 124 may be different applications in some cases. The applications may be different versions of the same application, or completely different applications that may have different architectures, input streams, and different functions.


The components 126, 128, and 130 may be instances of the component of interest in the example of embodiment 100, meaning that the components 126, 128, and 130 may be analyzed to determine the differentiating factors that affect the performance or other output of the component.


The different applications may use the components 126, 128, and 130 in different manners. Some of the applications may exercise some functions of the component while other applications may exercise other functions. Each of the various applications may have different input streams that may be processed by the components, and may exercise the components under different loads.


Additional components 132, 134, and 136 may be present in the applications 120, 122, and 124, respectively. The additional components may be selected by a developer to perform additional functions within the various applications, and the presence of these additional components may be used to establish relationships between various components.


Each of the components 126, 128, and 130 may have a tracer 138, 140, and 142, respectively, that may gather performance and other data, then transmit tracer data to an intake engine 144. In the example of embodiment 100, the tracers 138, 140, and 142 are shown as connecting to the components 126, 128, and 130, respectively. Such an illustration may show that the tracer may monitor only the component to which it is attached. Other embodiments may have a tracer that may gather trace data for an entire application or for multiple components within an application.


The intake engine 144 may receive tracer data from various devices. The tracer data may be stored in a tracer database 146, which may store tracer data from many different applications and software components. An analysis engine 148 may process the trace datasets to determine which of the many factors are dominant in affecting the performance or other metric for a given component or application.


The trace data received by the intake engine 144 may also be processed by a graph engine 152 to create a component relationship graph 154. The component relationship graph 154 may contain express and implied relationships between various components. Such relationships may be generated from trace data as well as from other sources, such as various repositories.


A query engine 150 may receive requests containing input parameters 156 and return results 158. In one example of a query, a request may contain input parameters 156 that may define an anticipated execution scenario for an application, including usage and architecture information. These parameters may be used by the query engine 150 to generate a list of software components with performance data as options for a developer to consider.



FIG. 2 is a diagram of an embodiment 200 showing components that may collect data when an application executes and analyze the data to identify recommendations or other uses. The example of embodiment 200 may illustrate one architecture where tracer data may be collected from multiple devices, then analyzed in a tracer database. A component graph may be generated from relationships identified from the tracer data or other sources.


The diagram of FIG. 2 illustrates functional components of a system. In some cases, the component may be a hardware component, a software component, or a combination of hardware and software. Some of the components may be application level software, while other components may be execution environment level components. In some cases, the connection of one component to another may be a close connection where two or more components are operating on a single hardware platform. In other cases, the connections may be made over network connections spanning long distances. Each embodiment may use different hardware, software, and interconnection architectures to achieve the functions described.


Embodiment 200 illustrates a device 202 that may have a hardware platform 204 and various software components. The device 202 as illustrated represents a conventional computing device, although other embodiments may have different configurations, architectures, or components.


In many embodiments, the device 202 may be a server computer. In some embodiments, the device 202 may be a desktop computer, laptop computer, netbook computer, tablet or slate computer, wireless handset, cellular telephone, game console, or any other type of computing device. In some embodiments, the device 202 may be implemented on a cluster of computing devices, which may be a group of physical or virtual machines.


The hardware platform 204 may include a processor 208, random access memory 210, and nonvolatile storage 212. The hardware platform 204 may also include a user interface 214 and network interface 216.


The random access memory 210 may be storage that contains data objects and executable code that can be quickly accessed by the processors 208. In many embodiments, the random access memory 210 may have a high-speed bus connecting the memory 210 to the processors 208.


The nonvolatile storage 212 may be storage that persists after the device 202 is shut down. The nonvolatile storage 212 may be any type of storage device, including hard disk, solid state memory devices, magnetic tape, optical storage, or other type of storage. The nonvolatile storage 212 may be read only or read/write capable. In some embodiments, the nonvolatile storage 212 may be cloud based, network storage, or other storage that may be accessed over a network connection.


The user interface 214 may be any type of hardware capable of displaying output and receiving input from a user. In many cases, the output display may be a graphical display monitor, although output devices may include lights and other visual output, audio output, kinetic actuator output, as well as other output devices. Conventional input devices may include keyboards and pointing devices such as a mouse, stylus, trackball, or other pointing device. Other input devices may include various sensors, including biometric input devices, audio and video input devices, and other sensors.


The network interface 216 may be any type of connection to another computer. In many embodiments, the network interface 216 may be a wired Ethernet connection. Other embodiments may include wired or wireless connections over various communication protocols.


The software components 206 may include an operating system 218 on which various software components and services may operate. An intake engine 220 may receive tracer data from tracers on other devices and may store the tracer data into a tracer database 222. An analysis engine 224 may identify various differentiating factors that may affect the performance of the software components that were traced.


A graph engine 226 may identify relationships between software components and build a component graph 228. The graph engine 226 may use trace data, as well as data from other sources, including component repositories, application repositories, and other sources to identify relationships between the components.


A query engine 230 may respond to requests that may use either or both of the tracer database 222 and the component graph 228 to generate results to a query.


A network 232 may connect the various devices that may interact in embodiment 200.


Deployment systems 234 may execute applications and gather tracer data while the applications execute. In many cases, the deployment systems 234 may be production systems on which an application may execute. The deployment systems 234 may operate on various hardware platforms 236, which may be similar to those described for the hardware platform 204.


An operating system 238 or execution environment 240 may execute an application 242. The application 242 may contain various software components 244, and various other applications 246 may also execute on the deployment systems 234. A tracer 248 may operate within the execution environment 240. In some cases, a tracer 250 may execute within the operating system 238.


A development system 252 may illustrate a device on which a developer may create and edit an application's source code 258. The development system 252 may operate on a hardware platform 254, which may be similar to those described for the hardware platform 204.


An integrated development environment 256 may be an application or suite of applications that includes various tools used by a developer, such as an editor 260 and a compiler 262.


An analyzer 264 may analyze the application's source code 258 to generate a query for the query engine 230. The query may define characteristics of an application under development, and the query engine 230 may return information that may be displayed in a recommendation window 266. Such information may include performance data for components in the source code 258, as well as alternate components that may be considered for the application under development.


A repository system 268 may be a system that contains repositories 272 for source code 274. The repositories 272 may contain application code, component code, or other software. The repository system 268 may execute on a hardware platform 254, which may be similar to those described for the hardware platform 204.


The repositories may be analyzed by a graph engine 226 to build the component graph 228. The repositories may indicate implied relationships where two components may frequently be used together, express relationships where one component calls another, and mutually exclusive relationships where components may be exchanged for each other.


A client device 276 may be one mechanism for displaying query results from the query engine 230. The client device 276 may have a hardware platform 278, which may be similar to those described for hardware platform 204. A browser 280 may execute on the client device 276 and display a user interface 282. The user interface 282 may be a web page or other interface through which some of the query results may be displayed.



FIG. 3 is a diagram illustration of an example embodiment 300 showing a software component relationship graph. Embodiment 300 is a simple graph that illustrates components 302, 304, 306, and 308 along with various relationships.


The components may represent reusable software components that may be deployed on various applications. The components may have been discovered through tracer data, source code analysis, repository analysis, or other mechanisms, examples of which may be found later in this specification.


Components 302 and 304 are illustrated with an express relationship 310. The express relationship 310 may be directional, indicating that component 302 may be included or called from component 304. Such a relationship may be a hardcoded relationship, where the source code of component 304 may have called component 302.


Components 306 and 308 are illustrated with a mutually exclusive relationship 314. In a mutually exclusive relationship, two components may often be used in place of each other and rarely used together. Such relationships may be identified by analyzing changes made to an application over many versions. When one component is removed and another component added, such a situation may indicate a mutually exclusive relationship.


Components 302, 304, 306, and 308 may be joined by implied relationships 312. Implied relationships may be identified when two components may be used in the same application. Such relationships may indicate that two components are compatible with and complementary to each other.


A graph such as embodiment 300 may be used to recommend components. For example, an application may contain component 306, which may have implied relationships to components 302 and 304. During an analysis, components 302 and 304 may be recommended to a developer, as components 302 and 304 are commonly used with component 306. Additionally, component 308 may be recommended as a replacement to component 306 due to the mutually exclusive relationship.
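

A graph such as embodiment 300 can be encoded with typed, optionally directional edges. The sketch below is one possible representation, not the structure of any particular embodiment; the component names mirror the reference numbers in the figure.

    # Sketch of a component relationship graph with typed edges.
    from collections import defaultdict

    EXPRESS, IMPLIED, MUTUALLY_EXCLUSIVE = "express", "implied", "mutually_exclusive"

    graph = defaultdict(list)  # component -> list of (related component, edge type)

    def add_relationship(a, b, kind, directional=False):
        graph[a].append((b, kind))
        if not directional:
            graph[b].append((a, kind))

    add_relationship("304", "302", EXPRESS, directional=True)  # 304 calls 302
    add_relationship("302", "306", IMPLIED)
    add_relationship("304", "306", IMPLIED)
    add_relationship("306", "308", MUTUALLY_EXCLUSIVE)

    # Recommend complements of component 306 and candidate replacements for it.
    complements = [c for c, kind in graph["306"] if kind == IMPLIED]
    replacements = [c for c, kind in graph["306"] if kind == MUTUALLY_EXCLUSIVE]
    print(complements, replacements)  # ['302', '304'] ['308']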



FIG. 4 is a diagram illustration of an embodiment 400 showing recommendations with both relationship and performance data.


Embodiment 400 illustrates a request 402 that may include design parameters 404 and possible components 406 for an application. The request 402 may be processed by a query engine 408 to generate some results 414. In another scenario, a request 416 may be generated from an existing application and trace data.


In both types of requests, the query engine 408 may receive information regarding the operational characteristics and deployment architecture. The operational characteristics may be a description of how a component may be used. Such a description may include the load, frequency of requests, input parameters, and other descriptions of intended use. The deployment architecture may define the hardware and software platforms on which the component may execute. Such descriptors may include processor speed, number of processors, memory, storage capacity, storage and network bandwidth, throughput and latency, and other parameters.


The possible components 406 may be a preliminary architecture for an application. Such information may be a starting point for traversing a component graph and providing architecture recommendations. In one use case, the possible components 406 may be components that may represent intended functionality of an application. In such a use case, the results 414 may be a set of components that may match the deployment architecture and intended operational characteristics. Such a use case may be helpful to identify software components at the beginning of a project that may be optimally suited for an intended deployment.


An existing application request 418 may analyze an application that may be in some state of deployment. In some cases, the application may be in development and executing on test or development hardware, while in other cases, the application may have been deployed on production hardware and executed under production loads. Such an application 418 may include several components 420.


A set of trace data 422 may be included in the request 416. The trace data 422 may be analyzed by the analyzer 424 to extract actual operational characteristics and deployment architecture information. Such an analysis may be useful when the trace data 422 may be gathered in a production environment. In cases where the trace data 422 may not accurately reflect an intended production environment and usage, a user may manually select such parameters.


The query engine 408 may analyze a request to generate results 414 that may include a list of suggested components and various performance metrics for the components. A component graph 412 may be queried to identify comparable or related components to those identified in a request. The list of components may be analyzed against a trace database 410 to determine performance and other parameters. Once the performance is known, the components may be ranked or sorted. Recommendations may be made by comparing a baseline set of components in the request to other components that may be identified from the component graph 412.



FIG. 5 is a flowchart illustration of an embodiment 500 showing a method for parametric analysis of trace data. Embodiment 500 illustrates a simplified method for extracting differentiating factors from multiple trace datasets.


Other embodiments may use different sequencing, additional or fewer steps, and different nomenclature or terminology to accomplish similar functions. In some embodiments, various operations or sets of operations may be performed in parallel with other operations, either in a synchronous or asynchronous manner. The steps selected here were chosen to illustrate some principles of operation in a simplified form.


Trace datasets may be received in block 502. An element to be analyzed may be identified in block 504. The element to be analyzed may be a software component, for example.


All datasets containing the element may be retrieved in block 506. For each dataset in block 508, a vector may be created containing any available metadata elements in block 510. After creating vectors for each dataset, multivariate analysis may be performed in block 512 to determine the differentiating factors, which may be stored in block 514.


The differentiating factors may be those factors having the largest effect on performance or other metric. These factors may indicate the conditions under which a given component may operate well and the conditions under which the same component may operate poorly. Such factors may be useful when comparing similar components. For example, when suggesting or recommending components for a given set of execution conditions, a sort may be performed on the differentiating factors to identify components that may operate well under the selected conditions.
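

One simple realization of blocks 510 through 514 is to treat each metadata element as a candidate factor and score how strongly grouping the datasets by that factor separates the metric values. The sketch below uses a between-group variance score as a stand-in for a fuller multivariate analysis; the factor names and data are hypothetical.

    # Sketch of ranking differentiating factors across trace dataset vectors.
    # Each dataset is reduced to its metadata plus one performance metric.
    from collections import defaultdict
    from statistics import mean, pvariance

    datasets = [
        {"processors": 1, "os": "linux", "metric": 12.0},
        {"processors": 8, "os": "linux", "metric": 55.0},
        {"processors": 1, "os": "windows", "metric": 14.0},
        {"processors": 8, "os": "windows", "metric": 60.0},
    ]

    def factor_score(factor):
        """Variance of group means when datasets are grouped by this factor."""
        groups = defaultdict(list)
        for d in datasets:
            groups[d[factor]].append(d["metric"])
        group_means = [mean(values) for values in groups.values()]
        return pvariance(group_means) if len(group_means) > 1 else 0.0

    factors = [key for key in datasets[0] if key != "metric"]
    ranked = sorted(factors, key=factor_score, reverse=True)
    print(ranked)  # ['processors', 'os'] -- processor count dominates here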


The differentiating factors may be useful to developers who may be responsible for a selected component. The factors may indicate performance issues under certain conditions and give the developer some direction for improving a component.



FIG. 6 is a diagram illustration of an example embodiment 600 showing mechanisms for generating a component graph. Embodiment 600 may illustrate three different sources for data from which components and relationships between the components may be identified.


An application repository 604, component repository 606, and tracer database 608 may each have data from which components and their relationships may be extracted. In some cases, certain types of relationships may be found from one source while other sources may have other types of relationships.


An application repository 604 may contain application source code 614. The application source code may contain multiple versions of the application. In each version, different sets of components 616, 618, and 620 may be present. The presence of multiple components in a single application may indicate an implied relationship between the components. Additionally, a component that may be removed and replaced by a second component in a subsequent version of an application may indicate a mutually exclusive relationship.


A component repository 606 may contain component source code 622. In some cases, a component source code 622 may contain calls to other components 624 and 626. Such calls may indicate an express relationship between the components, as the first component may include or call the other components in a hard coded manner.


A tracer database 608 may include tracer data 628 that may be collected by monitoring applications. In many cases, the trace data may be collected from monitoring many different applications 630, many of which may include reusable software components 632 and 634. Implied and express relationships may sometimes be inferred from trace data, depending on how detailed the trace data may be. In cases where different versions of an application may be traced, mutually exclusive relationships may be inferred.


A graph engine 610 may take data from any of the various sources, such as the application repository 604, component repository 606, and trace database 608 to create the component graph 612. Examples of such processes may be found later in this specification.



FIG. 7 is a flowchart illustration of an embodiment 700 showing a method for building a component graph. Embodiment 700 may illustrate an example method performed by a graph engine when accessing an application repository to identify reusable software components and implied relationships between the components.


Other embodiments may use different sequencing, additional or fewer steps, and different nomenclature or terminology to accomplish similar functions. In some embodiments, various operations or sets of operations may be performed in parallel with other operations, either in a synchronous or asynchronous manner. The steps selected here were chosen to illustrate some principles of operation in a simplified form.


An application repository may be accessed in block 702. The repository may be a conventional source code repository with multiple versions of an application.


The applications to analyze may be identified in block 704. Each application may be analyzed in block 706. In many cases, the analysis of the application may be performed by static examination of source code. In other cases, the analysis may be performed by examining intermediate code, call traces, or other information.


If the application does not call multiple components in block 708, the process may return to block 706.


If the application does call multiple components in block 708, each component may be analyzed in block 710. If the component is not in a component graph in block 712, the component may be added in block 714.


After adding any new components in block 710, the components may be analyzed in block 716. For each component in block 716, each of the remaining components may be analyzed in block 718 and an implied relationship may be created in block 720. In some instances, an implied relationship may be a directional relationship, where the strength or type of the relationship from a first component to a second component may differ from that in the reverse direction.


After analyzing each application in block 706, the component graph may be stored in block 722.



FIG. 8 is a flowchart illustration of an embodiment 800 showing a method for building a component graph using data from a component repository. Embodiment 800 may illustrate an example method performed by a graph engine when accessing a component repository to identify reusable software components and express relationships between the components.


Other embodiments may use different sequencing, additional or fewer steps, and different nomenclature or terminology to accomplish similar functions. In some embodiments, various operations or sets of operations may be performed in parallel with other operations, either in a synchronous or asynchronous manner. The steps selected here were chosen to illustrate some principles of operation in a simplified form.


A component repository may be accessed in block 802. The repository may be a directory of various components, and may contain metadata, source code, or other information about reusable software components. In some cases, the component repository may serve as a directory to search for components, and the component source code may be located in a different repository.


The components may be analyzed in block 804. For each component, if the component is not in the graph in block 806, the component may be added to the graph in block 808.


The component may be analyzed in block 810 to determine if any components are called from the current component. Each of the called components may be processed in block 812. If the called component is not in the graph in block 814, it may be added in block 816.


An express relationship may be created in block 818.


After processing all of the called components in block 812, the process may return to block 804. After processing all of the components in block 804, the component graph may be stored in block 820.
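

For components distributed as Python source, one lightweight proxy for the 'calls or includes' test of blocks 810 through 818 is to scan each component's import statements. This is a sketch under the assumption that components map one-to-one to importable modules; other languages would need their own parsers, and the component names here are invented.

    # Sketch of extracting express (call/include) relationships from
    # component source code by scanning Python import statements.
    import ast

    def called_components(source_code):
        """Return the names of modules imported by the given source text."""
        called = set()
        for node in ast.walk(ast.parse(source_code)):
            if isinstance(node, ast.Import):
                called.update(alias.name for alias in node.names)
            elif isinstance(node, ast.ImportFrom) and node.module:
                called.add(node.module)
        return called

    component_sources = {"comp_a": "import comp_b\nfrom comp_c import helper\n"}
    express_edges = set()
    for name, source in component_sources.items():
        for callee in called_components(source):
            express_edges.add((name, callee))  # directional: name calls callee

    print(express_edges)  # {('comp_a', 'comp_b'), ('comp_a', 'comp_c')}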



FIG. 9 is a flowchart illustration of an embodiment 900 showing a method for building a component graph from trace data. Embodiment 900 may illustrate an example method performed by a graph engine when accessing a trace database to identify reusable software components and implied relationships between the components. Embodiment 900 illustrates the analysis of a single trace dataset. For a large database, embodiment 900 may be applied to each dataset in the database.


Other embodiments may use different sequencing, additional or fewer steps, and different nomenclature or terminology to accomplish similar functions. In some embodiments, various operations or sets of operations may be performed in parallel with other operations, either in a synchronous or asynchronous manner. The steps selected here were chosen to illustrate some principles of operation in a simplified form.


A trace dataset may be received in block 902. The trace dataset may represent tracing data collected by monitoring a single application. Each component within the dataset may be identified in block 904 and separately analyzed in block 906.


For each component in block 906, if the component is not in the graph in block 908, the component may be added in block 910.


For each component in block 912, all other components in the application may be identified in block 914. Those additional components may be individually processed in block 916.


If the components from blocks 912 and 916 have a predefined express relationship in block 918, the process may return to block 916 without changing the relationship status. In many embodiments, an express relationship may dominate any implied relationships, such that when an express relationship exists, any implied relationship may be discarded.


If the components from blocks 912 and 916 do not have a predefined implied relationship in block 920, the implied relationship may be created in block 922. The newly created or predefined implied relationship may be strengthened in block 924.


Many embodiments may include a strength factor for implied relationships. A strength factor may be raised when multiple observations of the same relationship are made.
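

The co-occurrence logic of blocks 912 through 924 might be sketched as follows, including the rule that an express relationship dominates and the strength counter that rises with repeated observations; the component names are hypothetical.

    # Sketch of strengthening implied relationships from one trace dataset.
    from itertools import combinations

    express_edges = {("comp_a", "comp_b")}  # known (caller, callee) pairs
    implied_strength = {}  # frozenset({a, b}) -> number of co-occurrences seen

    def record_trace_dataset(components_in_app):
        for a, b in combinations(sorted(components_in_app), 2):
            # An express relationship in either direction dominates.
            if (a, b) in express_edges or (b, a) in express_edges:
                continue
            key = frozenset((a, b))
            implied_strength[key] = implied_strength.get(key, 0) + 1

    record_trace_dataset({"comp_a", "comp_b", "comp_c"})
    record_trace_dataset({"comp_b", "comp_c"})
    print(implied_strength)  # comp_b/comp_c observed together twice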



FIG. 10 is a flowchart illustration of an embodiment 1000 showing a method for identifying mutually exclusive relationships. Embodiment 1000 may illustrate an example method performed by a graph engine when accessing an application repository to identify mutually exclusive relationships between the components.


Other embodiments may use different sequencing, additional or fewer steps, and different nomenclature or terminology to accomplish similar functions. In some embodiments, various operations or sets of operations may be performed in parallel with other operations, either in a synchronous or asynchronous manner. The steps selected here were chosen to illustrate some principles of operation in a simplified form.


Mutually exclusive relationships may be implied from analyzing different versions of an application. Such relationships may be identified when a developer replaces a component with another component from one version of an application to another. The relationships may be implied through observation, and may be significantly strengthened when human input confirms that the second component replaced the first.


An application repository may be accessed in block 1002. The applications to analyze may be identified in block 1004. Each application may be processed in block 1006.


For each application in block 1006, each version of the application may be processed in block 1008. For each version of the application in block 1008, a list of components in the version may be generated in block 1010. If there are no changes from the previous version in block 1012, the process may loop back to block 1008.


If changes to the list of components occurred in block 1012, each change may be processed in block 1014.


For each change in block 1014, an analysis may be made in block 1016 to determine if one component was removed and another component added. If such a determination is not true in block 1018, the process may return to block 1014.


If the determination is true in block 1018, an implied mutually exclusive relationship may exist. If such a relationship does not currently exist between components in block 1020, the relationship may be created in block 1022. The newly created or preexisting relationship may be strengthened in block 1024.


After processing all of the changes in block 1014 for each version in block 1008 of each application in block 1006, the component graph may be stored in block 1028.
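

The version-diffing idea of blocks 1010 through 1024 reduces to comparing the component lists of successive versions: a removal paired with an addition in the same revision strengthens a candidate mutually exclusive relationship. A minimal sketch with invented version data:

    # Sketch of inferring mutually exclusive relationships by diffing the
    # component lists of successive application versions.
    versions = [
        {"logger_x", "parser_a"},
        {"logger_x", "parser_b"},  # parser_a removed, parser_b added
    ]

    exclusive_strength = {}  # frozenset({removed, added}) -> observation count

    for prev, curr in zip(versions, versions[1:]):
        removed, added = prev - curr, curr - prev
        for r in removed:
            for a in added:
                key = frozenset((r, a))
                exclusive_strength[key] = exclusive_strength.get(key, 0) + 1

    print(exclusive_strength)  # {frozenset({'parser_a', 'parser_b'}): 1}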



FIG. 11 is a flowchart illustration of an embodiment 1100 showing a method for generating suggestions. Embodiment 1100 may illustrate an example method performed by a query engine to combine both performance data derived from trace data and a component graph.


Other embodiments may use different sequencing, additional or fewer steps, and different nomenclature or terminology to accomplish similar functions. In some embodiments, various operations or sets of operations may be performed in parallel with other operations, either in a synchronous or asynchronous manner. The steps selected here were chosen to illustrate some principles of operation in a simplified form.


A usage description may be received for an application in block 1102. The usage description may include any parameters that may describe how an application may be used. Such a description may include items like the anticipated workload, desired reliability, or other performance metrics.


An architecture description may be received in block 1104. The architecture description may include hardware and software components on which the application may be executed.


In some cases, the usage description and architecture description may be derived from existing trace data of the application. Such a case may be useful when a recommendation may be generated for an application in production. In other cases, the usage description and architecture description may be a description of anticipated conditions under which an application may be executed.


The architecture description may be analyzed in block 1106 to identify reusable software components. The components may be determined by analyzing source code or from a general description of the application. Each component may be analyzed in block 1108.


For each component in block 1108, a set of performance metrics may be determined for the component. The performance metrics may be derived from a tracer database.


In many cases, the performance metrics may be estimated metrics based on the usage and architecture. Such metrics may reflect the anticipated performance given the anticipated usage and architecture.


A search of the component graph may be made in block 1112 to identify related components. For each related component in block 1114, performance metrics for those components may be determined in block 1116.


The group of related components may be sorted by the performance metrics in block 1118. An analysis of the current components versus the related components may be made in block 1120. If there are related components with better performance metrics in block 1122, the other components may be suggested to the user in block 1124. If no better components exist in block 1122, the suggestions may be omitted.


The suggestions may be presented to a user in block 1126.
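

Pulling the pieces together, the ranking and comparison of blocks 1112 through 1124 might look like the sketch below, where estimate_metric is a hypothetical stand-in for the trace-database lookup and lower metric values are taken to be better.

    # Sketch of blocks 1112-1124: estimate metrics for related components
    # and suggest any that beat the application's current components.
    def estimate_metric(component, usage, architecture):
        fake_db = {"parser_a": 40.0, "parser_b": 25.0}  # invented lookup
        return fake_db.get(component, float("inf"))

    def suggest(current, related, usage, architecture):
        baseline = min(estimate_metric(c, usage, architecture) for c in current)
        ranked = sorted(related, key=lambda c: estimate_metric(c, usage, architecture))
        return [c for c in ranked
                if estimate_metric(c, usage, architecture) < baseline]

    print(suggest(current={"parser_a"}, related={"parser_b"},
                  usage={"load": "high"}, architecture={"processors": 8}))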



FIG. 12 is a flowchart illustration of an embodiment 1200 showing a method for constructing a query based on an existing application and trace data. Embodiment 1200 may illustrate an example method performed by an analysis engine that may take existing applications and their trace data to prepare a recommendation query.


Other embodiments may use different sequencing, additional or fewer steps, and different nomenclature or terminology to accomplish similar functions. In some embodiments, various operations or sets of operations may be performed in parallel with other operations, either in a synchronous or asynchronous manner. The steps selected here were chosen to illustrate some principles of operation in a simplified form.


An application may be received in block 1202 along with its trace data.


The application may be analyzed to identify all of its reusable components in block 1204. The analysis may be performed from the trace data, application source code, or other source.


The trace data may be analyzed in block 1206 to identify the usage conditions for the application. The usage conditions may be the actual usage conditions observed during tracing.


The usage and architecture information may be presented to a user in block 1208 and any manually made changes to the observations may be gathered in block 1210. The changes may be saved as a query in block 1212. The query may be transmitted in block 1214. In some embodiments, the query may be processed using a method similar to that of embodiment 1100.
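

A sketch of the query assembled in blocks 1204 through 1212, with hypothetical trace fields: observed conditions become defaults, and any manual changes from the user override them before the query is transmitted.

    # Sketch of blocks 1204-1212: derive a recommendation query from observed
    # trace data, then apply the user's manual overrides.
    trace_data = {
        "components": ["parser_a", "logger_x"],
        "mean_requests_per_sec": 480.0,
        "processors": 4,
    }

    query = {
        "components": trace_data["components"],
        "usage": {"requests_per_sec": trace_data["mean_requests_per_sec"]},
        "architecture": {"processors": trace_data["processors"]},
    }

    user_overrides = {"architecture": {"processors": 8}}  # anticipated production
    for key, value in user_overrides.items():
        query[key] = {**query.get(key, {}), **value} if isinstance(value, dict) else value

    print(query)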


The foregoing description of the subject matter has been presented for purposes of illustration and description. It is not intended to be exhaustive or to limit the subject matter to the precise form disclosed, and other modifications and variations may be possible in light of the above teachings. The embodiment was chosen and described in order to best explain the principles of the invention and its practical application to thereby enable others skilled in the art to best utilize the invention in various embodiments and various modifications as are suited to the particular use contemplated. It is intended that the appended claims be construed to include other alternative embodiments except insofar as limited by the prior art.

Claims
  • 1. A method performed on at least one computer processor, said method comprising:
    receiving a plurality of trace datasets, each of said trace datasets comprising a time series of performance data gathered while monitoring a first software component;
    analyzing said plurality of trace datasets to determine a differentiating factor that causes differences between said trace datasets, wherein determining the differentiating factor also includes identifying a set of one or more complementary components that, when executed, either increased or decreased an effectiveness of the first software component when the first software component was being executed; and
    presenting said differentiating factor or said set of one or more complementary components to a user.
  • 2. The method of claim 1, said differences comprising performance differences between said trace datasets.
  • 3. The method of claim 2, said differentiating factor comprising hardware differences.
  • 4. The method of claim 3, said differentiating factor further comprising software differences.
  • 5. The method of claim 4 further comprising ranking a plurality of differentiating factors.
  • 6. The method of claim 5, said performance data comprising resource consumption data.
  • 7. The method of claim 6, said resource consumption data comprising at least one of a group composed of: processor resource consumption data; memory resource consumption data; and network resource consumption data.
  • 8. The method of claim 6, said performance data comprising usage data.
  • 9. The method of claim 8, said usage data comprising at least one of a group composed of: function call counts; and input parameters received.
  • 10. The method of claim 2, said first software component being an application.
  • 11. The method of claim 10, a first trace dataset being gathered while executing said application on a first hardware configuration and a second trace dataset being gathered while executing said application on a second hardware configuration.
  • 12. The method of claim 2, said first software component being a reusable software component.
  • 13. The method of claim 12, a first trace dataset being gathered while executing said reusable software component as part of a first application, and a second trace dataset being gathered while executing said reusable software component as part of a second application.
  • 14. A system comprising:
    a database comprising a plurality of trace datasets, each of said trace datasets being a time series of performance data gathered while monitoring a first software component;
    at least one processor; and
    an analysis engine operating on said at least one processor, said analysis engine that:
      receives a plurality of trace datasets, each of said trace datasets comprising a time series of performance data gathered while monitoring a first software component; and
      analyzes said plurality of trace datasets to determine a differentiating factor that causes differences between said trace datasets, wherein determining the differentiating factor also includes identifying a set of one or more complementary components that, when executed, either increased or decreased an effectiveness of the first software component when the first software component was being executed.
  • 15. The system of claim 14 further comprising: an interface that receives a first request and returns said differentiating factor as a response to said first request.
  • 16. The system of claim 15, said interface being an application programming interface.
  • 17. The system of claim 14, said first software component being a reusable software component.
  • 18. The system of claim 17, a first trace dataset being collected while executing a first application using said reusable software component and a second trace dataset being collected while executing a second application using said reusable software component.
CROSS REFERENCE TO RELATED APPLICATIONS

This application claims priority to and benefit of U.S. Patent Application Ser. No. 61/903,755 entitled “Software Component Recommendation Based on Multiple Trace Runs” filed 13 Nov. 2013, U.S. Patent Application Ser. No. 61/903,762 entitled “Relationship Graph for Software Component Recommendations” filed 13 Nov. 2013, and U.S. Patent Application Ser. No. 61/903,768 entitled “Component Usage Recommendation System with Relationship and Performance Matching” filed 13 Nov. 2013, all of which are hereby expressly incorporated by reference for all they disclose and teach.

PCT Information
  Filing Document: PCT/IB2014/060239   Filing Date: 3/27/2014   Country: WO   Kind: 00
  Publishing Document: WO2015/071777   Publishing Date: 5/21/2015   Country: WO   Kind: A
Related Publications (1)
  Number: 20160283362 A1   Date: Sep 2016   Country: US
Provisional Applications (3)
  Number: 61903755   Date: Nov 2013   Country: US
  Number: 61903762   Date: Nov 2013   Country: US
  Number: 61903768   Date: Nov 2013   Country: US