Performance testing of web application components using image differentiation

Information

  • Patent Grant
  • Patent Number
    9,600,400
  • Date Filed
    Thursday, October 29, 2015
  • Date Issued
    Tuesday, March 21, 2017
Abstract
Systems and methods for measuring the rendering time for individual components of a software application, such as a single page application (SPA). Baseline screenshots are captured for specific screen regions associated with selected components. After invalidating associated application caches, one or more selected components are reloaded. When each respective server call completes, the completion time is recorded as that component's rendering start time; test screenshots of the specified regions are then captured during re-rendering and compared to their corresponding baseline screenshots. When a match is found for a component under test, the time the match is found is marked as a rendering completion time. The rendering completion time for each component may be compared to the corresponding rendering start time to determine the amount of time taken to render each of the components of the application.
Description
BACKGROUND

Technical Field


The present disclosure generally relates to performance testing of software applications.


Description of the Related Art


Web browsers function as user-friendly portals to the Internet's resources, facilitating navigation and information search. Recently, Web browsers have evolved into more sophisticated and complex software applications. Current Web browsers incorporate sophisticated multimedia rendering functions (e.g., video, audio, images) and enable a user to navigate and interact dynamically with connected Websites and other internet users.


Uniform resource locators (URLs) are the addresses of internet resources on the World Wide Web. A common URL contains a character string “http” which identifies the URL as a resource to be retrieved over the Hypertext Transfer Protocol (HTTP). Other common prefixes are “https” (for Hypertext Transfer Protocol Secure), “ftp” (for File Transfer Protocol), and “file” (for files that are stored locally on a user's computer). If a URL points to a Website, which is often the case, the Web browser uses the URL to access the Website and retrieve the Webpage from the Website.


Once the Webpage has been retrieved by a processor-based system, the Web browser displays the Webpage on a display screen of the processor-based system. Hypertext Markup Language (HTML) documents and any associated content (e.g., text, images, video, audio) are passed to the Web browser's rendering or layout engine to be transformed from markup to an interactive document, a process known as “rendering.” Webpages frequently incorporate other content, such as formatting information (e.g., Cascading Style Sheets (CSS)) and scripts (e.g., JavaScript), into their final presentation.


A Web application is a computer software application that is coded in a Web browser-supported programming language (e.g., HTML, JavaScript) and is reliant on a Web browser to render the application executable. Available Web browsers include, for example, Internet Explorer®, Google Chrome®, Mozilla Firefox®, Opera® and Safari®.


Often, software applications are implemented as a single page Web application (SPA). An SPA is a Web application or Website that fits on a single Webpage with the goal of providing a more fluid user experience similar to a desktop or “native” application. In an SPA, either all of the necessary code (e.g., HTML, JavaScript, CSS) is retrieved with a single page load, or the appropriate resources are dynamically loaded and added to the page as necessary, usually in response to user actions. The page does not reload at any point in the session, nor does control transfer to another page, although modern Web technologies (e.g., HTML5 pushState() API) may provide the perception and navigability of separate logical pages in the application. Interaction with the SPA often involves dynamic communication with one or more Web servers behind the scenes.


Normally, an SPA is fully loaded in the initial page load and then page regions or components are replaced or updated with new page components loaded from one or more of the Web servers on demand. To avoid excessive downloading of unused features, an SPA may progressively download more components as they become required.


When an SPA loads, each of the components loads at different speeds, depending on the type, quantity and source of data being loaded into each of the components. For performance testing, once data has been returned from a corresponding Web service for each component, it would be advantageous for developers to be able to benchmark and track load times for each component to identify potential performance issues. Currently, there is no easy way to find out how long a single component in a single-page application takes to load without making changes to the application source code.


BRIEF SUMMARY

A processor-based system to evaluate performance of an application may be summarized as including: at least one nontransitory processor-readable medium that stores at least one of processor-executable instructions or data; and at least one processor communicably coupled to the at least one nontransitory processor-readable medium, the at least one processor: receives a selection of a plurality of user interface components of the application rendered on a display screen of the processor-based system; for each of the plurality of selected user interface components, causes a corresponding plurality of baseline screenshot images to be captured, each of the baseline screenshot images depicting a region of the display screen which renders a corresponding selected user interface component; logically associates each of the captured baseline screenshot images with a corresponding one of the selected user interface components in a nontransitory processor-readable storage medium; for each of the selected user interface components, causes a data call request for the user interface component to be sent to a server processor-based system; detects a completion of a response from the server processor-based system received by the processor-based system responsive to the sent data call request; detects a response completion time of the completion of the response from the server processor-based system, the detected response completion time indicative of a starting time at which the user interface is re-rendered responsive to the sent data call request; logically associates the response completion time as a rendering starting time for the user interface component in the nontransitory processor-readable storage medium; causes a test screenshot image of the region which renders the user interface component to be captured; determines whether the captured test screenshot image matches the baseline screenshot image for the user interface component; while the captured test screenshot image 
does not match the baseline screenshot image, iteratively: causes a test screenshot image of the region which renders the user interface component to be captured; and determines whether the captured test screenshot image matches the baseline screenshot image for the user interface component; responsive to a determination that the captured test screenshot image matches the baseline screenshot image, detects a rendering completed time which represents the time at which the captured test screenshot image was found to match the baseline screenshot image; and determines a rendering execution time for the user interface component based at least in part on the rendering completed time for the user interface component and the rendering starting time for the user interface component.


The at least one processor: may determine a difference between the rendering completed time for the user interface component and the rendering starting time to obtain the rendering execution time for the user interface component. The at least one processor: prior to causing a data call request for the user interface component to be sent to a server processor-based system, may invalidate at least one cache storage which stores data for at least one of the plurality of selected user interface components of the application. The at least one processor: may cause a request for the application to be reloaded to be sent to the server processor-based system. The at least one processor: may receive a selection of a plurality of user interface components from a user via a graphical user interface of the processor-based system. The at least one processor: for each of the user interface components, may receive a selection of a region of the display screen of the processor-based system via a graphical user interface of the processor-based system. The at least one processor: for each of the user interface components, may receive a selection of an alphanumeric label for the user interface component. The at least one processor: responsive to receipt of the selection of a plurality of user interface components, may determine a region for each of the plurality of corresponding baseline screenshot images. At least one region may overlap with at least one other region. Each of the determined regions may be a rectangular-shaped region. The application may include a single page web application rendered on the display screen of the processor-based system.


A method of operating a processor-based system to evaluate performance of an application may be summarized as including: receiving, by at least one processor, a selection of a plurality of user interface components of the application rendered on a display screen of the processor-based system; for each of the plurality of selected user interface components, causing, by the at least one processor, a corresponding plurality of baseline screenshot images to be captured, each of the baseline screenshot images depicting a region of the display screen which renders a corresponding selected user interface component; logically associating, by the at least one processor, each of the captured baseline screenshot images with a corresponding one of the selected user interface components in a nontransitory processor-readable storage medium; for each of the selected user interface components, causing, by the at least one processor, a data call request for the user interface component to be sent to a server processor-based system; detecting, by the at least one processor, a completion of a response from the server processor-based system received by the processor-based system responsive to the sent data call request; detecting, by the at least one processor, a response completion time of the completion of the response from the server processor-based system, the detected response completion time indicative of a starting time at which the user interface is re-rendered responsive to the sent data call request; logically associating, by the at least one processor, the response completion time as a rendering starting time for the user interface component in the nontransitory processor-readable storage medium; causing, by the at least one processor, a test screenshot image of the region which renders the user interface component to be captured; determining, by the at least one processor, whether the captured test screenshot image matches the baseline screenshot image for the user 
interface component; while the captured test screenshot image does not match the baseline screenshot image, iteratively: causing, by the at least one processor, a test screenshot image of the region which renders the user interface component to be captured; and determining, by the at least one processor, whether the captured test screenshot image matches the baseline screenshot image for the user interface component; responsive to determining the captured test screenshot image matches the baseline screenshot image, detecting, by the at least one processor, a rendering completed time which represents the time at which the captured test screenshot image was found to match the baseline screenshot image; and determining, by the at least one processor, a rendering execution time for the user interface component based at least in part on the rendering completed time for the user interface component and the rendering starting time for the user interface component.


Determining a rendering execution time may include determining, by the at least one processor, a difference between the rendering completed time for the user interface component and the rendering starting time to obtain the rendering execution time for the user interface component. The method may further include: prior to causing a data call request for the user interface component to be sent to a server processor-based system, invalidating at least one cache storage which stores data for at least one of the plurality of selected user interface components of the application. Causing a data call request for the user interface component to be sent to a server processor-based system may include causing a request for the application to be reloaded to be sent to the server processor-based system. Receiving a selection of a plurality of user interface components may include receiving a selection of a plurality of user interface components from a user via a graphical user interface of the processor-based system. Receiving a selection of a plurality of user interface components may include, for each of the user interface components, receiving a selection of a region of the display screen of the processor-based system via a graphical user interface of the processor-based system. Receiving a selection of a plurality of user interface components may include, for each of the user interface components, receiving a selection of an alphanumeric label for the user interface component. The method may further include: responsive to receiving the selection of a plurality of user interface components, determining, by the at least one processor, a region for each of the plurality of corresponding baseline screenshot images. Determining a region for each of the plurality of corresponding baseline screenshot images may include determining a region for each of the plurality of corresponding baseline screenshot images, wherein at least one region overlaps with at least one other region.
Determining a region for each of the plurality of corresponding baseline screenshot images may include determining a rectangular-shaped region for each of the plurality of corresponding baseline screenshot images. Receiving, by at least one processor, a selection of a plurality of user interface components of the application rendered on a display screen of the processor-based system may include receiving a selection of a plurality of user interface components of a single page web application rendered on the display screen of the processor-based system.
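The regions described above are rectangular and may overlap one another. As an illustration only (the patent does not prescribe an implementation), a minimal overlap test for two (x, y, width, height) rectangles might look like:

```python
def regions_overlap(a, b):
    """Return True if two (x, y, width, height) rectangles overlap."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    # Rectangles overlap unless one lies entirely to the left of,
    # right of, above, or below the other.
    return ax < bx + bw and bx < ax + aw and ay < by + bh and by < ay + ah
```

For example, a header region (0, 0, 100, 50) overlaps a banner at (80, 40, 60, 60) but not a footer at (0, 500, 100, 50).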


A method of operating a processor-based system to evaluate performance of an application may be summarized as including: for each of a plurality of user interface components, causing, by the at least one processor, a corresponding plurality of baseline screenshot images to be captured, each of the baseline screenshot images depicting a region which renders a corresponding user interface component; causing, by the at least one processor, a reload request for the application to be sent to a server processor-based system; for each of the user interface components, detecting, by the at least one processor, a completion of a response from the server processor-based system received by the processor-based system; detecting, by the at least one processor, a response completion time of the completion of the response from the server processor-based system; logically associating, by the at least one processor, the response completion time as a rendering starting time for the user interface component in a nontransitory processor-readable storage medium; causing, by the at least one processor, a test screenshot image of the region which renders the user interface component to be captured; determining, by the at least one processor, whether the captured test screenshot image matches the baseline screenshot image for the user interface component; while the captured test screenshot image does not match the baseline screenshot image, iteratively: causing, by the at least one processor, a test screenshot image of the region which renders the user interface component to be captured; and determining, by the at least one processor, whether the captured test screenshot image matches the baseline screenshot image for the user interface component; responsive to determining the captured test screenshot image matches the baseline screenshot image, detecting, by the at least one processor, a rendering completed time which represents the time at which the captured test screenshot image was 
found to match the baseline screenshot image; and determining, by the at least one processor, a rendering execution time for the user interface component based at least in part on the rendering completed time for the user interface component and the rendering starting time for the user interface component.





BRIEF DESCRIPTION OF THE SEVERAL VIEWS OF THE DRAWINGS

In the drawings, identical reference numbers identify similar elements or acts. The sizes and relative positions of elements in the drawings are not necessarily drawn to scale. For example, the shapes of various elements and angles are not necessarily drawn to scale, and some of these elements may be arbitrarily enlarged and positioned to improve drawing legibility. Further, the particular shapes of the elements as drawn are not necessarily intended to convey any information regarding the actual shape of the particular elements, and may have been solely selected for ease of recognition in the drawings.



FIG. 1 is a functional block diagram of a server processor-based system and a client processor-based system implementing a single page application, according to one illustrated implementation.



FIG. 2 is a functional block diagram of a Web browser executable on a client processor-based system, according to one illustrated implementation.



FIG. 3 is a functional block diagram of a server processor-based system and a client processor-based system, according to one illustrated implementation.



FIG. 4 is a flow diagram for a method of performance testing individual components of a software application, according to one illustrated implementation.



FIG. 5 is a screenshot of a single page application which includes a plurality of individual components which are rendered separately, according to one illustrated implementation.



FIG. 6 is a screenshot of the single page application which shows selection of a region corresponding to a component of the application which is to be tested, according to one illustrated implementation.



FIG. 7 is a screenshot of the single page application when the single page application is reloading, according to one illustrated implementation.



FIG. 8 is a screenshot of the single page application when the component selected for performance testing is reloading, according to one illustrated implementation.



FIG. 9 is a screenshot of the single page application when the component selected for performance testing has been reloaded, according to one illustrated implementation.





DETAILED DESCRIPTION

In the following description, certain specific details are set forth in order to provide a thorough understanding of various disclosed implementations. However, one skilled in the relevant art will recognize that implementations may be practiced without one or more of these specific details, or with other methods, components, materials, etc. In other instances, well-known structures associated with computer systems, server computers, and/or communications networks have not been shown or described in detail to avoid unnecessarily obscuring descriptions of the implementations.


Unless the context requires otherwise, throughout the specification and claims that follow, the word “comprising” is synonymous with “including,” and is inclusive or open-ended (i.e., does not exclude additional, unrecited elements or method acts).


Reference throughout this specification to “one implementation” or “an implementation” means that a particular feature, structure or characteristic described in connection with the implementation is included in at least one implementation. Thus, the appearances of the phrases “in one implementation” or “in an implementation” in various places throughout this specification are not necessarily all referring to the same implementation. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more implementations.


As used in this specification and the appended claims, the singular forms “a,” “an,” and “the” include plural referents unless the context clearly dictates otherwise. It should also be noted that the term “or” is generally employed in its sense including “and/or” unless the context clearly dictates otherwise.


The headings and Abstract of the Disclosure provided herein are for convenience only and do not interpret the scope or meaning of the implementations.


Implementations of the present disclosure are directed to systems and methods for improving the performance of processor-based systems by benchmarking and/or optimizing software applications executing on the processor-based systems. In one or more implementations, such functionality is achieved by measuring the rendering time for individual components of a software application, such as a single page application (SPA).


Generally, with a known dataset loaded in an application executing on a processor-based system, a user may identify a plurality of components of the application which are to be benchmarked, as well as corresponding calls to one or more servers for data. Baseline screenshots may be captured for those specific screen regions associated with the selected components. The baseline screenshots and component information may be stored as a configuration file in a nontransitory processor-readable storage medium of the processor-based system, for example.
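The baseline-capture step above can be sketched as follows. This is a hypothetical illustration, not the patented implementation: the `capture_region` callback stands in for a real screen-capture API, and the configuration layout (labels, regions, baseline digests) is an assumption.

```python
import hashlib
import json

def make_config(components, capture_region):
    """Build a configuration for the selected components.

    components: list of (label, (x, y, width, height)) tuples.
    capture_region: callback returning the raw pixel bytes of a region.
    """
    entries = []
    for label, region in components:
        pixels = capture_region(region)
        entries.append({
            "label": label,
            "region": region,
            # Store a digest rather than raw pixels; comparing digests
            # is equivalent to an exact pixel-for-pixel match.
            "baseline": hashlib.sha256(pixels).hexdigest(),
        })
    return json.dumps({"components": entries}, indent=2)

# Example with a fake capture function that returns fixed bytes:
fake_screen = {
    (0, 0, 100, 50): b"header-pixels",
    (0, 50, 100, 200): b"grid-pixels",
}
config = make_config(
    [("header", (0, 0, 100, 50)), ("grid", (0, 50, 100, 200))],
    lambda region: fake_screen[region],
)
```

In a real harness the resulting string would be written to the configuration file mentioned above; here it is simply kept in memory.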


After invalidating associated application caches, the processor-based system may reload one or more components of the application based on the stored configuration file. The processor-based system may mark when an identified server data call response completes as a rendering starting time for a component, and may begin iteratively taking test screenshots of the specified regions and comparing each test screenshot to its corresponding baseline screenshot. When a match is found, or a timeout is reached, the processor-based system may stop processing for the particular screen region. If a match is found, the processor-based system marks the time the match is found as a rendering completion time for the component. Once all regions have completed rendering, the processor-based system may subtract the rendering start time from the rendering completion time to determine the amount of time it took for the client application to render that component. The implementations discussed herein are browser-, platform- and application-agnostic, which allows such implementations to benchmark rendering performance in a variety of scenarios.
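The poll-and-compare loop described above can be sketched as follows, using an exact byte-for-byte region match and a timeout. The `capture` callback and the simulated frame sequence are hypothetical stand-ins for real screenshot capture, about which the implementations are deliberately agnostic.

```python
import time

def measure_render_time(baseline, capture, start_time, timeout=10.0, poll=0.01):
    """Return seconds from start_time until capture() matches baseline.

    start_time marks the completion of the server data call response
    (the rendering starting time). Returns None if the timeout is
    reached before a match is found.
    """
    deadline = start_time + timeout
    while time.monotonic() < deadline:
        if capture() == baseline:                 # exact region match
            return time.monotonic() - start_time  # rendering execution time
        time.sleep(poll)                          # wait before next screenshot
    return None

# Simulated component that finishes rendering after three polls:
frames = iter([b"blank", b"partial", b"done-pixels", b"done-pixels"])
start = time.monotonic()  # server response completed here
elapsed = measure_render_time(b"done-pixels", lambda: next(frames), start)
```

A real harness would run one such loop per selected region, stopping each loop independently on match or timeout.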



FIG. 1 illustrates a schematic functional diagram for a single page application (SPA) provided by one or more server processor-based systems 100 to a client or user processor-based system 102 over a network, such as the Internet. Example functional components of the server processor-based systems 100 and the client processor-based system 102 are shown in FIG. 3 and discussed below. The client processor-based system 102 may execute a Web browser 104, such as the Web browser 200 shown in FIG. 2. The client processor-based system 102 may also execute a performance testing application 106 which implements the testing/benchmarking functionality discussed below.


In a traditional Web application executing in a browser, each time the application calls the server 100, the server renders a new page (e.g., HTML page). This action triggers a page refresh in the browser. In a single page application, the application may send an initial request to the server 100, which may respond by providing a new page which is rendered by the browser 104 of the client processor-based system 102. Unlike traditional Web applications, in a single page application, after the first page loads, all interaction with the server 100 happens through data requests (e.g., Asynchronous JavaScript and XML (AJAX) calls) for particular components. These data requests cause the server(s) 100 to return data (e.g., JavaScript Object Notation (JSON) formatted data) for the particular component. As shown in FIG. 1, for each data request for components 1-N, the server(s) provide corresponding data responses 1-N. The single page application uses the data received from the server(s) 100 to update the page dynamically, without reloading the page.



FIG. 2 shows example functional components of a Web browser 200 which may be used to present a web application, such as a single page application, on a user processor-based system. The Web browser 200 may be executed by any suitable processor-based system, such as a desktop computer, laptop computer, tablet computer, smartphone, wearable computer, etc. As discussed above, the main functionality of the browser 200 is to present the application by requesting the application from one or more servers and displaying the application in a browser window. The requested resource is usually an HTML document but may also be a PDF, image, video, etc.


The browser 200 includes several components including a user interface 202, a browser engine 204, a rendering or layout engine 206, a networking module 208, a user interface backend 210, a JavaScript interpreter 212, and data storage 214. The user interface 202 may include an address bar, back/forward button, bookmarking menu, etc. Generally, the user interface 202 may include each part of the browser display except the main window where content is displayed. The browser engine 204 may be the interface for querying and manipulating the rendering engine 206. The networking module 208 may be used for network calls, such as HTTP requests or component data requests. The networking module 208 may have a platform independent interface. The user interface backend 210 may be used for drawing basic widgets, such as combo boxes or windows. The user interface backend 210 may expose a generic interface which is not platform specific. The JavaScript interpreter 212 may be used to parse and execute JavaScript code. The data storage 214 may be a persistence layer which is used by the browser 200 to save various data (e.g., cookies).


The rendering engine 206 is generally responsible for displaying the requested content. For example, if the requested content is HTML, the rendering engine 206 is responsible for parsing the HTML and CSS and displaying the parsed content on the screen of the client processor-based system.



FIG. 3 shows a networked environment 300 comprising one or more server computer systems 302 (only one illustrated) and one or more associated nontransitory computer- or processor-readable storage medium 304 (only one illustrated). The associated nontransitory computer- or processor-readable storage medium 304 is communicatively coupled to the server computer system(s) 302 via one or more communications channels, for example, one or more parallel cables, serial cables, or wireless channels capable of high speed communications, for instance, via FireWire®, Universal Serial Bus® (USB) 2 or 3, Thunderbolt®, and/or Gigabit Ethernet.


The networked environment 300 also includes one or more client or user processor-based systems 306 (only one illustrated). For example, the user processor-based systems 306 may be representative of the client processor-based system 102 of FIG. 1. The user processor-based systems 306 are communicatively coupled to the server computer system(s) 302 by one or more communications channels, for example, one or more wide area networks (WANs) 310, for instance the Internet or Worldwide Web portion thereof.


In operation, the user processor-based systems 306 typically function as clients to the server computer system 302. The server computer systems 302, in turn, typically function as servers to receive requests or information from the user processor-based systems 306.


The networked environment 300 may employ other computer systems and network equipment, for example, additional servers, proxy servers, firewalls, routers and/or bridges. The server computer systems 302 will at times be referred to in the singular herein, but this is not intended to limit the implementations to a single device since in typical implementations there may be more than one server computer system 302 involved. Unless described otherwise, the construction and operation of the various blocks shown in FIG. 3 are of conventional design. As a result, such blocks need not be described in further detail herein, as they will be understood by those skilled in the relevant art.


The server computer systems 302 may include one or more processing units 312a, 312b (collectively 312), a system memory 314 and a system bus 316 that couples various system components, including the system memory 314 to the processing units 312. The processing units 312 may be any logic processing unit, such as one or more central processing units (CPUs) 312a, digital signal processors (DSPs) 312b, application-specific integrated circuits (ASICs), field programmable gate arrays (FPGAs), etc. The system bus 316 can employ any known bus structures or architectures, including a memory bus with memory controller, a peripheral bus, and/or a local bus. The system memory 314 includes read-only memory (“ROM”) 318 and random access memory (“RAM”) 320. A basic input/output system (“BIOS”) 322, which can form part of the ROM 318, contains basic routines that help transfer information between elements within the server computer system(s) 302, such as during start-up.


The server computer systems 302 may include a hard disk drive 324 for reading from and writing to a hard disk 326, an optical disk drive 328 for reading from and writing to removable optical disks 332, and/or a magnetic disk drive 330 for reading from and writing to magnetic disks 334. The optical disk 332 can be a CD-ROM, while the magnetic disk 334 can be a magnetic floppy disk or diskette. The hard disk drive 324, optical disk drive 328 and magnetic disk drive 330 may communicate with the processing unit 312 via the system bus 316. The hard disk drive 324, optical disk drive 328 and magnetic disk drive 330 may include interfaces or controllers (not shown) coupled between such drives and the system bus 316, as is known by those skilled in the relevant art. The drives 324, 328 and 330, and their associated computer-readable media 326, 332, 334, provide nonvolatile storage of computer-readable instructions, data structures, program modules and other data for the server computer system 302. Although the depicted server computer system 302 is illustrated employing a hard disk drive 324, optical disk drive 328 and magnetic disk drive 330, those skilled in the relevant art will appreciate that other types of computer-readable media that can store data accessible by a computer may be employed, such as WORM drives, RAID drives, magnetic cassettes, flash memory cards, digital video disks (“DVD”), Bernoulli cartridges, RAMs, ROMs, smart cards, etc.


Program modules can be stored in the system memory 314, such as an operating system 336, one or more application programs 338, other programs or modules 340 and program data 342. The system memory 314 may also include communications programs, for example, a server 344 that causes the server computer system 302 to serve electronic information or files via the Internet, intranets, extranets, telecommunications networks, or other networks as described below. The server 344 in the depicted implementation is markup language based, such as Hypertext Markup Language (HTML), Extensible Markup Language (XML) or Wireless Markup Language (WML), and operates with markup languages that use syntactically delimited characters added to the data of a document to represent the structure of the document. A number of suitable servers may be commercially available such as those from Mozilla, Google, Microsoft and Apple Computer.


While shown in FIG. 3 as being stored in the system memory 314, the operating system 336, application programs 338, other programs/modules 340, program data 342 and server 344 can be stored on the hard disk 326 of the hard disk drive 324, the optical disk 332 of the optical disk drive 328 and/or the magnetic disk 334 of the magnetic disk drive 330.


An operator can enter commands and information into the server computer system(s) 302 through input devices such as a touch screen or keyboard 346 and/or a pointing device such as a mouse 348, and/or via a graphical user interface. Other input devices can include a microphone, joystick, game pad, tablet, scanner, etc. These and other input devices are connected to one or more of the processing units 312 through an interface 350 such as a serial port interface that couples to the system bus 316, although other interfaces such as a parallel port, a game port or a wireless interface or a universal serial bus (“USB”) can be used. A monitor 352 or other display device is coupled to the system bus 316 via a video interface 354, such as a video adapter. The server computer system(s) 302 can include other output devices, such as speakers, printers, etc.


The server computer systems 302 can operate in a networked environment 300 using logical connections to one or more remote computers and/or devices. For example, the server computer systems 302 can operate in a networked environment 300 using logical connections to one or more user processor-based systems 306. Communications may be via a wired and/or wireless network architecture, for instance, wired and wireless enterprise-wide computer networks, intranets, extranets, and/or the Internet. Other implementations may include other types of communications networks including telecommunications networks, cellular networks, paging networks, and other mobile networks. There may be any variety of computers, switching devices, routers, bridges, firewalls and other devices in the communications paths between the server computer systems 302 and the user processor-based systems 306.


The user processor-based systems 306 will typically take the form of end user processor-based devices, for instance, personal computers (e.g., desktop or laptop computers), netbook computers, tablet computers, smartphones, personal digital assistants, vehicle head units, wearable computers, workstation computers and/or mainframe computers, and the like, executing appropriate instructions. These user processor-based systems 306 may be communicatively coupled to one or more server computers. For instance, user processor-based systems 306 may be communicatively coupled externally via one or more end user client entity server computers (not shown), which may implement a firewall. These end user client entity server computers may execute a set of server instructions to function as a server for a number of user processor-based systems 306 (e.g., clients) communicatively coupled via a LAN at a facility or site, and thus act as intermediaries between the user processor-based systems 306 and the server computer system(s) 302. The user processor-based systems 306 may execute a set of client instructions to function as a client of the server computer(s), which are communicatively coupled via a WAN.


The user processor-based systems 306 may include one or more processing units 368, system memories 369 and a system bus (not shown) that couples various system components including the system memory 369 to the processing unit 368. The user processor-based systems 306 will, at times, each be referred to in the singular herein, but this is not intended to limit the implementations to a single user processor-based system 306. In typical implementations, there may be more than one user processor-based system 306 and there will likely be a large number of user processor-based systems 306.


The processing unit 368 may be any logic processing unit, such as one or more central processing units (CPUs), digital signal processors (DSPs), application-specific integrated circuits (ASICs), field programmable gate arrays (FPGAs), graphical processing units (GPUs), etc. Non-limiting examples of commercially available processors include an 80×86 or Pentium series microprocessor from Intel Corporation, U.S.A., a PowerPC microprocessor from IBM, a Sparc microprocessor from Sun Microsystems, Inc., a PA-RISC series microprocessor from Hewlett-Packard Company, a 68xxx series microprocessor from Motorola Corporation, an ATOM processor, a Snapdragon processor from Qualcomm, an Exynos processor from Samsung, or an Ax processor from Apple. Unless described otherwise, the construction and operation of the various blocks of the user processor-based systems 306 shown in FIG. 3 are of conventional design. As a result, such blocks need not be described in further detail herein, as they will be understood by those skilled in the relevant art.


The system bus can employ any known bus structures or architectures, including a memory bus with memory controller, a peripheral bus, and a local bus. The system memory 369 includes read-only memory (“ROM”) 370 and random access memory (“RAM”) 372. A basic input/output system (“BIOS”) 371, which can form part of the ROM 370, contains basic routines that help transfer information between elements within the end user client computer systems 306, such as during start-up.


The user processor-based systems 306 may also include one or more media drives 373, e.g., a hard disk drive, magnetic disk drive, WORM drive, and/or optical disk drive, for reading from and writing to computer-readable storage media 374, e.g., hard disk, optical disks, and/or magnetic disks. The nontransitory computer-readable storage media 374 may, for example, take the form of removable media. For example, hard disks may take the form of Winchester drives, optical disks can take the form of CD-ROMs, and magnetic disks can take the form of magnetic floppy disks or diskettes. The media drive(s) 373 communicate with the processing unit 368 via one or more system buses. The media drives 373 may include interfaces or controllers (not shown) coupled between such drives and the system bus, as is known by those skilled in the relevant art. The media drives 373, and their associated nontransitory computer-readable storage media 374, provide nonvolatile storage of computer-readable instructions, data structures, program modules and other data for the user processor-based systems 306. Although described as employing computer-readable storage media 374 such as hard disks, optical disks and magnetic disks, those skilled in the relevant art will appreciate that user processor-based systems 306 may employ other types of nontransitory computer-readable storage media that can store data accessible by a computer, such as magnetic cassettes, flash memory cards, digital video disks (“DVD”), Bernoulli cartridges, RAMs, ROMs, smart cards, etc. Data or information, for example, electronic or digital files or data or metadata related to such can be stored in the nontransitory computer-readable storage media 374.


Program modules, such as an operating system, one or more Web browsers, one or more application programs, other programs or modules and program data, can be stored in the system memory 369. Program modules may include instructions for accessing a Website, extranet site or other site or services (e.g., Web services) and associated Webpages, other pages, screens or services hosted by the server computer system 114.


In particular, the system memory 369 may include communications programs that permit the user processor-based systems 306 to exchange electronic or digital information or files or data or metadata with the server computer system 302. The communications programs may, for example, be a Web client or browser that permits the user processor-based systems 306 to access and exchange information, files, data and/or metadata with sources such as Websites of the Internet, corporate intranets, extranets, or other networks. Such may require that the user processor-based systems 306 have sufficient right, permission, privilege or authority for accessing a given Website, for example, one hosted by the server computer system(s) 302. The browser may, for example, be markup language based, such as Hypertext Markup Language (HTML), Extensible Markup Language (XML) or Wireless Markup Language (WML), and may operate with markup languages that use syntactically delimited characters added to the data of a document to represent the structure of the document.


While described as being stored in the system memory 369, the operating system, application programs, other programs/modules, program data and/or browser can be stored on the computer-readable storage media 374 of the media drive(s) 373. An operator can enter commands and information into the user processor-based systems 306 via a user interface 375 through input devices such as a touch screen or keyboard 376 and/or a pointing device 377 such as a mouse. Other input devices can include a microphone, joystick, game pad, tablet, imager, scanner, etc. These and other input devices are connected to the processing unit 368 through an interface such as a serial port interface that couples to the system bus, although other interfaces such as a parallel port, a game port or a wireless interface or a universal serial bus (“USB”) can be used. A display or monitor 378 may be coupled to the system bus via a video interface, such as a video adapter. The user processor-based systems 306 can include other output devices, such as speakers, printers, etc.



FIG. 4 shows a flow diagram for a method 400 of operation of a processor-based system to test performance of a single page application. The method 400 may be implemented by a performance testing application (e.g., the performance testing application 106 of FIG. 1) executing on a user processor-based system, for example. FIGS. 5, 6, 7, 8 and 9 illustrate screenshots 500, 600, 700, 800 and 900, respectively, of the single page application at times which correspond to acts of the method 400.


The method 400 starts at 402. For example, the method 400 may start with a known dataset loaded in an SPA executing in a browser of a processor-based system.



FIG. 5 shows the screenshot 500 of the single page application. The application includes a plurality of individual components 502A-502K (collectively, “components 502”) positioned at various regions 504A-504K (collectively, “regions 504”), respectively, within the display screen of the application. In the example of FIG. 5, each of the regions 504A-504K defined by a rectangular border is populated with data from a separate server data call and is rendered individually.


At 404, at least one processor of the processor-based system may identify one or more of the components to benchmark. For example, as shown in the screenshot 600 of FIG. 6, at least one processor may identify the component 502K (and the corresponding region 504K) as a component for which the rendering time is to be measured.


At least one processor may receive the selection/identification of the components/regions in many different ways. For example, the user may select a region on the display screen of the processor-based system by “drawing” a rectangle around the region using a suitable input device (e.g., mouse, touchscreen, keyboard, stylus). As another example, the user may select one or more components from a list of components presented via a suitable graphical user interface, such as a drop down list, selectable icons, selectable image(s), etc. In some implementations, the user may select a particular region of the display screen which causes a prompt to be presented to the user to input a label for the component contained in the region. In some implementations, the processor-based system may automatically determine a label or name for a component based on the user's selection of a region of the display screen which includes the component.


At 406, at least one processor of the processor-based system may capture a baseline screenshot for each selected component. In the example of FIG. 6, at least one processor may capture a baseline screenshot for the region 504K which corresponds to the selected component 502K. It is noted that although only one component is shown as being selected in the illustrated example for explanatory purposes, in practice a user may select numerous components of the application. Each of the captured baseline screenshots may be logically associated with a corresponding component in a configuration file stored in a nontransitory processor-readable storage medium of the processor-based system.
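The logical association at 406 between each selected component and its captured baseline screenshot can be kept in a simple stored configuration file. A minimal sketch in Python follows; the entry fields, names and JSON layout here are illustrative assumptions, not part of the disclosed implementation:

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class ComponentEntry:
    """One selected component: its label, screen region, and baseline image."""
    name: str            # user-supplied label, e.g. "502K"
    x: int               # region origin within the display screen (pixels)
    y: int
    width: int           # region extent (pixels)
    height: int
    baseline_path: str   # file holding the captured baseline screenshot

def save_config(entries, path):
    """Persist the component-to-baseline associations to a config file."""
    with open(path, "w") as f:
        json.dump([asdict(e) for e in entries], f, indent=2)

def load_config(path):
    """Reload the associations for a later test run."""
    with open(path) as f:
        return [ComponentEntry(**item) for item in json.load(f)]
```

Keeping the associations in a file rather than in memory allows the same baseline set to be reused across repeated test runs against the same known dataset.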


At 408, at least one processor may clear or invalidate any cached data associated with any of the selected components which are to be tested. Such ensures that each of the selected components is fully re-rendered using data from the respective server call and not rendered using data existing in a cache of the processor-based system.


At 410, at least one processor causes the application to be reloaded. As discussed above, such may be achieved by causing data requests for each of the components to be sent to respective servers for each of the components. In some implementations, at least one processor causes the full application to be reloaded.


At 412, for each screen region, at least one processor marks the time when the associated server response completes. Such time may be referred to as the rendering start time for each component. The screenshot 700 of FIG. 7 shows the application when at least one processor detects the completion of the associated server response for the selected component 502K at a time 11:31:48.270.


At 414, at least one processor of the processor-based system may capture a test screenshot of each screen region associated with a selected component of the application. The screenshot 800 of FIG. 8 shows the application when the region 504K corresponding to the component 502K has not been rendered, so the test screenshot captured at this time does not match the baseline screenshot for the region.


At 416, at least one processor may determine whether the captured test screenshot for each region matches a corresponding baseline screenshot image for the region. For each region, if the test screenshot does not match the baseline screenshot image (i.e., 416=“no”), the at least one processor may iteratively capture test screenshots and compare each to a corresponding baseline screenshot image until a match is found. At 418, at least one processor may check to see if a timeout condition is reached so that the method 400 does not continue indefinitely when no match is found.
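The capture-compare-repeat loop at 414-418 can be sketched as a polling loop with a timeout guard. In this hedged Python sketch, capture() and matches_baseline() stand in for the screenshot and image-matching steps; both names, and the default timing parameters, are assumptions made for illustration:

```python
import time

def wait_for_render(capture, matches_baseline, timeout_s=30.0, poll_s=0.01):
    """Capture test screenshots of a region until one matches the baseline.

    Returns the elapsed seconds when a match is found (the rendering time),
    or None when the timeout condition is reached without a match."""
    start = time.monotonic()
    deadline = start + timeout_s
    while time.monotonic() < deadline:
        if matches_baseline(capture()):
            # Match found: the region has finished re-rendering.
            return time.monotonic() - start
        time.sleep(poll_s)  # brief pause before the next test screenshot
    return None  # no match: log a timeout for this component
```

The monotonic clock is used for the start and deadline so that a wall-clock adjustment during the test cannot distort the measured interval.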


One or more of a number of suitable algorithms may be used to determine whether a test screenshot image matches a baseline screenshot image. Non-limiting examples of matching algorithms include pixel-by-pixel based matching, feature based matching (e.g., scale-invariant feature transform (SIFT), speeded up robust features (SURF), Features from accelerated segment test (FAST)), least squares matching, correlation matching, genetic algorithm matching, and area based matching. The matching algorithm may utilize one or more thresholds which control the degree of similarity required for a test screenshot image to be considered a sufficient match with a corresponding baseline screenshot image.
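As one concrete illustration of threshold-controlled matching, a pixel-by-pixel comparison can count identical pixels and require that their fraction meet a similarity threshold. A minimal Python sketch follows; representing images as flat, equal-length sequences of pixel values is an assumption made to keep the example self-contained:

```python
def images_match(baseline, test, threshold=0.99):
    """Pixel-by-pixel matching with a similarity threshold.

    The images are considered a sufficient match when the fraction of
    identical pixels is at least the threshold."""
    if len(baseline) != len(test) or not baseline:
        return False  # differently sized (or empty) captures cannot match
    identical = sum(1 for a, b in zip(baseline, test) if a == b)
    return identical / len(baseline) >= threshold
```

A threshold slightly below 1.0 tolerates incidental differences such as a blinking cursor or antialiasing noise while still rejecting a partially rendered region.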


At 420, upon detecting that a test screenshot image for a region matches its corresponding baseline screenshot image, at least one processor may mark the time at which the match was found as the rendering completion time for that region/component. For example, the screenshot 900 of FIG. 9 shows that the region 504K corresponding to the component 502K has been fully re-rendered, so the test screenshot captured at this time matches the baseline screenshot for the region. As indicated in FIG. 9, the time at which the match was found in this example is 11:31:48.880.


At 422, at least one processor may subtract the corresponding rendering start time for each region from its rendering completion time (“match found time”) to obtain the rendering execution time or “rendering time” for the component. In the example of FIG. 9, the rendering time for the component 502K is determined to be 11:31:48.880−11:31:48.270=610 milliseconds.
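The subtraction at 422 is plain wall-clock arithmetic. A short Python sketch using the timestamps shown in FIGS. 7 and 9 (the string format is an assumption for illustration):

```python
from datetime import datetime, timedelta

_FMT = "%H:%M:%S.%f"  # wall-clock timestamps such as "11:31:48.270"

def rendering_time_ms(render_start, match_found):
    """Rendering time for a component: match-found time minus rendering
    start time, expressed in milliseconds."""
    start = datetime.strptime(render_start, _FMT)
    end = datetime.strptime(match_found, _FMT)
    # Dividing timedeltas yields an exact float count of milliseconds.
    return (end - start) / timedelta(milliseconds=1)
```

With the FIG. 7 rendering start time of 11:31:48.270 and the FIG. 9 match-found time of 11:31:48.880, the function yields the 610 milliseconds given in the example.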


At 424, at least one processor may log the determined rendering times for each of the selected components of the application in a nontransitory processor-readable storage medium of the processor-based system. For components where no match was found, at least one processor may log that a timeout condition was reached which indicates that the rendering time was not measured for that particular component of the application. Such rendering times may be used by developers to assess and improve the performance of individual components in the application.


At 426, the method 400 ends until started again. For example, method 400 may be started each time rendering times for individual components of an application are to be tested. Further, the method 400 may be implemented autonomously as one test in a test suite including a plurality of tests, for example.


A non-limiting example of pseudocode which may be used to implement the method 400 of FIG. 4 is provided below:


namespace PerfImageDiffPOC
{
  class Program
  {
    static void Main( )
    {
      PerfCore pc = new PerfCore( );
      pc.GetPerformanceNumbers(some parameters defining screen regions to validate);
    }
  }

  public class PerfCore
  {
    //Shared, thread-safe collection of results keyed by component name
    internal static ConcurrentDictionary<string, PerfResult> PerfResults =
      new ConcurrentDictionary<string, PerfResult>( );

    public PerfCore( ) { }

    public Dictionary<string, PerfResult> GetPerformanceNumbers(parameters)
    {
      List<ComponentRegion> crList = new List<ComponentRegion>( );
      //Based on parameter inputs, create a list of screen regions to process
      foreach (screenRegion in parameters.regions)
      {
        crList.Add(new ComponentRegion(screenRegion));
      }
      //Queue up threads to handle the processing
      foreach (ComponentRegion cr in crList)
      {
        ThreadPool.QueueUserWorkItem(cr.GetPerformance);
      }
      //Wait for all the processing to take place
      WaitHandle.WaitAll( );
      Dictionary<string, PerfResult> results = new Dictionary<string, PerfResult>( );
      //Parse the results and add them to the output list
      foreach (string s in PerfResults.Keys)
      {
        results.Add(s, PerfResults[s]);
      }
      return results;
    }
  }

  public class PerfResult
  {
    public TimeSpan elapsed;
    public bool matched;
  }

  internal class ComponentRegion
  {
    internal Bitmap baseImage;
    internal Point origin;
    internal Size size;
    internal DateTime timeOut;
    internal bool matched;
    internal string name;
    private ManualResetEvent _doneEvent;

    public ComponentRegion( )
    {
    }

    //This function handles grabbing the screen regions to be compared
    internal void GetPerformance( )
    {
      DateTime start = DateTime.Now;
      //Define the region to compare
      Rectangle rc = new Rectangle(origin, size);
      bool matchFound = false;
      while (DateTime.Now < timeOut)
      {
        memoryImage = Place an image into memory of the specified screen region;
        //baseImage is the pre-rendered screen region
        //the memoryImage is the screen region as it appears right now
        matchFound = CompareBitmapImages(baseImage, memoryImage);
        if (matchFound)
          break;
      }
      TimeSpan elapsed = DateTime.Now.Subtract(start);
      PerfResult pr = new PerfResult( );
      pr.elapsed = elapsed;
      pr.matched = matchFound;
      PerfCore.PerfResults.TryAdd(name, pr);
    }

    //This function handles comparing images
    private bool CompareBitmapImages(Bitmap sourceImage, Bitmap currentImage)
    {
      bool result = Algorithmically calculate image match between sourceImage and currentImage;
      return result;
    }
  }
}


The implementations discussed herein provide numerous advantages. For example, the implementations of the present disclosure do not require any changes to the source code of the application under test. Such feature makes the implementations discussed herein browser-, platform- and application-agnostic, which allows such implementations to benchmark rendering performance in a variety of scenarios.


Further, the implementations discussed herein measure actual rendering time, not merely completion of code execution. Existing solutions which are used to measure rendering time actually only measure when processor-executable code has completely finished executing. Such existing solutions may not take into account any browser lag in processing and rendering the individual components. Because the implementations discussed herein capture images of the actual display screen, and do not analyze the processor-executable code, such implementations detect when the rendering of individual components has actually completed, thus providing a more precise view of true rendering performance.


Additionally, the implementations discussed herein are very accurate. In some implementations, for example, the process of capturing a test screenshot and analyzing the baseline rendered image against the test screenshot takes less than 100 milliseconds, even for full screen rendering. Smaller screen regions, such as the regions highlighted in FIG. 5, may only take 10-30 milliseconds or less to process, for example. Thus, the measured rendering time for each individual component is usually within 100 milliseconds or less of actual rendering time.


The foregoing detailed description has set forth various implementations of the devices and/or processes via the use of block diagrams, schematics, and examples. Insofar as such block diagrams, schematics, and examples contain one or more functions and/or operations, it will be understood by those skilled in the art that each function and/or operation within such block diagrams, flowcharts, or examples can be implemented, individually and/or collectively, by a wide range of hardware, software, firmware, or virtually any combination thereof. In one implementation, the present subject matter may be implemented via Application Specific Integrated Circuits (ASICs). However, those skilled in the art will recognize that the implementations disclosed herein, in whole or in part, can be equivalently implemented in standard integrated circuits, as one or more computer programs running on one or more computers (e.g., as one or more programs running on one or more computer systems), as one or more programs running on one or more controllers (e.g., microcontrollers), as one or more programs running on one or more processors (e.g., microprocessors), as firmware, or as virtually any combination thereof, and that designing the circuitry and/or writing the code for the software and/or firmware would be well within the skill of one of ordinary skill in the art in light of this disclosure.


Those of skill in the art will recognize that many of the methods or algorithms set out herein may employ additional acts, may omit some acts, and/or may execute acts in a different order than specified.


In addition, those skilled in the art will appreciate that the mechanisms taught herein are capable of being distributed as a program product in a variety of forms, and that an illustrative implementation applies equally regardless of the particular type of signal bearing media used to actually carry out the distribution. Examples of signal bearing media include, but are not limited to, the following: recordable type media such as floppy disks, hard disk drives, CD ROMs, digital tape, and computer memory.


The various implementations described above can be combined to provide further implementations. Aspects of the implementations can be modified, if necessary, to employ systems, circuits and concepts of the various patents, applications and publications to provide yet further implementations.


These and other changes can be made to the implementations in light of the above-detailed description. In general, in the following claims, the terms used should not be construed to limit the claims to the specific implementations disclosed in the specification and the claims, but should be construed to include all possible implementations along with the full scope of equivalents to which such claims are entitled. Accordingly, the claims are not limited by the disclosure.

Claims
  • 1. A processor-based system to evaluate performance of an application, the system comprising: at least one nontransitory processor-readable medium that stores at least one of processor-executable instructions or data; andat least one processor communicably coupled to the at least one nontransitory processor-readable medium, the at least one processor: receives a selection of a plurality of user interface components of the application rendered on a display screen of the processor-based system;for each of the plurality of selected user interface components, causes a corresponding plurality of baseline screenshot images to be captured, each of the baseline screenshot images depicting a region of the display screen which renders a corresponding selected user interface component;logically associates each of the captured baseline screenshot images with a corresponding one of the selected user interface components in a nontransitory processor-readable storage medium;for each of the selected user interface components, causes a data call request for the user interface component to be sent to a server processor-based system;detects a completion of a response from the server processor-based system received by the processor-based system responsive to the sent data call request;detects a response completion time of the completion of the response from the server processor-based system, the detected response completion time indicative of a starting time at which the user interface is re-rendered responsive to the sent data call request;logically associates the response completion time as a rendering starting time for the user interface component in the nontransitory processor-readable storage medium;causes a test screenshot image of the region which renders the user interface component to be captured;determines whether the captured test screenshot image matches the baseline screenshot image for the user interface component;while the captured test screenshot image does not 
match the baseline screenshot image, iteratively: causes a test screenshot image of the region which renders the user interface component to be captured; anddetermines whether the captured test screenshot image matches the baseline screenshot image for the user interface component;responsive to a determination that the captured test screenshot image matches the baseline screenshot image, detects a rendering completed time which represents the time at which the captured test screenshot image was found to match the baseline screenshot image; anddetermines a rendering execution time for the user interface component based at least in part on the rendering completed time for the user interface component and the rendering starting time for the user interface component.
  • 2. The system of claim 1 wherein the at least one processor: determines a difference between the rendering completed time for the user interface component and the rendering starting time to obtain the rendering execution time for the user interface component.
  • 3. The system of claim 1 wherein the at least one processor: prior to causing a data call request for the user interface component to be sent to a server processor-based system, invalidates at least one cache storage which stores data for at least one of the plurality of selected user interface components of the application.
  • 4. The system of claim 1 wherein the at least one processor: causes a request for the application to be reloaded to be sent to the server processor-based system.
  • 5. The system of claim 1 wherein the at least one processor: receives a selection of a plurality of user interface components from a user via a graphical user interface of the processor-based system.
  • 6. The system of claim 1 wherein the at least one processor: for each of the user interface components, receives a selection of a region of the display screen of the processor-based system via a graphical user interface of the processor-based system.
  • 7. The system of claim 1 wherein the at least one processor: for each of the user interface components, receiving a selection of an alphanumeric label for the user interface component.
  • 8. The system of claim 1 wherein the at least one processor: responsive to receipt of the selection of a plurality of user interface components, determines a region for each of the plurality of corresponding baseline screenshot images.
  • 9. The system of claim 8 wherein at least one region overlaps with at least one other region.
  • 10. The system of claim 8 wherein each of the determined regions is a rectangular-shaped region.
  • 11. The system of claim 1 wherein the application comprises a single page web application rendered on the display screen of the processor-based system.
  • 12. A method of operating a processor-based system to evaluate performance of an application, the method comprising: receiving, by at least one processor, a selection of a plurality of user interface components of the application rendered on a display screen of the processor-based system;for each of the plurality of selected user interface components, causing, by the at least one processor, a corresponding plurality of baseline screenshot images to be captured, each of the baseline screenshot images depicting a region of the display screen which renders a corresponding selected user interface component;logically associating, by the at least one processor, each of the captured baseline screenshot images with a corresponding one of the selected user interface components in a nontransitory processor-readable storage medium;for each of the selected user interface components, causing, by the at least one processor, a data call request for the user interface component to be sent to a server processor-based system;detecting, by the at least one processor, a completion of a response from the server processor-based system received by the processor-based system responsive to the sent data call request;detecting, by the at least one processor, a response completion time of the completion of the response from the server processor-based system, the detected response completion time indicative of a starting time at which the user interface is re-rendered responsive to the sent data call request;logically associating, by the at least one processor, the response completion time as a rendering starting time for the user interface component in the nontransitory processor-readable storage medium;causing, by the at least one processor, a test screenshot image of the region which renders the user interface component to be captured;determining, by the at least one processor, whether the captured test screenshot image matches the baseline screenshot image for the user interface 
component;while the captured test screenshot image does not match the baseline screenshot image, iteratively: causing, by the at least one processor, a test screenshot image of the region which renders the user interface component to be captured; anddetermining, by the at least one processor, whether the captured test screenshot image matches the baseline screenshot image for the user interface component;responsive to determining the captured test screenshot image matches the baseline screenshot image, detecting, by the at least one processor, a rendering completed time which represents the time at which the captured test screenshot image was found to match the baseline screenshot image; anddetermining, by the at least one processor, a rendering execution time for the user interface component based at least in part on the rendering completed time for the user interface component and the rendering starting time for the user interface component.
  • 13. The method of claim 12 wherein determining a rendering execution time comprises determining, by the at least one processor, a difference between the rendering completed time for the user interface component and the rendering starting time to obtain the rendering execution time for the user interface component.
  • 14. The method of claim 12, further comprising: prior to causing a data call request for the user interface component to be sent to a server processor-based system, invalidating at least one cache storage which stores data for at least one of the plurality of selected user interface components of the application.
  • 15. The method of claim 12 wherein causing a data call request for the user interface component to be sent to a server processor-based system comprises causing a request for the application to be reloaded to be sent to the server processor-based system.
  • 16. The method of claim 12 wherein receiving a selection of a plurality of user interface components comprises receiving a selection of a plurality of user interface components from a user via a graphical user interface of the processor-based system.
  • 17. The method of claim 12 wherein receiving a selection of a plurality of user interface components comprises, for each of the user interface components, receiving a selection of a region of the display screen of the processor-based system via a graphical user interface of the processor-based system.
  • 18. The method of claim 12 wherein receiving a selection of a plurality of user interface components comprises, for each of the user interface components, receiving a selection of an alphanumeric label for the user interface component.
  • 19. The method of claim 12, further comprising: responsive to receiving the selection of a plurality of user interface components, determining, by the at least one processor, a region for each of the plurality of corresponding baseline screenshot images.
  • 20. The method of claim 19 wherein determining a region for each of the plurality of corresponding baseline screenshot images comprises determining a region for each of the plurality of corresponding baseline screenshot images, wherein at least one region overlaps with at least one other region.
  • 21. The method of claim 19 wherein determining a region for each of the plurality of corresponding baseline screenshot images comprises determining a rectangular-shaped region for each of the plurality of corresponding baseline screenshot images.
  • 22. The method of claim 12 wherein receiving, by at least one processor, a selection of a plurality of user interface components of the application rendered on a display screen of the processor-based system comprises receiving a selection of a plurality of user interface components of a single page web application rendered on the display screen of the processor-based system.
  • 23. A method of operating a processor-based system to evaluate performance of an application, the method comprising: for each of a plurality of user interface components, causing, by the at least one processor, a corresponding plurality of baseline screenshot images to be captured, each of the baseline screenshot images depicting a region which renders a corresponding user interface component; causing, by the at least one processor, a reload request for the application to be sent to a server processor-based system; for each of the user interface components, detecting, by the at least one processor, a completion of a response from the server processor-based system received by the processor-based system; detecting, by the at least one processor, a response completion time of the completion of the response from the server processor-based system; logically associating, by the at least one processor, the response completion time as a rendering starting time for the user interface component in a nontransitory processor-readable storage medium; causing, by the at least one processor, a test screenshot image of the region which renders the user interface component to be captured; determining, by the at least one processor, whether the captured test screenshot image matches the baseline screenshot image for the user interface component; while the captured test screenshot image does not match the baseline screenshot image, iteratively: causing, by the at least one processor, a test screenshot image of the region which renders the user interface component to be captured; and determining, by the at least one processor, whether the captured test screenshot image matches the baseline screenshot image for the user interface component; responsive to determining the captured test screenshot image matches the baseline screenshot image, detecting, by the at least one processor, a rendering completed time which represents the time at which the captured test screenshot image was found to match the baseline screenshot image; and determining, by the at least one processor, a rendering execution time for the user interface component based at least in part on the rendering completed time for the user interface component and the rendering starting time for the user interface component.
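The measurement loop recited in claims 12 and 23 (mark a rendering starting time when the server response completes, then iteratively capture test screenshots of the component's region until one matches the baseline, and take the match time as the rendering completed time) can be sketched as follows. This is a minimal illustration only, not the claimed implementation: `capture_region` is a hypothetical callable standing in for a platform screenshot API, and screenshots are compared as raw bytes rather than by a real image-differencing routine.

```python
import time

def measure_render_time(capture_region, baseline, timeout_s=10.0, poll_s=0.05):
    """Return the rendering execution time in seconds: the elapsed time
    from the rendering starting time (server response completion) until
    the captured region first matches the baseline screenshot.

    capture_region -- hypothetical callable returning the current
                      screenshot bytes of the component's screen region
    baseline       -- baseline screenshot bytes captured beforehand
    """
    # Rendering starting time: the moment the server response completed.
    start = time.monotonic()
    # Iteratively capture test screenshots until one matches the baseline.
    while time.monotonic() - start < timeout_s:
        if capture_region() == baseline:
            # Rendering completed time minus rendering starting time.
            return time.monotonic() - start
        time.sleep(poll_s)
    raise TimeoutError("component did not re-render within the timeout")

# Simulated component: the first two polls return partial renders, after
# which the region matches the baseline screenshot.
frames = iter([b"blank", b"partial", b"final"])
elapsed = measure_render_time(lambda: next(frames, b"final"), b"final")
print(f"rendering execution time: {elapsed:.3f} s")
```

In practice the comparison step would tolerate minor pixel noise (for example by thresholding a per-pixel difference) rather than requiring byte-exact equality, and one such loop would run per selected component region.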
US Referenced Citations (270)
Number Name Date Kind
3970992 Boothroyd et al. Jul 1976 A
4346442 Musmanno Aug 1982 A
4347568 Giguere et al. Aug 1982 A
4359631 Lockwood et al. Nov 1982 A
4383298 Huff et al. May 1983 A
4410940 Carlson et al. Oct 1983 A
4429360 Hoffman et al. Jan 1984 A
4486831 Wheatley et al. Dec 1984 A
4491725 Pritchard Jan 1985 A
4503499 Mason et al. Mar 1985 A
4553206 Smutek et al. Nov 1985 A
4567359 Lockwood Jan 1986 A
4591974 Dornbush et al. May 1986 A
4598367 DeFrancesco et al. Jul 1986 A
4633430 Cooper Dec 1986 A
4642768 Roberts Feb 1987 A
4646229 Boyle Feb 1987 A
4646231 Green et al. Feb 1987 A
4646250 Childress Feb 1987 A
4648037 Valentino Mar 1987 A
4658351 Teng Apr 1987 A
4730252 Bradshaw Mar 1988 A
4794515 Hornung Dec 1988 A
4809170 Leblang et al. Feb 1989 A
4819156 DeLorme et al. Apr 1989 A
4831526 Luchs et al. May 1989 A
4845644 Anthias et al. Jul 1989 A
4860247 Uchida et al. Aug 1989 A
4912628 Briggs Mar 1990 A
4918588 Barrett et al. Apr 1990 A
4928243 Hodges et al. May 1990 A
4928252 Gabbe et al. May 1990 A
4949251 Griffin et al. Aug 1990 A
4951194 Bradley et al. Aug 1990 A
4959769 Cooper et al. Sep 1990 A
4985831 Dulong et al. Jan 1991 A
5072412 Henderson, Jr. et al. Dec 1991 A
5086502 Malcolm Feb 1992 A
5159669 Trigg et al. Oct 1992 A
5161226 Wainer Nov 1992 A
5170480 Mohan et al. Dec 1992 A
5175853 Kardach et al. Dec 1992 A
5201033 Eagen et al. Apr 1993 A
5220665 Coyle, Jr. et al. Jun 1993 A
5241677 Naganuma et al. Aug 1993 A
5257375 Clark et al. Oct 1993 A
5261099 Bigo et al. Nov 1993 A
5263134 Paal et al. Nov 1993 A
5265159 Kung Nov 1993 A
5282052 Johnson et al. Jan 1994 A
5317733 Murdock May 1994 A
5363214 Johnson Nov 1994 A
5448729 Murdock Sep 1995 A
5517644 Murdock May 1996 A
5530861 Diamant et al. Jun 1996 A
5537315 Mitcham Jul 1996 A
5553282 Parrish et al. Sep 1996 A
5583922 Davis et al. Dec 1996 A
5634052 Morris May 1997 A
5864340 Bertram et al. Jan 1999 A
5880724 Bertram et al. Mar 1999 A
5968125 Garrick et al. Oct 1999 A
6049877 White Apr 2000 A
6065026 Cornelia et al. May 2000 A
6128653 del Val et al. Oct 2000 A
6199079 Gupta et al. Mar 2001 B1
6240416 Immon et al. May 2001 B1
6247020 Minard Jun 2001 B1
6271846 Martinez et al. Aug 2001 B1
6272678 Imachi et al. Aug 2001 B1
6301592 Aoyama et al. Oct 2001 B1
6366920 Hoose et al. Apr 2002 B1
6377948 Kikuchi et al. Apr 2002 B2
6381744 Nanos et al. Apr 2002 B2
6385642 Chlan et al. May 2002 B1
6393407 Middleton, III et al. May 2002 B1
6393438 Kathrow et al. May 2002 B1
6405238 Votipka Jun 2002 B1
6407752 Harnett Jun 2002 B1
6430575 Dourish et al. Aug 2002 B1
6437803 Panasyuk et al. Aug 2002 B1
6463343 Emens et al. Oct 2002 B1
6490601 Markus et al. Dec 2002 B1
6510430 Oberwager et al. Jan 2003 B1
6538667 Duursma et al. Mar 2003 B1
6546405 Gupta et al. Apr 2003 B2
6592629 Cullen et al. Jul 2003 B1
6601047 Wang et al. Jul 2003 B2
6658167 Lee et al. Dec 2003 B1
6658659 Hiller et al. Dec 2003 B2
6915435 Merriam Jul 2005 B1
6918082 Gross et al. Jul 2005 B1
6978376 Giroux et al. Dec 2005 B2
6993529 Basko et al. Jan 2006 B1
6993661 Garfinkel Jan 2006 B1
7000230 Murray et al. Feb 2006 B1
7010503 Oliver et al. Mar 2006 B1
7020779 Sutherland Mar 2006 B1
7028223 Kolawa et al. Apr 2006 B1
7146495 Baldwin et al. Dec 2006 B2
7178110 Fujino Feb 2007 B2
7191195 Koyama et al. Mar 2007 B2
7206998 Pennell et al. Apr 2007 B2
7266537 Jacobsen et al. Sep 2007 B2
7299202 Swanson Nov 2007 B2
7299502 Schmeling et al. Nov 2007 B2
7318193 Kim et al. Jan 2008 B2
7321539 Ballantyne Jan 2008 B2
7322025 Reddy et al. Jan 2008 B2
7372789 Kuroda May 2008 B2
7421438 Turski et al. Sep 2008 B2
7440967 Chidlovskii Oct 2008 B2
7478064 Nacht Jan 2009 B1
7574048 Shilman et al. Aug 2009 B2
7584196 Reimer et al. Sep 2009 B2
7587327 Jacobs et al. Sep 2009 B2
7593532 Plotkin et al. Sep 2009 B2
7624189 Bucher Nov 2009 B2
7636898 Takahashi Dec 2009 B2
7650320 Nakano Jan 2010 B2
7676792 Irie et al. Mar 2010 B2
7698230 Brown et al. Apr 2010 B1
7711703 Smolen et al. May 2010 B2
7725456 Augustine May 2010 B2
7757168 Shanahan et al. Jul 2010 B1
7814078 Forman et al. Oct 2010 B1
7836291 Yim et al. Nov 2010 B2
7886046 Zeitoun et al. Feb 2011 B1
7930757 Shapiro et al. Apr 2011 B2
7949711 Chang et al. May 2011 B2
7996759 Elkady Aug 2011 B2
7996870 Suzuki Aug 2011 B2
8140589 Petri Mar 2012 B2
8146058 Sarkar et al. Mar 2012 B2
8166388 Gounares et al. Apr 2012 B2
8171404 Borchers et al. May 2012 B2
8234219 Gorczowski et al. Jul 2012 B2
8266592 Beto et al. Sep 2012 B2
8285685 Prahlad et al. Oct 2012 B2
8290971 Klawitter et al. Oct 2012 B2
8321483 Serlet et al. Nov 2012 B2
8355934 Virdhagriswaran Jan 2013 B2
8370403 Matsuki Feb 2013 B2
8438045 Erlanger May 2013 B2
8458582 Rogers et al. Jun 2013 B2
8650043 Phillips Feb 2014 B1
8667267 Garcia et al. Mar 2014 B1
8725682 Young et al. May 2014 B2
8731973 Anderson et al. May 2014 B2
8825626 Wallace et al. Sep 2014 B1
8832045 Dodd et al. Sep 2014 B2
9063932 Bryant et al. Jun 2015 B2
20010027420 Boublik et al. Oct 2001 A1
20010032092 Calver Oct 2001 A1
20020065879 Ambrose et al. May 2002 A1
20020087602 Masuda et al. Jul 2002 A1
20020120474 Hele et al. Aug 2002 A1
20020120776 Eggebraaten et al. Aug 2002 A1
20020138476 Suwa et al. Sep 2002 A1
20020194033 Huff Dec 2002 A1
20020194578 Irie et al. Dec 2002 A1
20020198743 Ariathurai et al. Dec 2002 A1
20030144887 Debber Jul 2003 A1
20030191938 Woods et al. Oct 2003 A1
20030212610 Duffy et al. Nov 2003 A1
20040039757 McClure Feb 2004 A1
20040059592 Yadav-Ranjan Mar 2004 A1
20040059740 Hanakawa et al. Mar 2004 A1
20040133606 Miloushev et al. Jul 2004 A1
20040186750 Surbey et al. Sep 2004 A1
20040193455 Kellington Sep 2004 A1
20040194026 Barrus et al. Sep 2004 A1
20040220975 Carpentier et al. Nov 2004 A1
20040230903 Elza et al. Nov 2004 A1
20040236614 Keith Nov 2004 A1
20040243969 Kamery et al. Dec 2004 A1
20040255275 Czerwonka Dec 2004 A1
20040267578 Pearson Dec 2004 A1
20050024387 Ratnakar et al. Feb 2005 A1
20050033988 Chandrashekhar et al. Feb 2005 A1
20050071203 Maus Mar 2005 A1
20050080804 Bradshaw, Jr. et al. Apr 2005 A1
20050137928 Scholl et al. Jun 2005 A1
20050144195 Hesselink et al. Jun 2005 A1
20050233287 Bulatov et al. Oct 2005 A1
20060047540 Hutten et al. Mar 2006 A1
20060059338 Feinleib et al. Mar 2006 A1
20060100912 Kumar et al. May 2006 A1
20060184452 Barnes et al. Aug 2006 A1
20060195491 Nieland et al. Aug 2006 A1
20060195494 Dietrich Aug 2006 A1
20060259524 Horton Nov 2006 A1
20070006222 Maier et al. Jan 2007 A1
20070016465 Schaad Jan 2007 A1
20070016829 Subramanian et al. Jan 2007 A1
20070061154 Markvoort et al. Mar 2007 A1
20070067772 Bustamante Mar 2007 A1
20070160070 Buchhop et al. Jul 2007 A1
20070186066 Desai et al. Aug 2007 A1
20070186214 Morgan Aug 2007 A1
20070244921 Blair Oct 2007 A1
20070244935 Cherkasov Oct 2007 A1
20070245230 Cherkasov Oct 2007 A1
20070282927 Polouetkov Dec 2007 A1
20080002830 Cherkasov et al. Jan 2008 A1
20080010542 Yamamoto et al. Jan 2008 A1
20080040690 Sakai Feb 2008 A1
20080086499 Wefers et al. Apr 2008 A1
20080091846 Dang Apr 2008 A1
20080120602 Comstock et al. May 2008 A1
20090007077 Musuvathi et al. Jan 2009 A1
20090055242 Rewari et al. Feb 2009 A1
20090119133 Yeransian et al. May 2009 A1
20090199160 Vaitheeswaran et al. Aug 2009 A1
20090271779 Clark Oct 2009 A1
20090282457 Govindavajhala Nov 2009 A1
20090287746 Brown Nov 2009 A1
20090328171 Bayus et al. Dec 2009 A1
20100060926 Smith et al. Mar 2010 A1
20100064230 Klawitter et al. Mar 2010 A1
20100064258 Gorczowski et al. Mar 2010 A1
20100091317 Williams et al. Apr 2010 A1
20100161616 Mitchell Jun 2010 A1
20100179883 Devolites Jul 2010 A1
20100199263 Clee et al. Aug 2010 A1
20100235392 McCreight et al. Sep 2010 A1
20110021250 Ickman et al. Jan 2011 A1
20110088014 Becker et al. Apr 2011 A1
20110145037 Domashchenko et al. Jun 2011 A1
20110161375 Tedder et al. Jun 2011 A1
20110173153 Domashchenko et al. Jul 2011 A1
20110184689 Awedikian et al. Jul 2011 A1
20110270975 Troup Nov 2011 A1
20110276875 McCabe et al. Nov 2011 A1
20110283177 Gates et al. Nov 2011 A1
20120150919 Brown et al. Jun 2012 A1
20120159647 Sanin et al. Jun 2012 A1
20120174069 Zavatone Jul 2012 A1
20120222014 Peretz et al. Aug 2012 A1
20120232934 Zhang et al. Sep 2012 A1
20130024418 Sitrick et al. Jan 2013 A1
20130054480 Ross et al. Feb 2013 A1
20130066901 Marcelais et al. Mar 2013 A1
20130073942 Cherkasov Mar 2013 A1
20130080760 Li et al. Mar 2013 A1
20130138555 Shishkov May 2013 A1
20130152047 Moorthi et al. Jun 2013 A1
20130212692 Sher-Jan et al. Aug 2013 A1
20130239217 Kindler et al. Sep 2013 A1
20130282406 Snyder et al. Oct 2013 A1
20130282407 Snyder et al. Oct 2013 A1
20130282408 Snyder et al. Oct 2013 A1
20130290786 Artzi et al. Oct 2013 A1
20130298256 Barnes et al. Nov 2013 A1
20140040867 Wefers et al. Feb 2014 A1
20140067428 Snyder et al. Mar 2014 A1
20140075500 B'Far et al. Mar 2014 A1
20140096262 Casso Apr 2014 A1
20140282977 Madhu et al. Sep 2014 A1
20140283098 Phegade et al. Sep 2014 A1
20140337973 Foster et al. Nov 2014 A1
20140358938 Billmaier et al. Dec 2014 A1
20150161121 Alvarez Jun 2015 A1
20150161538 Matus et al. Jun 2015 A1
20150161620 Christner Jun 2015 A1
20150169432 Sinyagin et al. Jun 2015 A1
20150215332 Curcic et al. Jul 2015 A1
20160055132 Garrison et al. Feb 2016 A1
20160103853 Kritt Apr 2016 A1
20160104132 Abbatiello et al. Apr 2016 A1
Foreign Referenced Citations (25)
Number Date Country
2646167 Oct 2004 CA
2649441 Oct 2007 CA
2761405 Jun 2012 CA
2733857 Sep 2012 CA
2737734 Oct 2012 CA
0585192 Mar 1994 EP
60-41138 Mar 1985 JP
3-282941 Dec 1991 JP
4-373026 Dec 1992 JP
11-143567 May 1999 JP
11-296452 Oct 1999 JP
0195093 Dec 2001 WO
2004088543 Oct 2004 WO
2007120771 Oct 2007 WO
2007120772 Oct 2007 WO
2007120773 Oct 2007 WO
2007120774 Oct 2007 WO
2008049871 May 2008 WO
2010030675 Mar 2010 WO
2010030676 Mar 2010 WO
2010030677 Mar 2010 WO
2010030678 Mar 2010 WO
2010030679 Mar 2010 WO
2010030680 Mar 2010 WO
2013072764 May 2013 WO
Non-Patent Literature Citations (32)
Entry
Yuan et al., “Using GUI Run-Time State as Feedback to Generate Test Cases,” 29th International Conference on Software Engineering, IEEE 2007, 10 pages.
Woo et al., “Meaningful interaction in web-based learning: A social constructivist interpretation,” Internet and Higher Education 10(1):15-25, 2007.
“AMS Real-Time Getting Started Guide,” AMS Services, Vertafore, Inc., 9 pages, 2008.
“VERITAS Replication Exec version 3.1 for Windows,” Administrator's Guide, pp. i-20, 49-68, and 119-160, Dec. 2004, 100 pages.
“Update insurance template according to changes in policy,” retrieved from https://www.google.com/?tbm=pts, on Sep. 24, 2012, 2 pages.
“Adobe Introduces Adobe Acrobat 3.0 Software,” PR Newswire, Jun. 3, 1996, 3 pages.
“CoreData Inc. Announces Technology and Marketing Agreement with MobileStar Network Corp.,” Business Wire, Aug. 26, 1998, 2 pages.
“CoreData Offers E-mail Connectivity for RemoteWorx,” Newsbytes News Network, Sep. 18, 1998, 1 page.
“Free Sticky Notes software—Sticky Notes program MoRUN.net Sticker Lite,” Jan. 11, 2006, retrieved from http://web.archive.org/web/20060112031435/http://www.sticky-notes.net/free/stickynotes.html, on Oct. 10, 2013, 2 pages.
“Internet lifts servers to 64 bits,” Electronic Engineering Times, Dec. 23, 1996, 3 pages.
“NotesPlusPlus,” Feb. 25, 2006, retrieved from http://web.archive.org/web/20060225020405/http://www.sharewareconnection.com/notesplusplus.htm, on Oct. 10, 2013, 2 pages.
“SPSS Unveils Aggressive Development Plans: 1999 Product Releases Will Focus on Scalability and Deployment Solutions for the Enterprise,” Business Wire, Feb. 18, 1999, 3 pages.
“Windows XP: The Complete Reference: Using Files and Folders,” Apr. 28, 2004, retrieved from http://web.archive.org/web/20040428222156/http://delltech.150m.com/XP/files/7.htm, on Oct. 10, 2013, 4 pages.
Announcement, “Coming Attraction, AMS Invites you to a Special Sneak Preview,” AMS Services, 1 page, Aug. 1, 2008.
Batrouni et al., “Method and System of Assessing Risk Associated With Users Based at Least in Part on Online Presence of the User,” U.S. Appl. No. 14/630,509, filed Feb. 24, 2015, 71 pages.
Brochure, “AMS 360—Business Growth. Productivity. Proven Technology.,” Vertafore, Inc., 8 pages, 2008.
Brown et al., “Agency Management System and Content Management System Integration,” U.S. Appl. No. 61/422,090, filed Dec. 10, 2010, 54 pages.
Corriveau et al., “AMS Portal Server: Bridging the Gap Between Web Presentation and the Back Office,” White Paper, AMS Services, 13 pages, 2008.
Extended European Search Report, dated Jul. 9, 2012, for Application No. 07755347.7, 8 pages.
Extended European Search Report, dated Jun. 14, 2012, for Application No. 07755348.5, 8 pages.
Extended European Search Report, dated Jun. 19, 2012, for Application No. 07755349.3, 8 pages.
Extended European Search Report, dated Jun. 14, 2012, for Application No. 07755350.1, 9 pages.
Fogel, “Open Source Development With CVS,” Copyright 1999, 2000, retrieved from http://web.archive.org/web/20000815211634/http://cvsbook.red-bean.com/cvsbook.ps, on Oct. 10, 2013, 218 pages.
Gadia, “A Homogeneous Relational Model and Query Languages for Temporal Databases,” ACM Transactions on Database Systems 13(4):418-448, Dec. 1988.
Gage, “Sun's ‘objective’ is to populate Java networks,” Computer Reseller News, Apr. 15, 1996, p. 69, 2 pages.
International Search Report and Written Opinion, mailed Aug. 5, 2008, for PCT/US2007/009040, 7 pages.
International Search Report and Written Opinion, mailed Jul. 18, 2008, for PCT/US2007/009041, 8 pages.
International Search Report and Written Opinion, mailed Jul. 14, 2008, for PCT/US2007/009042, 6 pages.
International Search Report and Written Opinion, mailed Jul. 18, 2008, for PCT/US2007/009043, 9 pages.
Murdock, “Office Automation System for Data Base Management and Forms Generation,” U.S. Appl. No. 07/471,290, filed Jan. 26, 1990, 163 pages.
Snodgrass et al., “Temporal Databases,” IEEE Computer, Sep. 1986, pp. 35-42.
Srivastava et al., “Automated Software Testing Using Metahurestic Technique Based on an Ant Colony Optimization,” International Symposium on Electronic System Design, Bhubaneswar, Dec. 20-22, 2010, 7 pages.