Configurable framework for testing and analysis of client-side web browser page performance

Information

  • Patent Grant
  • Patent Number
    8,977,739
  • Date Filed
    Tuesday, May 3, 2011
  • Date Issued
    Tuesday, March 10, 2015
Abstract
The present invention features methods, computer program products and apparatuses for measuring client-side computer system performance that feature identifying one of a plurality of uniform resource locator addresses contained on a server computing system, with each of the uniform resource locator addresses being associated with computing resources. The computing resources associated with any one of the plurality of uniform resource locator addresses are different from the computing resources corresponding to the remaining uniform resource locator addresses. The computing resources are accessible through a web-browser that accesses the computing resources associated with the one of the plurality of uniform resource locator addresses. Metrics of the interactions between the web-browser and the computing resources associated with the one of the plurality of uniform resource locator addresses are measured. It is determined whether the metrics satisfy pre-determined operational requirements.
Description
COPYRIGHT NOTICE

A portion of the disclosure of this patent document contains material which is subject to copyright protection. The copyright owner has no objection to the facsimile reproduction by anyone of the patent document or the patent disclosure, as it appears in the Patent and Trademark Office patent file or records, but otherwise reserves all copyright rights whatsoever.


BACKGROUND

Web-based applications typically use a web browser to support execution. These web-based applications are required to be tested against their specified functionalities in order to verify that execution will proceed as intended. For large web-based applications that have many lines of code, automated testing is preferable, because automated testing saves substantial labor as compared to manual testing.


One example of an automated testing system is available under the trade name Selenium. Selenium is a portable software testing framework for web applications that provides a record/playback tool for authoring tests without learning a test scripting language. Selenium provides a test domain specific language (DSL) to write tests in a number of popular programming languages, including C#, Java, Ruby, Groovy, Python, PHP, and Perl. Test playback is possible in most modern web browsers. Selenium deploys on Windows, Linux, and Macintosh platforms. Selenium is open source software released under the Apache 2.0 license and can be downloaded and used without charge.


Another tool used in conjunction with automated testing systems, such as Selenium, is known as Jiffy-web, or Jiffy. Jiffy is advertised as a web page instrumentation and measurement suite and was first released in June of 2008. Combined, these tools facilitate the testing of computer applications in an efficient manner. However, these tools fail to demonstrate how to apply the same techniques to the testing of web-based computer network performance.


A need exists, therefore, to provide testing techniques for web-based computer network performance.


BRIEF SUMMARY

The present invention features methods, computer program products and apparatuses for measuring client-side computer system performance that feature identifying one of a plurality of uniform resource locator addresses contained on a server computing system, with each of the uniform resource locator addresses being associated with computing resources. The computing resources associated with any one of the plurality of uniform resource locator addresses are different from the computing resources corresponding to the remaining uniform resource locator addresses. The computing resources are accessible through a web-browser that accesses the computing resources associated with the one of the plurality of uniform resource locator addresses. Metrics of the interactions between the web-browser and the computing resources associated with the one of the plurality of uniform resource locator addresses are measured. It is determined whether the metrics satisfy pre-determined operational requirements. These and other embodiments are discussed more fully below.





BRIEF DESCRIPTION OF THE DRAWINGS


FIG. 1 is a simplified plan view of a computer network in which the current invention is practiced;



FIG. 2 is a plan view showing a representative architecture in which a multi-tenant database system, shown in FIG. 1, is employed;



FIG. 3 is a plan view of a computer system employed by a user to communicate with the multi-tenant database shown in FIG. 2;



FIG. 4 is a detailed view of a configuration testing framework shown in FIG. 1; and



FIG. 5 is a graphical display showing web page load time trends employing the current invention.





DETAILED DESCRIPTION

Referring to FIG. 1, a computer network 10 includes a multi-tenant database architecture 12 in data communication with client side facilities 14 and a configurable test framework (CTF) 16. Components of computer network 10 may be in data communication over any type of known data communication network 18 or combination of networks of devices that communicate with one another. Data communication network 18 can be any one or any combination of a LAN (local area network), WAN (wide area network), telephone network, wireless network, point-to-point network, star network, token ring network, hub network, or other appropriate configuration. As the most common type of computer network in current use is a TCP/IP (Transmission Control Protocol/Internet Protocol) network, such as the global inter-network of networks often referred to as the “Internet”, it will be used in many of the examples herein. However, it should be understood that the networks that the present invention might use are not so limited, although TCP/IP is a frequently implemented protocol. As a result, the components of network 10 may be co-located in a common geographic area and/or building or spread across a diverse area of the globe, e.g., on several different continents. Typically, client side facilities 14 and CTF 16 are in data communication with architecture 12 over the Internet using suitable computer systems. However, in other configurations CTF 16 may be included in architecture 12. Architecture 12 includes a multi-tenant database system (MTS) in which various elements of hardware and software are shared by multiple users 20, 22 and 24 associated with client side facilities 14.


A given application server of MTS may simultaneously process requests for a great number of users, and a given database table may store rows for a potentially much greater number of users. To that end, and as shown in FIG. 2, architecture 12 includes a processor sub-system 28, memory space 30, in data communication therewith, and network interface resources 32 in data communication with both memory space 30 and processor sub-system 28. Processor sub-system 28 may be any known processor sub-system in the art, e.g., the CORE DUO® or the CORE 2 DUO® from Intel Corporation of Santa Clara, Calif. Memory space 30 includes drive storage 34, shown as one or more hard drives 36 and 38, as well as data and instruction registers, shown as 40, and volatile and non-volatile memory shown as 42.


Architecture 12 provides access to a database 44 by multiple users 20, 22 and 24 of client side facilities 14 over data communication network 18 using standard computer systems (not shown). To that end, network interface resources 32 include a plurality of virtual portals 45-47. Each virtual portal 45-47 provides an “instance” of a portal user interface coupled to allow access to database 44. Typically, tenants obtain rights to store information, referred to as tenant information 48 and 50, on database 44 and make the same accessible to one or more users 20, 22 and 24 to whom the tenant provides authorization. This is typically achieved by rental agreements between the tenant and an owner/provider of architecture 12. In this manner, architecture 12 provides an on-demand database service to users 20, 22 and 24 that is not necessarily concerned with building and/or maintaining the database system; rather, these functions are addressed between the tenant and the owner/provider.


With architecture 12, multiple users 20, 22 and 24 may access database 44 through a common network address, in this example a uniform resource locator (URL). In response, web-pages and other content may be provided to users 20, 22 and 24 over data communication network 18. The resources of database 44 that users 20, 22 and 24 may access can be different, depending on the security or permission level and/or tenant association of each of users 20, 22 and 24. As a result, data structures included in tenant information 48 and 50 are managed so as to be allocated at the tenant level, while other data structures might be managed at the user level. Because architecture 12 supports multiple tenants, including possible competitors, security protocols 52 and other system software 54, stored for example on hard drive 38, restrict applications and their use to only those users 20, 22 and 24 with proper access rights. Also, because many tenants may desire access to architecture 12 rather than maintain their own system, redundancy, up-time, and backup are additional functions that may be implemented in architecture 12.


Referring to both FIGS. 2 and 3, to facilitate web-based CRM, a user system 55 employed by one of users 20, 22 and 24 typically communicates with architecture 12 using TCP/IP and, at a higher network level, other common Internet protocols, such as HTTP, FTP, AFS, WAP, etc. To that end, user system 55 may be any computing device capable of interfacing directly or indirectly to the Internet or other network connection, such as a desktop personal computer, workstation, laptop, PDA, cell phone, or any wireless access protocol (WAP) enabled device and the like running an HTTP client. An example of a user system 55 includes a processor system 56, a memory system 57, an input system 58, and an output system 59. Processor system 56 may be any combination of one or more processors. Memory system 57 may be any combination of one or more memory devices, volatile, and/or non-volatile memory. A portion of memory system 57 is used to run operating system 60 in which an HTTP client 61 executes. Input system 58 may be any combination of input devices, such as one or more keyboards, mice, trackballs, scanners, cameras, and/or interfaces to networks. Output system 59 may be any combination of output devices, such as one or more displays 63, printers, and/or interfaces to networks. HTTP client 61 allows users 20, 22 and 24 of user systems 55 to access, process and view information, pages and applications available to them from server system architecture 12 over network 18. Examples of HTTP client 61 include various browsing applications, such as Microsoft's Internet Explorer browser, Netscape's Navigator browser, Opera's browser, or a WAP-enabled browser in the case of a cell phone, PDA or other wireless device, or the like. Access is gained to requisite tenant information 48 and 50 by entering the URL (not shown) into the URL box 62 of HTTP client 61. The URL directs users 20, 22 and 24 to the appropriate virtual portal to determine authorization and permission level to access the requisite tenant information 48 and 50. In one embodiment, users 20, 22 and 24 gain access to web pages stored on database 44. The web pages are rendered in HTTP client 61.


Referring to both FIGS. 2 and 3, an important aspect of providing a desired user experience is avoiding functional latency in the interactions of users 20, 22 and 24 with database 44. One manner in which functional latency may detract from the user experience is HTTP client 61 requiring too much time to parse and render a web page. This may occur in response to changes implemented to architecture 12 and/or user system 55. One advantage for a tenant utilizing architecture 12 is that functional and/or computational improvements to the same and/or user system 55 may be provided with minimal and/or no deleterious effects on a user's experience of database 44.


Referring to FIGS. 2, 3 and 4, to minimize deleterious effects on a user's experience when implementing changes to architecture 12 and/or user system 55, testing of the proposed changes is undertaken to determine whether the user experience is degraded as a result of the changes. The tests may be implemented on CTF 16. CTF 16 may include hundreds of computer systems (not shown), colloquially referred to as a server farm, upon which an emulation 72 of network 10 is implemented. Emulation 72 mimics the operational interaction and operational characteristics of client side facilities 14 and architecture 12. It is desired to minimize the number of man-hours required to perform the testing. Given the complexity of architecture 12 and the number of users 20, 22 and 24 that may access the same at any given time, automated testing is employed. To that end, an open source browser testing framework (BTF) 74 is employed in CTF 16. One example of BTF 74 is available under the trade name Selenium RC from http://seleniumhq.org/. An instance 76 of BTF 74 is run in a browser 78 of emulation 72. Emulation 72 is established to mimic operations of user system 55 of one of users 20, 22 and 24. As a result, browser 78 is an emulation of HTTP client 61. Browser 78 executes a sequence of test methods 79, 80 and 81, referred to as a test group 82, on a web-based application under test, defined by a test configuration file 84, which is run on emulation 72. Specifically, BTF 74 opens web browser 78 and provides methods 79-81 that interact with browser 78 programmatically, i.e., without the need for human intervention once BTF 74 has launched. BTF 74 navigates to a specific URL address and queries the content of a web page under test (WUT) 88.
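
By way of illustration, the following is a minimal sketch of how a harness might drive Selenium RC from Java in the manner just described. The host, port, browser string, base URL, and class name are assumptions for the example, not recitations of the patent; the Selenium RC calls themselves (start, open, waitForPageToLoad, stop) are part of the standard Selenium RC Java client.

import com.thoughtworks.selenium.DefaultSelenium;

public class BrowserTestDriver {
    public static void main(String[] args) {
        // Connect to a Selenium RC server, assumed here to run on localhost:4444;
        // "*firefox" selects the browser that plays the role of browser 78.
        DefaultSelenium selenium = new DefaultSelenium(
                "localhost", 4444, "*firefox", "http://test-host.example");
        selenium.start();                        // launch the browser programmatically
        try {
            selenium.open("/home.jsp");          // navigate to the page under test
            selenium.waitForPageToLoad("30000"); // wait up to 30 s for rendering
        } finally {
            selenium.stop();                     // close the browser session
        }
    }
}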


Test configuration file 84 includes test configuration data 85 and environmental setup mechanism 86. Test configuration data 85 configures individual tests and includes metadata to facilitate execution of the test. Examples of the metadata that may be included in test configuration data 85 include a uniform resource locator (URL) address of WUT 88. This may be an explicit/static URL address, or it may include information that a web application (not shown) may employ to derive the URL address at which WUT 88 is located. Additionally, metadata may include environmental configuration information so that emulation 72 accurately mimics the interactions of architecture 12 with client side facilities 14. Metadata is also included identifying metrics that must be satisfied by the test, e.g., a failure threshold for events that occur on client-side facilities 14. One example of a failure threshold would be web-page loading time, i.e., whether WUT 88 is rendered upon browser 78 in a predetermined amount of time. The time is typically measured in milliseconds. Were the interactions of client side facilities 14 and architecture 12 to occur outside of the allotted time, the test would be reported as a failure. Other metadata that may be included in test configuration data 85 identifies the owner of the test; an e-mail or other electronic address to which a communication is transmitted indicating the results of the test, e.g., whether a failure has occurred; test case tags; logical grouping of test cases; or any other associated metadata specific to a given test. Finally, the metadata in test configuration data 85 identifies the setup steps communicated to environmental setup mechanism 86. An example of test configuration data 85 is as follows:

<TestCases xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:noNamespaceSchemaLocation="TestCases.xsd">
  <TestCase name="testCase1" owner="jtroup" urlType="staticUrl"
      url="/home.jsp" setupSteps="enableHomePage">
    <MeasuredEvent name="pageLoadTime" failureThreshold="200"/>
  </TestCase>
  <TestCase name="testCase2" owner="jtroup" urlType="derived"
      DAO="CustomerObject" view="edit" setupSteps="enableCustomerObjects">
    <MeasuredEvent name="pageLoadTime" failureThreshold="200"/>
  </TestCase>
</TestCases>

The test configuration data 85 recited above includes two test cases: testCase1 and testCase2. Typically, test configuration data 85 is stored on CTF 16 in Extensible Markup Language (XML). The first test case, testCase1, is directed to a static URL address for BTF 74. The second test case, testCase2, provides metadata that will be used by environmental setup mechanism 86 to create a new CustomerObject and construct the URL address for that object's “edit” view. Both testCase1 and testCase2 pass named steps to environmental setup mechanism 86, “enableHomePage” and “enableCustomerObjects”, respectively. The metric established by test configuration data 85 that WUT 88 must satisfy is that WUT 88 must be rendered by browser 78 in no more than 200 ms.
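
As a rough illustration of how CTF 16 might consume such a file, the following sketch reads the TestCase elements with the standard Java DOM API (JAXP). The file name TestCases.xml and the class name are assumptions for the example.

import java.io.File;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.NodeList;

public class TestConfigReader {
    public static void main(String[] args) throws Exception {
        // Parse the test configuration data (stored as XML on CTF 16).
        Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder().parse(new File("TestCases.xml"));
        NodeList cases = doc.getElementsByTagName("TestCase");
        for (int i = 0; i < cases.getLength(); i++) {
            Element tc = (Element) cases.item(i);
            Element event = (Element) tc.getElementsByTagName("MeasuredEvent").item(0);
            // Echo the metadata that drives the test: URL type, setup steps,
            // and the pageLoadTime failure threshold in milliseconds.
            System.out.printf("%s: urlType=%s setup=%s threshold=%s ms%n",
                    tc.getAttribute("name"),
                    tc.getAttribute("urlType"),
                    tc.getAttribute("setupSteps"),
                    event.getAttribute("failureThreshold"));
        }
    }
}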


Environmental setup mechanism 86 performs any desired web application pre-test setup of emulation 72. This may include configuration to accurately emulate a desired web server configuration; database population; and web application metadata configuration that may affect the generated content of WUT 88. Environmental setup mechanism 86 essentially allocates the computer resources of CTF 16 that are accessible by browser 78 by virtue of establishing emulation 72. As a result, the environmental configuration metadata included in test configuration data 85 determines which steps are performed by environmental setup mechanism 86. An example of environmental setup mechanism 86 is as follows:



<SetupSteps xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:noNamespaceSchemaLocation="SetupSteps.xsd">
  <SetupStep name="enableHomePage">
    <JavaCall method="PageUtilities.enableHomePage"/>
  </SetupStep>
  <SetupStep name="enableCustomerObjects">
    <JavaCall method="ObjectPermissions.enableCustomerObjects"/>
  </SetupStep>
</SetupSteps>

As with the test configuration data 85, the environmental setup mechanism 86 is written in XML. This environmental setup mechanism 86 calls architecture 12 logic written in Java; the Java code enables the specified features.
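
One plausible way to bridge from each JavaCall element to Java logic is reflection: split the method attribute into a class name and a method name, then invoke it. The sketch below assumes the named setup methods are static and take no arguments; the class name SetupStepRunner is hypothetical.

import java.lang.reflect.Method;

public final class SetupStepRunner {
    // Executes one <JavaCall method="Class.method"/> element, e.g.
    // "PageUtilities.enableHomePage", by reflective dispatch.
    public static void runJavaCall(String qualifiedName) throws Exception {
        int dot = qualifiedName.lastIndexOf('.');
        Class<?> target = Class.forName(qualifiedName.substring(0, dot));
        Method setup = target.getMethod(qualifiedName.substring(dot + 1));
        setup.invoke(null); // assumes a static, no-argument setup method
    }
}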


BTF 74 also includes Java code that communicates with Selenium to open WUT 88 in browser 78 and determine the time required to render it. An example of the BTF Java code is as follows:

public int getPageLoadTime() {
    String url = getUrlToTest();
    open(url);
    return getLoadTimeFromClient();
}

In short, test configuration data 85 provides a data access object (DAO) name and the name of a view for that object, e.g., “detail view”, “edit view”, etc., to environmental setup mechanism 86. Environmental setup mechanism 86 operates on the DAO to mimic, or create a mock instance of, the specified DAO and constructs a URL address to the DAO at the specified view. BTF 74 navigates to the constructed URL address and gathers performance metrics on the generated page that represents a view of the DAO.
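
A derived URL address might be constructed along the lines of the sketch below. The path scheme and helper name are purely assumptions for illustration; the patent does not recite the URL format.

public final class DerivedUrlBuilder {
    // Builds a URL for viewing a (possibly mock) DAO instance at a named view,
    // e.g. buildViewUrl("https://host.example", "CustomerObject", "42", "edit").
    public static String buildViewUrl(String base, String dao, String id, String view) {
        return base + "/" + dao + "/" + id + "?view=" + view; // assumed scheme
    }
}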


Included in WUT 88 is a client-side data provider 90 to measure the performance metrics. Client-side data provider 90 analyzes the actions of web browser 78 and generates reports concerning the same. One metric of interest is the load time of WUT 88, representing the time taken for browser 78 to parse and render the HTML used to define WUT 88 into a usable web page. One manner in which to gather performance timings is to compare JavaScript timestamps at various points within client-side code. To that end, client-side data provider 90 uses JavaScript to gather the page load time and stores it in a table of measurements that may be retrieved by BTF 74. An example of a client-side data provider is Jiffy-web, or Jiffy, a web page instrumentation and measurement suite first released in June of 2008. Support for Jiffy is available from http://code.google.com/p/jiffy-web/wiki/Jiffy_js.
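
For instance, BTF 74 might read back such a measurement table through Selenium RC's JavaScript evaluation facility. The sketch below is an assumption-laden illustration: getEval and the browserbot window accessor are standard Selenium RC, but the Jiffy accessor expression and measurement key shown are hypothetical, as the exact Jiffy API is not recited here.

// A minimal sketch, assuming "selenium" is a started DefaultSelenium instance
// and that the page records its load time in a Jiffy measurement table.
public int getLoadTimeFromClient() {
    // getEval() runs JavaScript in the test runner; browserbot exposes the
    // window of the page under test. The Jiffy accessor below is assumed.
    String millis = selenium.getEval(
        "var w = this.browserbot.getCurrentWindow();"
      + "w.Jiffy ? w.Jiffy.getMeasures('pageLoadTime').elapsed : -1;");
    return (int) Double.parseDouble(millis);
}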


Also included in CTF 16 is a data storage and retrieval mechanism (DSRM) 92. DSRM 92 provides a persistent storage place for test case measurements and metadata, e.g., the time a particular test was performed, the version of browser 78, the system hardware on which the test was run, and the like. DSRM 92 may be stored as a plain text file on CTF 16 or architecture 12, or in any number of persistent databases such as MySQL. DSRM 92 may also provide trending data in virtually any format, such as graph 94, shown in FIG. 5. As shown, graph 94 includes an x-axis representing the times individual tests have been performed, and a y-axis representing event duration. The slope of line 96 illustrates an increase in page load time occurring over time. Typically, the trending display is implemented in JavaScript; an example of the stored test case results used to generate graph 94 is as follows:

<TestCaseResults xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:noNamespaceSchemaLocation="TestCaseResults.xsd">
  <TestCaseResult testCaseName="testCase1" dateRun="03/20/2010" failed="false">
    <MeasuredEvent name="pageLoadTime" value="60" failureThreshold="200"/>
  </TestCaseResult>
  <TestCaseResult testCaseName="testCase1" dateRun="03/21/2010" failed="false">
    <MeasuredEvent name="pageLoadTime" value="75" failureThreshold="200"/>
  </TestCaseResult>
  <TestCaseResult testCaseName="testCase1" dateRun="03/22/2010" failed="false">
    <MeasuredEvent name="pageLoadTime" value="184" failureThreshold="200"/>
  </TestCaseResult>
  <TestCaseResult testCaseName="testCase1" dateRun="03/23/2010" failed="true">
    <MeasuredEvent name="pageLoadTime" value="250" failureThreshold="200"/>
  </TestCaseResult>
</TestCaseResults>

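A small sketch of how such persisted results might be read back for trending, again using the standard Java DOM API; the file name and class name are assumptions for the example.

import java.io.File;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.NodeList;

public class LoadTimeTrend {
    public static void main(String[] args) throws Exception {
        Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder().parse(new File("TestCaseResults.xml"));
        NodeList results = doc.getElementsByTagName("TestCaseResult");
        for (int i = 0; i < results.getLength(); i++) {
            Element r = (Element) results.item(i);
            Element e = (Element) r.getElementsByTagName("MeasuredEvent").item(0);
            int value = Integer.parseInt(e.getAttribute("value"));
            int limit = Integer.parseInt(e.getAttribute("failureThreshold"));
            // Each line corresponds to one point on graph 94; values creeping
            // toward the threshold reproduce the upward slope of line 96.
            System.out.printf("%s %s: %d ms (%s)%n",
                    r.getAttribute("dateRun"), r.getAttribute("testCaseName"),
                    value, value > limit ? "FAIL" : "pass");
        }
    }
}
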
Referring again to FIG. 4, although CTF 16 has been described with respect to a single data configuration file 84, in practice a plurality of data configuration files would be included, three of which are shown as 84, 98 and 99. Each of the data configuration files 84, 98 and 99 would be associated with a uniform resource locator address that is different from the uniform resource locator addresses associated with the remaining data configuration files. It should be understood that each data configuration file 84, 98 and 99 includes corresponding test configuration data 85, 100 and 101, respectively, and environmental setup mechanisms 86, 102 and 103, respectively. These allocate different resources of CTF 16 to be accessible by browser 78, because each generates a different emulation 72. Thus, the computing resources of CTF 16 allocated to browser 78 depend upon the uniform resource locator address to which browser 78 is directed, so that different emulations 72 may occur. Accordingly, the computing resources associated with any one of the plurality of uniform resource locator addresses are different from the computing resources corresponding to the remaining uniform resource locator addresses defined by data configuration files 84, 98 and 99.
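
Putting the pieces together, a driver over several data configuration files might look like the sketch below. The directory name and the two commented hooks are hypothetical stand-ins for the setup and test-execution plumbing described above.

import java.io.File;

public class ConfigurableTestFrameworkDriver {
    public static void main(String[] args) {
        // Each XML file plays the role of one data configuration file (84, 98, 99);
        // each defines its own emulation, so browser 78 is directed to a different
        // URL address, and to different CTF resources, on each iteration.
        File[] configs = new File("test-configs")
                .listFiles((dir, name) -> name.endsWith(".xml"));
        if (configs == null) return;
        for (File config : configs) {
            // runEnvironmentalSetup(config); // hypothetical: allocate emulation 72
            // runTestGroup(config);          // hypothetical: execute methods 79-81
            System.out.println("would run test cases from " + config.getName());
        }
    }
}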


The computer code for operating and configuring network 10 to intercommunicate and to process web pages, applications and other data and media content as described herein is preferably downloaded and stored on a hard disk, but the entire program code, or portions thereof, may also be stored in any other volatile or non-volatile memory medium or device as is well known, such as a ROM or RAM, or provided on any media capable of storing program code, such as any type of rotating media including floppy disks, optical discs, digital versatile disks (DVD), compact disks (CD), microdrives, and magneto-optical disks, and magnetic or optical cards, nanosystems (including molecular memory ICs), or any type of media or device suitable for storing instructions and/or data. Additionally, the entire program code, or portions thereof, may be transmitted and downloaded from a software source over a transmission medium, e.g., over the Internet or from another server, as is well known, or transmitted over any other conventional network connection as is well known (e.g., extranet, VPN, LAN, etc.) using any communication medium and protocols (e.g., TCP/IP, HTTP, HTTPS, Ethernet, etc.) as are well known. It will also be appreciated that computer code for implementing embodiments of the present invention can be implemented in any programming language that can be executed on a client system and/or server or server system such as, for example, C, C++, HTML, any other markup language, Java™, JavaScript, ActiveX, any other scripting language, such as VBScript, and many other programming languages as are well known. (Java™ is a trademark of Sun Microsystems, Inc.). Therefore, the scope of the appended claims should be accorded the broadest interpretation so as to encompass all such modifications and similar arrangements.

Claims
  • 1. A method of measuring client-side computer system performance, said method comprising: identifying, by a configurable test framework in communication with a database architecture and the client-side computer system of a computer network, a test configuration file, the test configuration file including environmental configuration information and test configuration data, the test configuration data including a name for a data access object and a view for a data access object and the test configuration data identifying a first uniform resource locator address of a plurality of uniform resource locator addresses contained on a server computing system; creating, by the configurable test framework, an instance of the data access object using the test configuration data; constructing, by the configurable test framework, the first uniform resource locator address for viewing the instance of the object at the view specified by the test configuration data; establishing, by the configurable test framework, an emulation of operation of the computer network including the client-side computer system and the database architecture based on the environmental configuration information, the emulation comprising computing resources associated with the first uniform resource locator address, the computing resources being allocated based on the environmental configuration information, with the computing resources associated with the first uniform resource locator address being different from computing resources corresponding to the remaining uniform resource locator addresses; accessing, with a web-browser of the emulation, the computing resources associated with the first uniform resource locator address constructed for viewing the instance of the object at the specified view; measuring operational characteristics between said web-browser and the computing resources associated with the first uniform resource locator address, wherein the operational characteristics include a load time associated with the computing resources; and determining, utilizing instructions executed by the web-browser, whether said operational characteristics satisfy pre-determined operational requirements, defining measured results.
  • 2. The method as recited in claim 1 further including storing said measured results on said server computing system, with the computing resources being stored on said computer system, with said web-browser being run on said server computing system.
  • 3. The method as recited in claim 1 further including ascertaining that said operational characteristics failed to satisfy said pre-determined operational requirements and transmitting an electronic message to a pre-defined address including information corresponding to said results.
  • 4. The method as recited in claim 1 wherein said computer resources further includes web page content and accessing further includes said web-browser programmatically interacting with said web page content.
  • 5. The method as recited in claim 1 wherein said computer resources further includes web page content and accessing further includes rendering perceivable stimuli in said web browser in response to interacting with said web page content, with measuring further including determining a time for said web browser to render said perceivable stimuli.
  • 6. The method as recited in claim 1, the test configuration data comprising metadata stored on said server computing system, wherein identifying further includes generating, with a web application, the first uniform resource locator address from the metadata.
  • 7. The method of claim 1, wherein the emulation comprises a desired population for a database of the database architecture.
  • 8. The method of claim 1, the database architecture comprising a multi-tenant database system, wherein the emulation comprises a desired configuration for an application server of the multi-tenant database system.
  • 9. The method of claim 1, wherein: measuring the operational characteristics comprises gathering performance metrics on a generated page that represents the specified view of the object.
  • 10. A computer product of the type comprising a non-transitory computer readable medium that contains a program to measure client-side computer system performance, said program comprising: computer code that identifies a test configuration file, the test configuration file including environmental configuration information and test configuration data, the test configuration data including a name for a data access object and a view for a data access object and the test configuration data identifying a first uniform resource locator address of a plurality of uniform resource locator addresses contained on a server computing system; computer code that creates an instance of the data access object using the test configuration data; computer code that constructs the first uniform resource locator address for viewing the instance of the object at the view specified by the test configuration data; computer code that establishes an emulation of operation of a computer network including the client-side computer system and a database architecture based on the environmental configuration information, the emulation comprising computing resources associated with the first uniform resource locator address, the computing resources being allocated based on the environmental configuration information, with the computing resources associated with the first uniform resource locator address being different from computing resources corresponding to the remaining uniform resource locator addresses; computer code that accesses, with a web-browser of the emulation, the computing resources associated with the first uniform resource locator address constructed for viewing the instance of the object at the specified view; computer code to measure operational characteristics between said web-browser and the computing resources associated with the first uniform resource locator address, wherein the operational characteristics include a load time associated with the computing resources; and computer code, executable by the web-browser, to determine whether said operational characteristics satisfy pre-determined operational requirements, defining measured results.
  • 11. The computer product as recited in claim 10 further including computer code to store said measured results on said server computing system, with the computing resources being stored on said computer system, with said web-browser being run on said server computing system.
  • 12. The computer product as recited in claim 10 further including computer code to ascertain that said operational characteristics failed to satisfy said pre-determined operational requirements and transmit an electronic message to a pre-defined address including information corresponding to said results.
  • 13. The computer program product as recited in claim 10 wherein said computer resources further includes web page content and said computer code to access further includes a subroutine to have said web-browser programmatically interact with said web page content.
  • 14. An apparatus to measure client-side computer system performance, said apparatus comprising: a processor; one or more stored sequences of instructions which, when executed by the processor, cause the processor to carry out the steps of: identifying a test configuration file, the test configuration file including environmental configuration information and test configuration data, the test configuration data including a name for a data access object and a view for a data access object and the test configuration data identifying a first uniform resource locator address of a plurality of uniform resource locator addresses contained on a server computing system; creating an instance of the data access object using the test configuration data; constructing the first uniform resource locator address for viewing the instance of the object at the view specified by the test configuration data; establishing an emulation of operation of the computer network including the client-side computer system and the database architecture based on the environmental configuration information, the emulation comprising computing resources associated with the first uniform resource locator address, the computing resources being allocated based on the environmental configuration information, with the computing resources associated with the first uniform resource locator address being different from computing resources corresponding to the remaining uniform resource locator addresses; accessing, with a web-browser of the emulation, the computing resources associated with the first uniform resource locator address constructed for viewing the instance of the object at the specified view; measuring, utilizing instructions executed by the web-browser, operational characteristics between said web-browser and the computing resources associated with the first uniform resource locator address, wherein the operational characteristics include a load time associated with the computing resources; and determining whether said operational characteristics satisfy pre-determined operational requirements, defining measured results.
  • 15. The apparatus as recited in claim 14 where said sequence of instructions further includes additional instructions, when executed by the processor, cause the processor to carry out a step of storing said measured results on said server computing system, with the computing resources being stored on said computer system.
  • 16. The apparatus as recited in claim 14 where said sequence of instructions further includes additional instruction, when executed by the processor, cause the processor to carry out a step of ascertaining that said operational characteristics failed to satisfy said pre-determined operational requirements and transmitting an electronic message to a pre-defined address including information corresponding to said results.
  • 17. A computer product of the type comprising a non-transitory computer readable medium that contains a program to measure client-side computer system performance by performing the method of claim 1.
CROSS REFERENCE TO RELATED APPLICATIONS

The present application claims priority to U.S. provisional patent application No. 61/330,838, filed May 3, 2010, entitled CONFIGURABLE FRAME WORK FOR TESTING AND ANALYSIS OF CLIENT-SIDE WEB BROWSER PAGE PERFORMANCE and identifying James Troup as inventor. The aforementioned patent application is incorporated by reference herein.

US Referenced Citations (235)
Number Name Date Kind
5072370 Durdik Dec 1991 A
5577188 Zhu Nov 1996 A
5608872 Schwartz et al. Mar 1997 A
5649104 Carleton et al. Jul 1997 A
5715450 Ambrose et al. Feb 1998 A
5761419 Schwartz et al. Jun 1998 A
5787437 Potterveld et al. Jul 1998 A
5794232 Mahlum et al. Aug 1998 A
5819038 Carleton et al. Oct 1998 A
5821937 Tonelli et al. Oct 1998 A
5831610 Tonelli et al. Nov 1998 A
5873096 Lim et al. Feb 1999 A
5918159 Fomukong et al. Jun 1999 A
5941947 Brown et al. Aug 1999 A
5950190 Yeager et al. Sep 1999 A
5963953 Cram et al. Oct 1999 A
5974409 Sanu et al. Oct 1999 A
5974453 Andersen et al. Oct 1999 A
5987471 Bodine et al. Nov 1999 A
6064656 Angal et al. May 2000 A
6085191 Fisher et al. Jul 2000 A
6092083 Brodersen et al. Jul 2000 A
6112198 Lohman et al. Aug 2000 A
6169534 Raffel et al. Jan 2001 B1
6178425 Brodersen et al. Jan 2001 B1
6189000 Gwertzman et al. Feb 2001 B1
6189011 Lim et al. Feb 2001 B1
6216135 Brodersen et al. Apr 2001 B1
6219667 Lu et al. Apr 2001 B1
6226641 Hickson et al. May 2001 B1
6233617 Rothwein et al. May 2001 B1
6233618 Shannon May 2001 B1
6256671 Strentzsch et al. Jul 2001 B1
6266669 Brodersen et al. Jul 2001 B1
6295530 Ritchie et al. Sep 2001 B1
6324568 Diec Nov 2001 B1
6324693 Brodersen et al. Nov 2001 B1
6330560 Harrison et al. Dec 2001 B1
6336137 Lee et al. Jan 2002 B1
6341288 Yach et al. Jan 2002 B1
6345288 Reed et al. Feb 2002 B1
D454139 Feldcamp et al. Mar 2002 S
6367077 Brodersen et al. Apr 2002 B1
6393605 Loomans May 2002 B1
6405220 Brodersen et al. Jun 2002 B1
6411998 Bryant et al. Jun 2002 B1
6425003 Herzog et al. Jul 2002 B1
6434550 Warner et al. Aug 2002 B1
6438562 Gupta et al. Aug 2002 B1
6446089 Brodersen et al. Sep 2002 B1
6446109 Gupta Sep 2002 B2
6453038 McFarlane et al. Sep 2002 B1
6535909 Rust Mar 2003 B1
6549908 Loomans Apr 2003 B1
6553563 Ambrose et al. Apr 2003 B2
6560461 Fomukong et al. May 2003 B1
6574635 Stauber et al. Jun 2003 B2
6577726 Huang et al. Jun 2003 B1
6578037 Wong et al. Jun 2003 B1
6601087 Zhu et al. Jul 2003 B1
6604117 Lim et al. Aug 2003 B2
6604128 Diec Aug 2003 B2
6609148 Salo et al. Aug 2003 B1
6609150 Lee et al. Aug 2003 B2
6621834 Scherpbier et al. Sep 2003 B1
6654032 Zhu et al. Nov 2003 B1
6658417 Statukis et al. Dec 2003 B1
6665648 Brodersen et al. Dec 2003 B2
6665655 Warner et al. Dec 2003 B1
6684438 Brodersen et al. Feb 2004 B2
6711565 Subramaniam et al. Mar 2004 B1
6721765 Ghosh et al. Apr 2004 B2
6724399 Katchour et al. Apr 2004 B1
6728702 Subramaniam et al. Apr 2004 B1
6728960 Loomans Apr 2004 B1
6732095 Warshavsky et al. May 2004 B1
6732100 Brodersen et al. May 2004 B1
6732111 Brodersen et al. May 2004 B2
6754681 Brodersen et al. Jun 2004 B2
6763351 Subramaniam et al. Jul 2004 B1
6763501 Zhu et al. Jul 2004 B1
6768904 Kim Jul 2004 B2
6782383 Subramaniam et al. Aug 2004 B2
6804330 Jones et al. Oct 2004 B1
6826565 Ritchie et al. Nov 2004 B2
6826582 Chatterjee et al. Nov 2004 B1
6826745 Coker Nov 2004 B2
6829655 Huang et al. Dec 2004 B1
6839680 Liu et al. Jan 2005 B1
6842748 Warner et al. Jan 2005 B1
6850895 Brodersen et al. Feb 2005 B2
6850949 Warner et al. Feb 2005 B2
6944133 Wisner et al. Sep 2005 B2
6947927 Chaudhuri et al. Sep 2005 B2
7072935 Kehoe et al. Jul 2006 B2
7076633 Tormasov et al. Jul 2006 B2
7152109 Suorsa et al. Dec 2006 B2
7174483 Becher et al. Feb 2007 B2
7185192 Kahn Feb 2007 B1
7206805 McLaughlin, Jr. Apr 2007 B1
7206807 Cheenath Apr 2007 B2
7209929 Dominguez, Jr. et al. Apr 2007 B2
7249118 Sandler et al. Jul 2007 B2
7305577 Zhang Dec 2007 B2
7308704 Vogel et al. Dec 2007 B2
7340411 Cook Mar 2008 B2
7350237 Vogel et al. Mar 2008 B2
7373364 Chapman May 2008 B1
7448079 Tremain Nov 2008 B2
7484219 Mitra Jan 2009 B2
7529728 Weissman et al. May 2009 B2
7577092 San Andres et al. Aug 2009 B2
7580975 Cheenath Aug 2009 B2
7599953 Galindo-Legaria et al. Oct 2009 B2
7620655 Larsson et al. Nov 2009 B2
7661027 Langen et al. Feb 2010 B2
7693820 Larson et al. Apr 2010 B2
7698160 Beaven et al. Apr 2010 B2
7734608 Fell et al. Jun 2010 B2
7769825 Karakashian et al. Aug 2010 B2
7774366 Fisher et al. Aug 2010 B2
7779039 Weissman et al. Aug 2010 B2
7814052 Bezar et al. Oct 2010 B2
7814470 Mamou et al. Oct 2010 B2
7827138 Salmon et al. Nov 2010 B2
7849401 Elsa et al. Dec 2010 B2
8069194 Manber et al. Nov 2011 B1
8082301 Ahlgren et al. Dec 2011 B2
8095413 Beaven Jan 2012 B1
8095594 Beaven et al. Jan 2012 B2
8275836 Beaven et al. Sep 2012 B2
20010023440 Franklin et al. Sep 2001 A1
20010044791 Richter et al. Nov 2001 A1
20020072951 Lee et al. Jun 2002 A1
20020082892 Raffel Jun 2002 A1
20020129352 Brodersen et al. Sep 2002 A1
20020133392 Angel et al. Sep 2002 A1
20020140731 Subramaniam et al. Oct 2002 A1
20020143931 Smith et al. Oct 2002 A1
20020143997 Huang et al. Oct 2002 A1
20020162090 Parnell et al. Oct 2002 A1
20020165742 Robbins Nov 2002 A1
20030004971 Gong Jan 2003 A1
20030018705 Chen et al. Jan 2003 A1
20030018830 Chen et al. Jan 2003 A1
20030066031 Laane et al. Apr 2003 A1
20030066032 Ramachandran et al. Apr 2003 A1
20030069936 Warner et al. Apr 2003 A1
20030070000 Coker et al. Apr 2003 A1
20030070004 Mukundan et al. Apr 2003 A1
20030070005 Mukundan et al. Apr 2003 A1
20030074418 Coker et al. Apr 2003 A1
20030120675 Stauber et al. Jun 2003 A1
20030151633 George et al. Aug 2003 A1
20030159136 Huang et al. Aug 2003 A1
20030187921 Diec et al. Oct 2003 A1
20030189600 Gune et al. Oct 2003 A1
20030204427 Gune et al. Oct 2003 A1
20030206192 Chen et al. Nov 2003 A1
20030225730 Warner et al. Dec 2003 A1
20040001092 Rothwein et al. Jan 2004 A1
20040010489 Rio et al. Jan 2004 A1
20040015578 Karakashian et al. Jan 2004 A1
20040015981 Coker et al. Jan 2004 A1
20040027388 Berg et al. Feb 2004 A1
20040044656 Cheenath Mar 2004 A1
20040045004 Cheenath Mar 2004 A1
20040111410 Burgoon et al. Jun 2004 A1
20040128001 Levin et al. Jul 2004 A1
20040186860 Lee et al. Sep 2004 A1
20040193510 Catahan et al. Sep 2004 A1
20040199489 Barnes-Leon et al. Oct 2004 A1
20040199536 Barnes Leon et al. Oct 2004 A1
20040199543 Braud et al. Oct 2004 A1
20040220952 Cheenath Nov 2004 A1
20040249854 Barnes-Leon et al. Dec 2004 A1
20040260534 Pak et al. Dec 2004 A1
20040260659 Chan et al. Dec 2004 A1
20040268299 Lei et al. Dec 2004 A1
20050010578 Doshi Jan 2005 A1
20050050555 Exley et al. Mar 2005 A1
20050091098 Brodersen et al. Apr 2005 A1
20050216342 Ashbaugh Sep 2005 A1
20050283478 Choi et al. Dec 2005 A1
20060095960 Arregoces et al. May 2006 A1
20060100912 Kumar et al. May 2006 A1
20060136382 Dettinger et al. Jun 2006 A1
20060248372 Aggarwal et al. Nov 2006 A1
20070078705 Abels et al. Apr 2007 A1
20070088741 Brooks et al. Apr 2007 A1
20070115845 Hochwarth et al. May 2007 A1
20070124276 Weissman et al. May 2007 A1
20070130130 Chan et al. Jun 2007 A1
20070130137 Oliver et al. Jun 2007 A1
20070150546 Karakashian et al. Jun 2007 A1
20070226640 Holbrook et al. Sep 2007 A1
20070254635 Montelius Nov 2007 A1
20080010243 Weissman et al. Jan 2008 A1
20080082540 Weissman et al. Apr 2008 A1
20080082572 Ballard et al. Apr 2008 A1
20080082986 Cheenath et al. Apr 2008 A1
20080086358 Doshi et al. Apr 2008 A1
20080086447 Weissman et al. Apr 2008 A1
20080086479 Fry et al. Apr 2008 A1
20080086482 Weissman et al. Apr 2008 A1
20080086514 Weissman et al. Apr 2008 A1
20080086567 Langen et al. Apr 2008 A1
20080086735 Cheenath et al. Apr 2008 A1
20080114875 Anastas et al. May 2008 A1
20080127092 Tomar May 2008 A1
20080162544 Weissman et al. Jul 2008 A1
20080201701 Hofhansel et al. Aug 2008 A1
20080215560 Bell et al. Sep 2008 A1
20080225760 Iyer et al. Sep 2008 A1
20080270354 Weissman et al. Oct 2008 A1
20080270987 Weissman et al. Oct 2008 A1
20090030906 Doshi et al. Jan 2009 A1
20090049065 Weissman et al. Feb 2009 A1
20090049101 Weissman et al. Feb 2009 A1
20090049102 Weissman et al. Feb 2009 A1
20090049288 Weissman et al. Feb 2009 A1
20090193521 Matsushima et al. Jul 2009 A1
20090276395 Weissman et al. Nov 2009 A1
20090276405 Weissman et al. Nov 2009 A1
20090282045 Hsieh et al. Nov 2009 A1
20090287791 Mackey Nov 2009 A1
20090319529 Bartlett et al. Dec 2009 A1
20100020715 Monaco et al. Jan 2010 A1
20100191719 Weissman et al. Jul 2010 A1
20100205216 Durdik Aug 2010 A1
20100211619 Weissman et al. Aug 2010 A1
20100223284 Brooks et al. Sep 2010 A1
20100235837 Weissman et al. Sep 2010 A1
20100274779 Weissman et al. Oct 2010 A1
20110162025 Kellerman et al. Jun 2011 A1
Foreign Referenced Citations (1)
Number Date Country
2004059420 Jul 2004 WO
Non-Patent Literature Citations (26)
Entry
(Steven Sanderson's blog, First steps with Lightweight Test Automation Framework, Mar. 27, 2009, retrieved from http://blog.stevensanderson.com/2009/03/27/first-steps-with-lightweight-test-automation-framework/, pp. 1-9).
[Online]; [published on Oct. 17, 2008]; [retrieved on Feb. 26, 2010]; retrieved from http://en.wikipedia.org/wiki/Push_technology.
[Online]; [published on Oct. 16, 2008]; [retrieved on Feb. 26, 2010]; retrieved from http://en.wikipedia.org/wiki/Customer_Relationship_Management.
[Online]; [published on Apr. 22, 2008]; [retrieved on Feb. 26, 2010]; retrieved from http://en.wikipedia.org/wiki/Flat_file_database.
[Online]; [published on Apr. 25, 2008]; [retrieved on Feb. 26, 2010]; retrieved from http://en.wikipedia.org/wiki/Relational_database.
First named inventor: Yancey, Scott, U.S. Appl. No. 12/636,658, filed Dec. 11, 2009.
First named inventor: Yancey, Scott, U.S. Appl. No. 12/636,675, filed Dec. 11, 2009.
First named inventor: Doshi, Kedar, U.S. Appl. No. 12/167,991, filed Jul. 3, 2008.
First named inventor: Bezar, Eric, U.S. Appl. No. 12/569,603, filed Sep. 2, 2010.
First named inventor: Yancey, Scott, U.S. Appl. No. 12/132,409, filed Jun. 3, 2008.
First named inventor: Durdik, Paul, U.S. Appl. No. 12/549,349, filed Aug. 27, 2009.
Lee et al: “Composition of executable business process models by combining business rules and process flows”, Expert Systems With Application, Oxford, GB, vol. 33, No. 1, Dec. 22, 2006, pp. 221-229.
Mietzer et al: “Combining Different Multi-tenancy Patterns in Service Oriented Applications”, IEEE International Enterprise Distributed Object Computing Conference, NJ, USA, Sep. 1, 2009, pp. 131-140.
Wang et al: “Integrated Constraint Violation Handling for Dynamic Services Composition”, IEEE International Conference on Services Computing, NJ, USA, Sep. 21, 2009, pp. 168-175.
Wermelinger et al: “Using coordination contracts for flexible adaptation to changing business rules”, Proceedings of the Sixth International Workshop on Software Evolution, NJ, USA, Sep. 1, 2003, pp. 115-120.
Wang et al: “A Study and Performance Evaluation of the Multi-Tenant Data Tier Design Patterns for Service Oriented Computing”, IEEE International Conference on E-Business Engineering, NJ, USA, Oct. 22, 2008, pp. 94-101.
Notification of Transmittal of the International Search Report and the Written Opinion of the International Searching Authority, or the Declaration. International Application No. PCT/US2010/050021. International Filing Date: Sep. 23, 2010.
First named inventor: Yancey, Scott, U.S. Appl. No. 12/197,979, filed Aug. 25, 2008.
First named inventor: Calahan, Patrick, U.S. Appl. No. 12/954,556, filed Nov. 24, 2010.
First named inventor: Pin, Olivier, U.S. Appl. No. 12/895,833, filed Sep. 30, 2010.
First named inventor: Tanaka, Jay, U.S. Appl. No. 12/831,196, filed Jul. 6, 2010.
First named inventor: Press, William A., U.S. Appl. No. 12/850,502, filed Aug. 4, 2010.
First named inventor: Tanaka, Jay, U.S. Appl. No. 12/831,209, filed Jul. 6, 2010.
First named inventor: Williams, Alexis, U.S. Appl. No. 13/028,236, filed Feb. 16, 2011.
First named inventor: Varadharajan, Arunkumaran, U.S. Appl. No. 12/909,820, filed Oct. 21, 2010.
First named inventor: Le Stum, Guillaume, U.S. Appl. No. 13/093,128, filed Apr. 25, 2011.
Related Publications (1)
Number Date Country
20110270975 A1 Nov 2011 US
Provisional Applications (1)
Number Date Country
61330838 May 2010 US