The present disclosure relates generally to cloud computing and cloud data storage; more particularly, to automated systems and methods for storing large amounts of real user measurement (RUM) performance data, while providing very fast access to the data for real-time, actionable business analytics.
Web analytics has become increasingly important as a tool for business intelligence, market research, and for improving the effectiveness of a website or application. In today's business world, revenue often depends on millisecond performance of a website or web application. Businesses and application owners are therefore interested in measuring real users' behaviors and experiences to understand how mobile application and web performance impact the revenue generated for their business. In response to this need, commercial products and services have been developed that measure, collect, and analyze performance data obtained directly from web and mobile users. These products and services may also help correlate the RUM data with critical business metrics in order to better understand and optimize web and application usage. By way of example, mPulse™ is a cloud-computing product offered by SOASTA® of Mountain View, Calif. that allows a business to collect and analyze all of a customer's real user data in real-time. The data collected is stored in the cloud, which allows the business customer to access and analyze large amounts of historical data to spot trends and gain business insights.
Web data analytics companies and the products they offer typically rely upon one or more web services that allow users to rent computers (referred to as "instances") on which to run their own computer applications. Oftentimes, the web service provides block-level cloud storage volumes for use with the cloud-computing instances. In a typical architecture, a single machine instance may be connected to receive RUM data from multiple geographic locations, perform read/write services, and handle database management. A storage area network (SAN) connected to and accessible by the single instance provides large-capacity disk array storage for all of the RUM data across multiple business customers. In this cloud-based data storage architecture, performance bottleneck, I/O throughput, and storage capacity problems can arise as the volume of data grows over time due to business growth, increased website traffic, and the addition of new customers using the analytics service. For example, a customer may want to access and run analytics on their RUM data collected over the past 30 days (which may involve a hundred or more queries accessing millions or even billions of data points). Performance problems inherent in the traditional cloud-based data storage architecture may cause the customer to experience delays lasting several minutes or longer before the system returns the desired analytics. Furthermore, system performance problems experienced by one customer can also create a poor user experience for other customers who use the same system.
The present disclosure will be understood more fully from the detailed description that follows and from the accompanying drawings, which however, should not be taken to limit the invention to the specific embodiments shown, but are for explanation and understanding only.
In the following description specific details are set forth, such as data types, number generating methods, calculations, process steps, etc., in order to provide a thorough understanding of the subject matter disclosed herein. However, persons having ordinary skill in the relevant arts will appreciate that these specific details may not be needed to practice the present invention.
References throughout this description to “one embodiment”, “an embodiment”, “one example” or “an example” means that a particular feature, structure or characteristic described in connection with the embodiment or example is included in at least one embodiment. The phrases “in one embodiment”, “in an embodiment”, “one example” or “an example” in various places throughout this description are not necessarily all referring to the same embodiment or example. Furthermore, the particular features, structures or method steps may be combined in any suitable combinations and/or sub-combinations in one or more embodiments or examples.
In the context of the present application, the term “cloud” broadly refers to a collection of machine instances, storage and/or network devices that work together in concert. A “public cloud” refers to a cloud that is publicly available, i.e., provided by a cloud provider that a user may access via the Internet in order to allocate cloud resources for the purpose of utilizing or deploying software programs, and also for running or executing those programs thereon. Some public clouds deliver cloud infrastructure services or Infrastructure as a Service (IaaS). By way of example, Amazon Elastic Compute Cloud (also known as “EC2™”) is a web service that allows users to rent computers (servers) on which to run their own computer applications, thereby allowing scalable deployment of applications through which a user can create a virtual machine (commonly known as an “instance”) containing any software desired. Instances are virtual computing environments. Amazon Elastic Block Store (EBS) provides block level storage volumes for use with EC2 instances. EBS volumes are well-suited for use as the primary storage for file systems and databases. Amazon EBS is particularly useful for database-style applications that frequently require random reads and writes across the data set.
The term “cloud computing” refers to a paradigm in which machine, storage, and application resources exist on a “cloud” of servers. In cloud computing shared resources, software and information are provided on-demand, like a public utility, via the Internet. Thus, cloud computing provides computation, data access, and storage resources without requiring users to know the location and other physical details of the computing infrastructure.
In the present disclosure, database “sharding” refers to a horizontal partition (i.e., a table) of data in a database. Each individual partition is referred to as a shard. Each shard may be held on a separate database server or volume instance to spread the load. Data striping, as that term is used in this disclosure, is the technique of segmenting logically sequential data, such as a file, so that consecutive segments are stored on different physical storage devices.
A “data store” is a repository of a set of data objects. A data store may be considered synonymous with a database, which can be implemented on storage devices such as disk arrays, e.g., a Redundant Array of Independent Disks (RAID), SANs, tape libraries, optical memory, solid-state memory, etc.
A “snapshot” is an incremental backup of an Amazon or other EBS volume, which means that only the blocks on the device that have changed after the most recent snapshot are saved. When a snapshot is deleted, only the data exclusive to that snapshot is removed. After writing to an Amazon EBS volume a user can periodically create a snapshot of the volume to use as a baseline for new volumes or for data backup. A new volume begins as an exact replica of the original volume that was used to create the snapshot.
The term “server” broadly refers to any combination of hardware or software embodied in a computer (i.e., a machine instance) designed to provide services to client devices or processes. A server therefore can refer to a computer that runs a server operating system from computer-executable code stored in a memory, and which is provided to the user as virtualized or non-virtualized server; it can also refer to any software or dedicated hardware capable of providing computing services.
In the context of the present disclosure, “collector servers” are servers deployed and used to receive real-user measurement data sent from a user's client device. Collector servers may also download configuration file information containing current metric and/or timer definitions to client devices responsive to polling requests sent by the client devices. Each collector server may process and aggregate the data items received. Processing may include statistical calculations, such as computing mean, average, standard deviation, and other relevant analytics/metrics.
“Consolidators” are servers deployed and utilized in a hierarchical manner to accumulate and aggregate the data received from the collectors. Consolidators may also perform further statistical calculations on the aggregated data. The consolidators are typically configured to stream the further aggregated data and statistics to a Data Service instance that stores a final aggregated set or array of data results and analytics/metrics in one or more databases accessible to a computer or main instance. The main instance may generate an analytic dashboard in real-time from the final aggregated set or array of data results and analytics/metrics.
The term “real-time” refers to a level of computer responsiveness that a user senses as sufficiently immediate or that enables the computer to keep up with some external process (for example, to present visualizations of real user measurements as they constantly change). Thus, real-time is a mode of computer operation in which the computer collects data, analyzes or computes with the data, reports (e.g., visually displays) and/or stores the results nearly simultaneously, e.g., within a few seconds or even milliseconds. “Run-time” denotes the time period during which a computer program is executing.
In the context of the present disclosure, the term “beacon” refers to data related to a real user's experience on a particular website, web application, or mobile application collected by a library running on the browser of a client device, and sent to a server (e.g., a collector server) via Hypertext Transfer (or Transport) Protocol (HTTP), or some other protocol. In the case of a mobile app, the data gathered may be based on definitions contained in a configuration file that is periodically downloaded to the mobile device running the mobile app. For example, every user who runs a particular mobile app on their mobile device may also automatically download a configuration file every few minutes that defines the various metrics and/or timers to be gathered and beaconed back to a server from the user's mobile device in real-time as the user runs or uses the mobile app. In the case of a website, the library may be a JavaScript library running on the browser of a client device.
The server receiving the beacon information may aggregate that data (and also perform certain statistical calculations on the received data) along with similar data received from other users accessing the same website, web application, or mobile application. Any HTTP headers sent by the browser as part of the HTTP protocol may also be considered part of the beacon. A beacon may therefore be thought of as a page view on a website or application, but without a corresponding page. For every user who visits a particular website or application, a program or library (or a configuration file that directs the library) running on the user's client device measures various metrics and records data that is then sent or “beaconed” back to a collection server in real-time as the user navigates through or uses the website or application.
In one embodiment, a system and method allows scaling of cloud-based data stores in run-time to provide substantially improved I/O throughput and greatly expanded disk storage capacity in a cloud-computing product/service that allows a business to collect and analyze data beacons from real users of their website or application in real-time. In one embodiment, a software program or computer program product is provided that produces the analytic dashboard for a user or business customer to graphically display selected analytics and metrics. In another embodiment, a graphical user interface (GUI) is provided on the dashboard that allows a user or customer to quickly and automatically scale and expand database storage capacity through a single operational (e.g., “right-click”) input from a mouse or other input device.
Conceptually, the computer-implemented operations performed are analogous to the biological process of meiosis. Data meiosis, in the context of the present disclosure, may be considered as broadly referring to a type of data replication or copying, wherein a single database or data store may be copied to create up to N additional databases, where N is a positive integer. A subsequent operation may compact each of the databases in the resulting set of databases, to remove part of the data in them, thereby greatly expanding the overall storage capacity. The throughput of the system is greatly expanded by the replication operation since queries can be executed in parallel on all of the N data stores.
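By way of illustration only, the sketch below shows how queries might be fanned out in parallel across the N replicated data stores and their partial results merged. The class and method names (DataStoreClient, queryBeacons, BeaconResult) are hypothetical placeholders assumed for this example and are not part of the disclosed product.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.*;

// Hypothetical sketch: fan a customer's query out to N replicated data
// stores and merge the partial results. DataStoreClient and BeaconResult
// are illustrative placeholders, not actual product classes.
public class ParallelQueryExample {
    public static List<BeaconResult> queryAll(List<DataStoreClient> stores,
                                              String customerId,
                                              String fromDate,
                                              String toDate)
            throws InterruptedException, ExecutionException {
        ExecutorService pool = Executors.newFixedThreadPool(stores.size());
        List<Future<List<BeaconResult>>> futures = new ArrayList<>();
        for (DataStoreClient store : stores) {
            // Each data store holds only the dates mapped to it, so the same
            // date-range query runs concurrently on every store.
            futures.add(pool.submit(() ->
                    store.queryBeacons(customerId, fromDate, toDate)));
        }
        List<BeaconResult> merged = new ArrayList<>();
        for (Future<List<BeaconResult>> f : futures) {
            merged.addAll(f.get());   // block until each shard's results return
        }
        pool.shutdown();
        return merged;
    }

    interface DataStoreClient {
        List<BeaconResult> queryBeacons(String customerId, String from, String to);
    }
    static class BeaconResult { /* aggregated metrics for one day */ }
}
```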
Each of the collectors 11 captures data metrics from real users around the world who are actually visiting a website, or using a web application or a mobile app. In one embodiment, metrics such as page load times may be beaconed to collectors 11 at a predetermined time interval, e.g., every 100 ms. Collectors 11 receive and terminate beacon data received from the user client devices, process the beacon data, and then send the data results to an associated consolidator server 12. The data periodically (e.g., every 1-5 seconds) transmitted from collectors 11 to consolidators 12 may include the raw RUM data (e.g., page load time) as well as statistical results computed by the collectors (e.g., average load time, median load time, standard deviation, etc.).
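For illustration, a minimal sketch of the kind of per-interval aggregation a collector might perform before flushing results to a consolidator is shown below. The BeaconAggregator class and its fields are assumptions made for this example and do not reflect the actual collector implementation.

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

// Illustrative sketch of the per-interval statistics a collector might
// compute before forwarding results to a consolidator. The class and
// field names are hypothetical, not taken from the actual product.
public class BeaconAggregator {
    private final List<Double> loadTimesMs = new ArrayList<>();

    public synchronized void addBeacon(double pageLoadTimeMs) {
        loadTimesMs.add(pageLoadTimeMs);
    }

    // Called every 1-5 seconds when the collector flushes to a consolidator.
    public synchronized AggregateResult flush() {
        int n = loadTimesMs.size();
        double sum = 0.0;
        for (double t : loadTimesMs) sum += t;
        double mean = n > 0 ? sum / n : 0.0;

        double sqDiff = 0.0;
        for (double t : loadTimesMs) sqDiff += (t - mean) * (t - mean);
        double stdDev = n > 1 ? Math.sqrt(sqDiff / (n - 1)) : 0.0;

        List<Double> sorted = new ArrayList<>(loadTimesMs);
        Collections.sort(sorted);
        double median = n > 0 ? sorted.get(n / 2) : 0.0;

        loadTimesMs.clear();   // start a fresh aggregation window
        return new AggregateResult(n, mean, median, stdDev);
    }

    public static class AggregateResult {
        public final int count;
        public final double mean, median, stdDev;
        AggregateResult(int count, double mean, double median, double stdDev) {
            this.count = count; this.mean = mean;
            this.median = median; this.stdDev = stdDev;
        }
    }
}
```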
Consolidators 12 are also distributed by geography. For instance, the example of
It is appreciated that each of the collectors 11 and consolidators 12 shown in
Each consolidator 12 forwards its aggregated data and metrics to a Data Service 13. Data Service 13 is a machine instance which, in one embodiment, implements a Java process for writing/reading data to/from one or more databases. Data Service 13 also implements a shard manager that partitions data stored in a database and executes algorithms for mapping historical data to produce a shard map. As shown, Data Service 13 is connected to a Database 16. Database 16 is a machine instance that provides a file system with read/write access to data store 17, which comprises a plurality of data storage volumes 18 used to store the aggregated historical data beacons and results collected from the real users of a customer's website or application.
It is appreciated that data store 17 may be configured to store data from multiple customers collected over many months or even years. In one embodiment, data store 17 is implemented in a Redundant Array of Independent Disks (RAID) provisioned in the cloud through Amazon Elastic Block Store (EBS) service. The data in each of the storage volumes 18 is available to a main computer instance 14, which is shown connected to Data Service 13. Main instance 14 may execute a program that generates an analytic dashboard having a graphical user interface that allows a user or customer to retrieve historical data from data store 17, and perform a variety of business and performance analytics on the data. In the embodiment shown, the analytic dashboard provides a graphical user interface displayed on a client device or computer 15 (e.g., a laptop).
In
Persons of skill in the art will understand that each EC2 instance is currently configured to have a maximum I/O throughput of 48K input/output operations per second (IOPS), which is a limitation on the speed of queries made to the historical data stored in volumes 18 of data store 17. A typical analytical session may comprise hundreds of queries to retrieve tens or hundreds of billions of data beacons stored in the database.
In one embodiment, an automated process is initiated through a single operational (e.g., “right-click”) input from a mouse or other input device by a user of computer 15, which process results in an immediate replication of database 16 and data store 17. A GUI generated by main instance 14 allows a user or customer to create up to N additional databases 16 and data stores 17, where N is a positive integer. Each of the newly created databases 16 and data stores 17 is a copy or clone of the original, which means that each of the newly created volumes 18 contains the exact same historical data of the original volumes. In one implementation, the replication of database volumes is implemented by taking an EC2 snapshot of the original volumes. Through the GUI provided, a user selects the database to be replicated and inputs a number indicating the number of additional database copies to be made.
Persons of skill will appreciate that the snapshot replication of the volumes 18 occurs virtually instantaneously and does not involve computationally intensive I/O operations in which data is moved between volumes or written into the new volumes. Because the snapshot replication of the database volumes is instantaneous, it may occur during run-time. The RAID volumes 18 of data store 17 are momentarily frozen during the replication process. During the freeze, new data beacons that were about to be written are buffered for a few seconds until the snapshot is taken. New sets of volumes 18 are then created, new instances of database 16 are provisioned, and all of the new volumes 18 are attached to the new database 16, while data continues to be written to the original database.
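A minimal sketch of how the snapshot-based cloning could be driven with the AWS SDK for Java is shown below. The volume, instance, and device identifiers are placeholders, and the surrounding orchestration (freezing the RAID set, buffering incoming beacons, waiting for the snapshot to complete) is omitted; the actual product may sequence these steps differently.

```java
import com.amazonaws.services.ec2.AmazonEC2;
import com.amazonaws.services.ec2.AmazonEC2ClientBuilder;
import com.amazonaws.services.ec2.model.*;

// Hypothetical sketch of snapshot-based volume cloning with the AWS SDK
// for Java. The identifiers passed in (volume ID, instance ID) and the
// device name are placeholders for illustration only.
public class SnapshotReplication {
    public static void cloneVolume(String sourceVolumeId,
                                   String targetInstanceId,
                                   String availabilityZone) {
        AmazonEC2 ec2 = AmazonEC2ClientBuilder.defaultClient();

        // 1. Take an incremental snapshot of the original EBS volume.
        Snapshot snapshot = ec2.createSnapshot(
                new CreateSnapshotRequest()
                        .withVolumeId(sourceVolumeId)
                        .withDescription("data-store replication"))
                .getSnapshot();

        // (In practice, wait for the snapshot to reach the 'completed'
        //  state before proceeding to step 2.)

        // 2. Create a new volume from the snapshot (an exact replica).
        Volume newVolume = ec2.createVolume(
                new CreateVolumeRequest()
                        .withSnapshotId(snapshot.getSnapshotId())
                        .withAvailabilityZone(availabilityZone))
                .getVolume();

        // 3. Attach the replica volume to the newly provisioned database instance.
        ec2.attachVolume(new AttachVolumeRequest()
                .withVolumeId(newVolume.getVolumeId())
                .withInstanceId(targetInstanceId)
                .withDevice("/dev/sdf"));
    }
}
```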
After all of the new data stores have been fully provisioned, an algorithm is executed that maps the location of all of the historical data for each customer across all of the databases (original plus the newly-created ones). In one embodiment the historical data is mapped across the databases by date according to a round-robin algorithm. For instance, in the example of
In one embodiment, the mapping algorithm is implemented as a Java program that executes in Data Service 13, with each mapping managed by the “shard manager”. It should be understood that mapping does not move data from one database location to another. Rather, mapping tells Data Service 13 from which database location it should retrieve data for a given customer for a given date. In other words, mapping tells the Java process running on Data Service 13 that if a query seeks the data for customer X written three days ago, that data can be found at the mapped location of a particular database.
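By way of illustration, a minimal shard-map lookup might resemble the sketch below. The ShardMap class and its methods are hypothetical and are intended only to show that the map returns a database location for a given customer and day without ever moving the underlying data.

```java
import java.time.LocalDate;
import java.util.HashMap;
import java.util.Map;

// Minimal sketch of a shard-map lookup: the map records, per customer and
// per day, which database holds that day's data. Class and method names
// are illustrative and not taken from the actual Data Service code.
public class ShardMap {
    // customerId -> (date -> database index)
    private final Map<String, Map<LocalDate, Integer>> map = new HashMap<>();

    public void put(String customerId, LocalDate day, int databaseIndex) {
        map.computeIfAbsent(customerId, c -> new HashMap<>())
           .put(day, databaseIndex);
    }

    // Returns the index of the database that holds this customer's data
    // for the given day; the data itself is never moved by the mapping.
    public int locate(String customerId, LocalDate day) {
        Map<LocalDate, Integer> byDay = map.get(customerId);
        if (byDay == null || !byDay.containsKey(day)) {
            return 0;   // default to the original database if unmapped
        }
        return byDay.get(day);
    }
}
```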
Note that in one implementation, mapping is performed for all customers across all the data stored since the customer began using the system. Before a mapping change occurs, any query that comes into the system goes to the original data store(s). After the mapping change, any query is optimized by accessing data from all of the data stores. This entire process happens in run-time and there is no need to stop the system or take it offline to add more data stores or to perform the sharding algorithm for historical mapping. All of the queries always go through the shard manager, so queries that started before the addition of new data stores just use the existing data stores. Queries that start after new data stores have been added use all of the new set of data stores.
Persons of skill in the art will understand that the disclosed approach to scaling data has significant advantages over existing data scalability solutions used by Structured Query Language (SQL) and NoSQL databases. Those existing solutions focus on adding a database node to the cluster, after which all of the other databases send some of their data to that node. This sending and receiving of data, and then deleting the sent data from each of the databases, is very costly in terms of performance. While all of this sending, receiving, and deleting is happening, the performance of the entire cluster goes down significantly, which affects reading (query performance) and writing (keeping up with real-time data streams). The presently disclosed solution is better because taking snapshots is virtually free in terms of performance. Replica data stores are created from the snapshots without affecting the performance of the main data stores. After replication, the data in the shard manager is mapped across all of the data stores.
After database replication and mapping, a user or customer may optionally perform another single GUI right-click operation, referred to as data compaction, which drops data tables from all of the databases, resulting in greatly increased available disk space. In the compaction operation, rather than deleting rows from tables, whole tables are simply dropped. Deleting rows is very expensive in terms of performance, while dropping whole tables is virtually free, and therefore can be done in run-time.
The example of
The compaction operation is extremely fast, and can be performed, in one embodiment, by a single UI (right-click) operation during run-time because it does not involve moving data or erasing rows from data tables, which is computationally expensive. The compaction operation simply drops data tables by wiping out addresses.
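For illustration only, a compaction step using standard JDBC might look like the sketch below. The per-day table-naming convention (beacons_<customer>_<yyyyMMdd>) is an assumption made for this example and is not taken from the disclosed system.

```java
import java.sql.Connection;
import java.sql.SQLException;
import java.sql.Statement;

// Hypothetical compaction sketch using JDBC: instead of deleting rows,
// the whole per-day table is dropped on databases that no longer own that
// day. The table-naming convention used here is an assumption.
public class Compaction {
    public static void dropUnownedDayTable(Connection conn,
                                           String customerId,
                                           String yyyyMMdd) throws SQLException {
        String table = "beacons_" + customerId + "_" + yyyyMMdd;
        try (Statement stmt = conn.createStatement()) {
            // Dropping a table is essentially a metadata operation and is
            // nearly free, whereas DELETE FROM ... WHERE day = ? would have
            // to rewrite rows and is computationally expensive.
            stmt.executeUpdate("DROP TABLE IF EXISTS " + table);
        }
    }
}
```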
In one embodiment, a user or customer may replicate any number of the existing data stores, a subset of the data stores, or a single data store, N times, where N is a positive integer.
Next, for all of the databases (both original and newly created), all of the historical data of each customer is mapped by date according to a round-robin algorithm. (Block 82) In one embodiment, the round-robin algorithm maps data obtained for a first (earliest) date to a location in a first database, data obtained for the next (second) day to a location in the next (second) database, and so on, proceeding in circular order around all of the available databases. After data for a particular day has been mapped to the last database, the process continues with the mapping of the data for the next date looping back to the first database. The process is akin to dealing cards (data striped by day) to players (databases) seated at a table. The process continues until all of the days of data for a given customer have been mapped among all of the databases. The same mapping process may be repeated for each customer serviced by the system.
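A minimal sketch of this first-pass, round-robin mapping is given below; the RoundRobinMapper class is illustrative only. For example, with four databases, the earliest day maps to database 0, the second day to database 1, and the fifth day loops back to database 0.

```java
import java.time.LocalDate;
import java.util.LinkedHashMap;
import java.util.Map;

// Sketch of the round-robin ("dealing cards") mapping described above:
// each successive day of a customer's historical data is assigned to the
// next database in circular order. Names are illustrative only.
public class RoundRobinMapper {
    public static Map<LocalDate, Integer> mapByDay(LocalDate firstDay,
                                                   LocalDate lastDay,
                                                   int databaseCount) {
        Map<LocalDate, Integer> dayToDatabase = new LinkedHashMap<>();
        int next = 0;
        for (LocalDate day = firstDay; !day.isAfter(lastDay); day = day.plusDays(1)) {
            dayToDatabase.put(day, next);
            next = (next + 1) % databaseCount;   // loop back to the first database
        }
        return dayToDatabase;
    }
}
```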
Persons of skill will appreciate that the mapping algorithm described in the example of
The inventors have discovered that the increase in performance of the disclosed system and method is significantly greater than expected. In one experiment, a typical set of queries executed by a certain dashboard (involving billions of beacons) over a given time period against a system implemented with a single database took approximately 4 minutes to return results. After a replication/splitting process that went from one database to four, the same set of queries took about 15 seconds. In other words, the increased parallelism achieved by the disclosed embodiments resulted in an improvement in performance of more than an order of magnitude.
Continuing with the example of
The example of
The second pass of the algorithm illustrated in the example of
At decision block 93 the system determines, for every data store, whether the total amount expected to be written to that data store is within a specific threshold amount above or below the average. In one implementation, the threshold is set at 15% above (or below) an average or mean of the size of the data in the data store(s). If the total amount for each data store is within 15% of the average, then the second pass is complete, and no changes are made to the output produced by the first pass.
On the other hand, if the threshold is exceeded, then the system rebalances the mapping for write workload by customer. (Block 94) Rebalancing is an iterative process, with small changes being made to redistribute some of the write workload as among the plurality of databases. With each change, the algorithm returns to decision block 93 to determine whether the changes made have reduced the size differential to a level that does not exceed the threshold. In certain embodiments, further re-balancing for read performance may occur.
Redistribution may involve creating an ordered list ranking customers in terms of how much data was generated for each. For example, the data striping can be adjusted to bring a data store within the threshold limit (e.g., 15%) by mapping the future split such that the smallest customer (in terms of write workload) is assigned to the database holding the biggest customer, thereby re-balancing the write workload to be within the threshold limit. It is appreciated that other schemes may alternatively be used to achieve rebalancing of the write load to stay within acceptable limits. The goal is to minimize the number of changes made to the mapping produced by the first pass, because each change reduces read parallelism when a query spans data from multiple days.
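By way of illustration, the second-pass balance check might resemble the following sketch, which verifies that each data store's expected write volume is within a configurable threshold (e.g., 15%) of the mean. The WriteBalanceCheck class is hypothetical, and the pairing heuristic is only indicated in a comment, since the exact rebalancing procedure is implementation-specific.

```java
import java.util.Map;

// Simplified sketch of the second-pass balance check: given the expected
// write volume per data store, verify every store is within a threshold
// (e.g., 15%) of the mean. The rebalancing step itself (pairing the
// smallest customer with the largest) is only outlined in the comment.
public class WriteBalanceCheck {
    public static boolean isBalanced(Map<Integer, Long> expectedBytesPerStore,
                                     double threshold /* e.g., 0.15 */) {
        long total = 0;
        for (long bytes : expectedBytesPerStore.values()) total += bytes;
        double mean = (double) total / expectedBytesPerStore.size();

        for (long bytes : expectedBytesPerStore.values()) {
            if (Math.abs(bytes - mean) > threshold * mean) {
                // Over- or under-loaded store found: the caller would then
                // adjust the future split (e.g., pairing the smallest
                // customer with the store carrying the largest customer)
                // and re-run this check iteratively.
                return false;
            }
        }
        return true;
    }
}
```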
In one embodiment, a replica of each data store is maintained in the event of failure. For example, the data store infrastructure may reside in a first geographic location (e.g., Portland) with a replica of each data store being maintained at that same geographic location. In another implementation, the replica data stores may be maintained at a remote geographic location. In yet another embodiment, a first replica set of data stores may be in the same geographic location as the working set of data stores, with a second replica set of data stores being maintained and running in a geographically remote location as a fail-safe backup.
Persons of skill in the cloud computing and networking fields will appreciate that the inventive concepts described in the present disclosure have application well beyond the field of web and business analytics. For instance, any Massively Parallel Processing (MPP) Database System can benefit from the ability to scale in accordance with the embodiments described herein. Since there is no performance penalty associated with adding nodes to the cluster, data storage scaling can be performed at runtime. Moreover, the disclosed scaling system, methods and computer program products optimize for both read and write performance, without sacrificing one for the other.
It should be further understood that elements of the disclosed subject matter may also be provided as a computer program product which may include a machine-readable medium having stored thereon instructions which may be used to program a computer (e.g., a processor or other electronic device) to perform a sequence of operations. Alternatively, the operations may be performed by a combination of hardware, firmware, and software. The machine-readable medium may include, but is not limited to, floppy diskettes, optical disks, CD-ROMs, magneto-optical disks, ROMs, RAMs, EPROMs, EEPROMs, magnetic or optical cards, or other types of machine-readable media suitable for storing electronic instructions.
Additionally, although the present invention has been described in conjunction with specific embodiments, numerous modifications and alterations are well within the scope of the present invention. Accordingly, the specification and drawings are to be regarded in an illustrative rather than a restrictive sense.