The subject matter described herein relates to testing communications network equipment. More particularly, the subject matter described herein relates to methods, systems, and computer readable media for quiescence-informed network testing.
Data network service providers (such as Internet service providers), network users, and application providers all have an interest in learning how data network traffic of a particular type is treated by the network. For example, data network service providers may wish to monitor how application traffic of different types is treated by their networks and by other networks to ensure compliance with net neutrality regulations, service level agreements with other service providers, and service level agreements with end users. Network users, including individuals, businesses, and other entities, may desire to know how application traffic is treated to identify when a network service provider is throttling traffic and whether the throttling is consistent with the service agreement between the network service provider and the user. Application service providers may wish to know how their traffic is treated in the network, for example, to identify when their traffic is throttled by the network service provider.
Performance testing is one way to identify how traffic of a particular application type is being treated by the network. In some performance testing, it may be desirable to minimize the impact of the testing on network performance. For example, if tests are run during busy network periods in a live network, the quality of service experienced by network users may be reduced by the presence of test traffic. In other performance testing, it may be desirable to execute a test when the network is busy to determine how application traffic is treated in a busy network.
Accordingly, there exists a need for methods, systems, and computer readable media for quiescence-informed network testing.
The subject matter described herein includes methods, systems, and computer readable media for quiescence-informed network testing. One method for quiescence-informed network testing includes determining, by a first test agent, a quiescence state of the network. The method further includes reporting, by the first test agent and to a test controller, the quiescence state of the network. The method further includes configuring, by the test controller, the first test agent to execute a network test. The method further includes executing, by the first test agent, the network test. The method further includes reporting results of execution of the network test to the test controller.
The subject matter described herein for quiescence-informed network testing may be implemented in hardware, software, firmware, or any combination thereof. As such, the terms “function” or “module” as used herein refer to hardware, software, and/or firmware for implementing the feature being described. In one exemplary implementation, the subject matter described herein may be implemented using a computer readable medium having stored thereon computer executable instructions that when executed by the processor of a computer control the computer to perform steps. Exemplary computer readable media suitable for implementing the subject matter described herein include non-transitory computer-readable media, such as disk memory devices, chip memory devices, programmable logic devices, and application specific integrated circuits. In addition, a computer readable medium that implements the subject matter described herein may be located on a single device or computing platform or may be distributed across multiple devices or computing platforms.
Embodiments of the subject matter described herein will now be explained with reference to the accompanying drawings, wherein like reference numerals represent like parts, of which:
The subject matter described herein includes methods, systems, and computer readable media for quiescence-informed network testing.
The system illustrated in
Hardware test agents 110A and 110B are designed to be deployed in live networks to perform network testing and monitoring. An example of a hardware platform suitable for implementing hardware test agents 110A and 110B includes the XR2000 Active Monitoring Hardware Agent available from Ixia of Calabasas, Calif. Test agents may also include software-only test agents 110C and 110D designed to execute on and utilize the resources of network devices 112, which may be data center file servers, access points, or other network devices. Software-only test agents 110C and 110D may perform the same functions as hardware test agents 110A and 110B. For example, software-only test agents 110C and 110D may include network monitoring functions 118 that monitor network activity and report the quiescence state of the network to test controller 100. In addition, software-only test agents 110C and 110D may include network test functions 120 that implement network tests using test configuration information obtained from test controller 100.
Hardware test agents 110A and 110B and software-only test agents 110C and 110D are configurable by test controller 100 to perform active monitoring and network testing. According to one aspect of the subject matter described herein, test controller 100 configures hardware test agents 110A and 110B and/or software-only test agents 110C and 110D to detect the quiescence state of the network and report the quiescence state to test controller 100. In response to receiving notification of the quiescence state, test controller 100 may configure agents 110A-110D to perform network testing, such as performance testing to monitor user experience, service provider compliance with service level agreements, or service provider treatment of particular application traffic, if the quiescence state indicates that network activity is below a desired threshold level. Such performance testing may include sending synthetic application traffic of a particular type (such as streaming video traffic) from one test agent to another test agent over the network to monitor how the network performs for the application traffic of the particular type. For example, it may be desirable to monitor how the network treats social media traffic, such as Facebook traffic, or streaming media service traffic, such as Netflix or YouTube traffic. It may be of particular interest to determine whether any of the traffic of the particular application types is being throttled by the network service provider. Performance metrics, such as bit rate, jitter, and latency, may be obtained for the synthetic application traffic of the particular type and compared to service level agreements. If synthetic streaming video traffic is being transmitted over the network, the average bit rate, latency, and jitter may be recorded and compared to parameters of a network user's subscription with the user's data network service provider.
In one example, test agents 110A-110D may execute network performance tests during quiescent periods (i.e., when network activity is below a threshold level) and report the results to test controller 100. The results may include network performance measurements, such as bit rate, jitter, and latency for the simulated network traffic of the particular type. Test agents 110A-110D may refrain from executing the performance tests when the network is not in a quiescent state. By detecting quiescent periods and performing network testing only during the quiescent periods, the impact of testing on network performance is reduced.
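The quiescence-gated behavior described above can be sketched as follows. This is a minimal illustration, not the described implementation: the function names, the 0.2 utilization threshold, and the stubbed performance test are assumptions introduced here for clarity.

```python
def is_quiescent(network_activity: float, threshold: float = 0.2) -> bool:
    """Return True when measured network activity (utilization, 0.0-1.0)
    is below the threshold, i.e., the network is in a quiescent state.
    The 0.2 threshold is an illustrative assumption."""
    return network_activity < threshold

def run_if_quiescent(network_activity: float, performance_test, threshold: float = 0.2):
    """Execute the performance test only during a quiescent period;
    otherwise refrain from testing and return None."""
    if is_quiescent(network_activity, threshold):
        return performance_test()
    return None

# A stubbed performance test returning the kinds of measurements named
# above (bit rate, jitter, latency); the values are placeholders.
stub_test = lambda: {"bit_rate_mbps": 940.0, "jitter_ms": 0.4, "latency_ms": 12.0}

quiet_result = run_if_quiescent(0.05, stub_test)   # quiescent: test runs
busy_result = run_if_quiescent(0.85, stub_test)    # busy: test is skipped
```

By refraining whenever `is_quiescent` returns False, the agent avoids adding test traffic during busy periods, which is the impact-reduction property the passage describes.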
In another example, test agents 110A-110D may execute network tests when the network is in a non-quiescent state, for example, to determine how different types of application traffic are treated during busy periods. Controller 100 may report the results of such testing, along with the detected quiescence state, in a report that shows treatment of different traffic types as a function of network quiescence.
(8000 bits) × (1×10⁻⁶ s per bit) = 0.008 s, or 8 milliseconds
In step 3, test agent 110B receives the 1000 bytes of data from test agent 110A. In step 4, test agent 110B records the amount of time it takes to receive the 8000 bits from the transmission medium, measured from the time that the first bit is received until the time that the last bit is received. In step 5, test agent 110B reports the time to receive the 1000 bytes to test agent 110A.
In step 6, test agent 110A calculates the difference between the time to transmit the 1000 bytes and the time to receive the 1000 bytes and reports the result to the test controller (not shown in
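The arithmetic in this example can be reproduced directly. A per-bit time of 1×10⁻⁶ s corresponds to a 1 Mbps link, so 1000 bytes (8000 bits) take 8 ms to serialize; step 6 then compares the sender's transmit time with the receiver's receive time. The function names below are illustrative, not from the described system.

```python
def serialization_time_s(n_bytes: int, seconds_per_bit: float) -> float:
    """Time to place n_bytes on the transmission medium at the given
    per-bit time (steps 1-4 above)."""
    return n_bytes * 8 * seconds_per_bit

def transit_difference_s(tx_time_s: float, rx_time_s: float) -> float:
    """Difference between the time to transmit and the time to receive
    the same block of data (the step 6 calculation)."""
    return rx_time_s - tx_time_s

# 1000 bytes at 1 microsecond per bit (a 1 Mbps link):
tx = serialization_time_s(1000, 1e-6)   # 0.008 s, i.e., 8 milliseconds
```

A nonzero difference between transmit and receive times can indicate queuing or other delay introduced between the two agents.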
Returning to
In step 6, the test agent configures itself using the configuration data received from the test controller, and in step 7 executes the test. Continuing with the streaming video example, the test agent may stream video to another agent, and either the transmitting or receiving agent may report (step 8) results, such as bitrate, jitter, and latency to test controller 100. In step 9, the test agent returns to its monitoring mode for identifying the quiescence state of the network.
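The monitor/report/configure/execute/report cycle in steps 6-9 can be sketched as a simple exchange between an agent and a controller. Everything here is a stand-in: the controller class, the `"quiescent"`/`"busy"` state labels, and the stubbed streaming-video test result are assumptions made for illustration.

```python
from dataclasses import dataclass, field

@dataclass
class ControllerStub:
    """Minimal stand-in for test controller 100: records quiescence
    reports and hands back a test configuration only when the reported
    state is quiescent."""
    reports: list = field(default_factory=list)
    results: list = field(default_factory=list)

    def receive_quiescence(self, state: str):
        self.reports.append(state)
        # Configure a streaming-video test only during quiescent periods.
        return {"test": "streaming_video"} if state == "quiescent" else None

    def receive_results(self, results: dict):
        self.results.append(results)

def agent_cycle(controller: ControllerStub, observed_state: str) -> None:
    """One monitor -> report -> configure -> execute -> report cycle
    (steps 6-9 above); the agent then resumes monitoring."""
    config = controller.receive_quiescence(observed_state)
    if config is not None:
        # Execute the configured test (stubbed) and report the results.
        controller.receive_results({"test": config["test"], "bitrate_mbps": 25.0})
```

Running `agent_cycle` with a busy state produces a quiescence report but no test results; a quiescent state produces both, matching the sequence described above.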
It should be noted that performance test software may be loaded on the test agents prior to initiation of the quiescence monitoring. In such an example, the test controller may configure the test agents to execute the performance test by sending messages to the test agents indicating which performance test to perform.
The subject matter described herein is not limited to the process illustrated in
In step 502, the test agent reports the quiescence state to the test controller. The quiescence state may include an indicator of the presence, absence, or degree of activity in the network.
In step 504, the test agent receives performance test configuration data from the test controller. As stated above, in one example, the tests may include application-specific network tests, which may include generating synthetic application traffic of a particular type, transmitting the traffic from one test agent to one or more other test agents over the network, and measuring performance of the network with respect to the application traffic of the particular type. Measurements that may be taken include bit rate, latency, jitter, etc.
In step 506, the test agent executes the performance test and reports test results, such as the aforementioned measurements, to the test controller. These measurements may be compared to corresponding performance parameters of a user's agreement with a service provider to determine whether the user is receiving the level of service agreed upon with the service provider. For example, the user may contract for a downlink bit rate of 5 Gbps and receive only 3 Gbps. In such a scenario, it may be desirable to perform additional tests to determine the cause of the bandwidth shortfall. For example, it may be desirable to determine whether other users are attached to the network during the test and whether bandwidth sharing resulted in the lower bandwidth allocation to the agent running the test software. After step 506, control returns to step 500, where the process repeats. It should be noted that quiescence testing and reporting in steps 500 and 502 may be performed independently of, and even simultaneously with, performance testing and reporting in step 506.
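The service-level comparison in the example above reduces to a simple shortfall calculation; the function name and units here are illustrative assumptions.

```python
def sla_shortfall_gbps(contracted_gbps: float, measured_gbps: float) -> float:
    """Return the downlink bandwidth shortfall relative to the user's
    agreement; 0.0 if the measured rate meets or exceeds the contract."""
    return max(0.0, contracted_gbps - measured_gbps)

# The example above: contracted 5 Gbps, measured 3 Gbps -> 2 Gbps short,
# which may trigger the additional diagnostic tests described.
shortfall = sla_shortfall_gbps(5.0, 3.0)
```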
The subject matter described herein is not limited to implementing network testing only when the network is quiescent. For example, once the quiescence state (i.e., the degree of quiescence in the network) is determined, the test controller may instruct the test agents to conduct a test, and the controller may report the results of the test along with the quiescence state of the network. For example, if it is desirable to perform a particular test when the network is at least 50% occupied, the controller may instruct the test agents to execute the test when the quiescence report from the test agents indicates that the network is at least 50% occupied. The test agents may execute the test and report the results to the controller. The controller may generate a report that indicates the test being performed, the results of the test, and the quiescence state during the test. It is also noted that the quiescence test, such as that illustrated in
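The controller-side logic for busy-network testing described above can be sketched as a gate on reported occupancy plus a combined report. The 0.5 occupancy requirement, function names, and report fields are illustrative assumptions.

```python
def should_instruct_test(occupancy: float, required_occupancy: float = 0.5) -> bool:
    """Instruct agents to run the busy-network test only when the
    reported occupancy is at least the required level (50% in the
    example above)."""
    return occupancy >= required_occupancy

def build_report(test_name: str, results: dict, occupancy: float) -> dict:
    """Combine the test being performed, its results, and the quiescence
    state (here, occupancy) observed during the test into one report."""
    return {"test": test_name, "results": results, "occupancy": occupancy}

# At 60% occupancy the test is triggered and its results are reported
# together with the quiescence state observed during the test.
if should_instruct_test(0.6):
    report = build_report("throughput", {"bitrate_mbps": 120.0}, 0.6)
```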
It will be understood that various details of the subject matter described herein may be changed without departing from the scope of the subject matter described herein. Furthermore, the foregoing description is for the purpose of illustration only, and not for the purpose of limitation.
This application claims the priority benefit of U.S. Provisional Patent Application Ser. No. 62/380,946, filed Aug. 29, 2016, the disclosure of which is incorporated herein by reference in its entirety.
Number | Name | Date | Kind |
---|---|---|---|
6073085 | Wiley et al. | Jun 2000 | A |
6549882 | Chen et al. | Apr 2003 | B1 |
6625648 | Schwaller et al. | Sep 2003 | B1 |
6625750 | Duso | Sep 2003 | B1 |
6651093 | Wiedeman et al. | Nov 2003 | B1 |
6763380 | Mayton et al. | Jul 2004 | B1 |
7278061 | Smith | Oct 2007 | B2 |
7730029 | Molotchko et al. | Jun 2010 | B2 |
9356855 | Gintis | May 2016 | B2 |
9641419 | Gintis | May 2017 | B2 |
10205938 | Regev et al. | Feb 2019 | B2 |
10425320 | Nistor et al. | Sep 2019 | B2 |
20030086425 | Bearden et al. | May 2003 | A1 |
20030149765 | Hubbard et al. | Aug 2003 | A1 |
20040003068 | Boldman et al. | Jan 2004 | A1 |
20040066748 | Burnett | Apr 2004 | A1 |
20040193709 | Selvaggi et al. | Sep 2004 | A1 |
20050050190 | Dube | Mar 2005 | A1 |
20060146703 | Cha et al. | Jul 2006 | A1 |
20070081467 | Hurst et al. | Apr 2007 | A1 |
20080010523 | Mukherjee | Jan 2008 | A1 |
20080056131 | Balakrishnan | Mar 2008 | A1 |
20080126561 | Ryu et al. | May 2008 | A1 |
20090097409 | Akhter et al. | Apr 2009 | A1 |
20100177644 | Kucharczyk | Jul 2010 | A1 |
20110170433 | Scobbie | Jul 2011 | A1 |
20120311132 | Tychon et al. | Dec 2012 | A1 |
20140047112 | Komiya et al. | Feb 2014 | A1 |
20140169189 | Kalkunte | Jun 2014 | A1 |
20140229605 | Besser | Aug 2014 | A1 |
20140355613 | Pope et al. | Dec 2014 | A1 |
20150106670 | Gintis | Apr 2015 | A1 |
20150188788 | Kolesnik et al. | Jul 2015 | A1 |
20160080243 | Kodama | Mar 2016 | A1 |
20160087861 | Kuan et al. | Mar 2016 | A1 |
20160134864 | Regeve et al. | May 2016 | A1 |
20160182310 | Gintis | Jun 2016 | A1 |
20170171044 | Das et al. | Jun 2017 | A1 |
20170180233 | Nistor et al. | Jun 2017 | A1 |
Number | Date | Country |
---|---|---|
10-1401990 | Jun 2014 | KR |
WO 2016077084 | May 2016 | WO |
Entry |
---|
Notice of Allowance and Fee(s) Due for U.S. Appl. No. 14/581,931 (dated Dec. 19, 2016). |
“IxNetwork™ AVB Solution,” Data Sheet, Document No. 915-1889-01 Rev A, www.ixiacom.com, pp. 1-18 (Oct. 2014). |
Kreibich et al., “Netalyzr: Illuminating Edge Network Neutrality, Security, and Performance,” TR-10-006, International Computer Science Institute, pp. 1-17 (May 2010). |
“Hawkeye—Active Network Monitoring Platform,” www.ixia.com, https://www.ixiacom.com/sites/default/files/2017-08/Ixia-V-DS-Hawkeye.pdf, pp. 1-7 (Aug. 2017). |
“XR2000 Active Monitoring Hardware Endpoint for the Hawkeye™ and IxChariot® Product Lines,” www.ixia.com, https://www.ixiacom.com/sites/default/files/resources/datasheet/xr2000.pdf, pp. 1-2 (Feb. 2016). |
Notice of Allowance and Fee(s) Due for U.S. Appl. No. 14/984,459 (dated Jan. 18, 2019). |
Notice of Allowance and Fee(s) Due and Applicant-Initiated Interview Summary, for U.S. Appl. No. 14/537,851 (dated Sep. 17, 2018). |
Final Office Action for U.S. Appl. No. 14/984,459 (dated Jul. 12, 2018). |
Communication of the extended European search report for European Application No. 15858789.9 (dated May 17, 2018). |
Non-Final Office Action for U.S. Appl. No. 14/537,851 (dated Mar. 21, 2018). |
Non-Final Office Action for U.S. Appl. No. 14/984,459 (dated Dec. 1, 2017). |
Advisory Action for U.S. Appl. No. 14/537,851 (dated Sep. 20, 2017). |
Communication of European publication number and information on the application of Article 67(3) EPC for European Patent Application No. 15858789.9 (dated Aug. 23, 2017). |
Final Office Action for U.S. Appl. No. 14/537,851 (dated May 19, 2017). |
Non-Final Office Action for U.S. Appl. No. 14/537,851 (dated Oct. 6, 2016). |
Advisory Action & AFCP 2.0 Decision for U.S. Appl. No. 14/537,851 (dated Sep. 15, 2016). |
Final Office Action for U.S. Appl. No. 14/537,851 (dated Jun. 6, 2016). |
Notification of Transmittal of the International Search Report and the Written Opinion of the International Searching Authority, or the Declaration for International Application No. PCT/US2015/058297 (dated Feb. 24, 2016). |
Non-Final Office Action for U.S. Appl. No. 14/537,851 (dated Nov. 20, 2015). |
“IEEE Standard for Local and metropolitan area networks—Audio Video Bridging (AVB) Systems,” IEEE Standards Association, IEEE Computer Society, IEEE Std 802.1BA™-2011, pp. 1-45 (Sep. 30, 2011). |
“IEEE Standard for Layer 2 Transport Protocol for Time-Sensitive Applications in Bridged Local Area Networks,” IEEE Standards Association, IEEE Computer Society, IEEE Std 1722™-2011, pp. 1-57 (May 6, 2011). |
“IEEE Standard for Layer 3 Transport Protocol for Time-Sensitive Applications in Local Area Networks,” IEEE Standards Association, IEEE Computer Society, IEEE Std 1733™-2011, pp. 1-21 (Apr. 25, 2011). |
Teener, Michael Johas, “No-excuses Audio/Video Networking: the Technology Behind AVnu,” AVnu™ Alliance White Paper, pp. 1-10 (Aug. 24, 2009). |
Notice of Allowance and Fee(s) Due for U.S. Appl. No. 14/984,459 (dated Apr. 15, 2019). |
Commonly-assigned, co-pending U.S. Appl. No. 16/251,365 for “Methods, Systems and Computer Readable Media for Proactive Network Testing,” (Unpublished, filed Jan. 18, 2019). |
Communication pursuant to Article 94(3) EPC for European Patent Application Serial No. 15 858 789.9 (dated Mar. 29, 2019). |
Number | Date | Country | |
---|---|---|---|
20180062972 A1 | Mar 2018 | US |
Number | Date | Country | |
---|---|---|---|
62380946 | Aug 2016 | US |