This application relates generally to computer network communications, and particularly to a method of and system for identifying a particular network device which contributes to poor quality of service of real-time data transmission across a network.
The popularity of IP (Internet Protocol) telephony (e.g., VoIP, video calls, etc.) is increasing, and deployments of IP telephony are growing correspondingly in both the number of subscribers and the size of networks. The increasing number of subscribers using IP telephony for their day-to-day communication places an increased load on network infrastructure, which leads to poorer voice quality where capacity is inadequate or infrastructure is faulty.
IP telephony places strict requirements on IP packet loss, packet delay, and delay variation (or jitter). In complex multi-site customer networks, there may be many WAN edge routers that interconnect the many branches of an enterprise, or of many small businesses managed by a service provider.
Probable causes of poor voice quality at a WAN edge router are codec conversions, mismatched link speeds, and bandwidth oversubscription owing to the number of users, the number of links, and/or the link speeds. Each of these causes results in buffer overruns, leading to packet discards, which in turn degrade voice quality or quality of service.
The present invention is illustrated by way of example and not limitation in the figures of the accompanying drawings, in which like references indicate similar elements and in which:
FIG. 2a is a schematic representation of a computer system in accordance with an example embodiment.
FIG. 2b shows a network which includes the computer system of FIG. 2a.
FIG. 3a shows a table of MOS scores with their associated qualities.
FIG. 3b shows a table of expected MOS values based on impairment factors.
FIGS. 4a and 4b show flow diagrams of methods, in accordance with example embodiments, to identify a network device contributing to a poor QoS in a real-time data network.
In the following description, for purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of an embodiment of the present invention. It will be evident, however, to one skilled in the art that the present invention may be practiced without these specific details.
A plurality of IP telephones 124 to 132 are connected to the WAN 102 via the switches 110 to 114 and the routers 104 to 108. The IP telephones 124 to 132 may be fixed or mobile telephones, e.g., VoIP telephones. In addition, the system 100 may include a voice application server 120, such as a voicemail system, an IVR (Interactive Voice Response) system, or the like, and also includes a computer system in the form of a call manager 122, in accordance with an example embodiment. It should, however, be noted that the example embodiments are not limited to voice-only transmission but extend to any real-time (or time-critical) communications, such as video.
It is to be understood that the example IP telephones 124 to 132 communicate with one another and/or with other telephones by digitising voice or other sound (or even video, with video telephones) and by sending the voice data in a stream of IP packets across the WAN 102 or other network. It is important for networks carrying voice streams to provide a high quality of service (QoS) so that the voice is clear, or at least audible, when received by a receiving telephone. Thus, packet loss or delay is undesirable, as it lowers the QoS. This is not necessarily a problem with conventional data packet transmission, e.g., with non-voice or non-real-time data, as dropped packets can be retransmitted and delayed packets reassembled in due course.
FIG. 2a shows an example embodiment of a computer system 200 (e.g., a computer server) for implementing the methodology described herein. In an example embodiment, the computer system 200 may be configured as the Call Manager 122 of the system 100 described above.
The computer system 200 includes a processor 202 and a network interface device 206 (e.g., a network card) for communication with a network. The processor 202 comprises a plurality of conceptual modules, which correspond to functional tasks performed by the processor 202. To this end, the computer system 200 may include a machine-readable medium, e.g., the processor 202, main memory, and/or a hard disk drive, which carries a set of instructions to direct the operation of the processor 202, for example in the form of a computer program. More specifically, the processor 202 is shown by way of example to include: a monitoring module 210 to monitor network devices connected to the system 200; a generating module 212 to generate a sample real-time data stream; a comparing module 214 to compare the quality of the sample real-time data stream with pre-defined quality criteria; a detecting module 216 to detect network devices in a network of which the system 200 forms part; and a determining module 218 to determine whether or not any detected network devices are contributing to a poor QoS. It is to be understood that the processor 202 may be one or more microprocessors, controllers, or any other suitable computing device, resource, hardware, software, or embedded logic. Furthermore, the functional modules 210 to 218 may be distributed among several processors, or alternatively may be consolidated within a single processor, as shown in FIG. 2a.
It is important to note that the computer system 200 need not include all of the modules 210 to 218. Accordingly, some of the conceptual modules 210 to 218 may also be distributed across different network devices. For example, in an example embodiment, the monitoring module 210, the detecting module 216, the determining module 218, and a reporting module may be provided in a network management system. Further, in an example embodiment, the generating module 212 and the detecting module 216 may be provided in a call agent. It should also be noted that multiple instances of a module (e.g., duplicate modules) may be provided in different devices across the network.
The monitoring module 210 monitors L3 network devices in a network to which the computer system 200 is connected. The monitoring module 210 is configured to poll or interrogate the network devices intermittently, e.g., at pre-defined monitoring intervals, to determine performance data or statistics for at least one, but preferably all, of the interfaces on the network devices. The monitoring module 210 is particularly configured to monitor performance statistics for network routers. The performance statistics which are monitored include the processor utilisation and memory utilisation of each monitored network device, for example expressing the memory utilisation of each device as a percentage of maximum memory utilisation. The monitoring module 210 may further monitor non-error IP packets which are dropped or discarded, also in the form of a percentage. The monitoring module 210 may thus, for instance, record that 10% of non-error data packets are being dropped by a particular network device (e.g., due to buffer overruns).
These monitored performance statistics provide an indication of whether or not a particular network device, such as one of the routers 104 to 108, is coping satisfactorily with the traffic on each of its interfaces, e.g., an ATM (Asynchronous Transfer Mode) interface, a T1 interface, etc. The traffic statistics may be temporarily stored for later use, e.g., on a memory module connected to the computer system 200.
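By way of non-limiting illustration, the following Python sketch shows one possible shape for such a monitoring loop. The InterfaceStats record mirrors the statistics named above; poll_device() is a hypothetical stand-in for whatever SNMP-style or CLI query a real deployment would use, and is not part of the described embodiment.

    import time
    from dataclasses import dataclass

    @dataclass
    class InterfaceStats:
        device: str          # e.g. "router-104"
        interface: str       # e.g. an ATM or T1 interface name
        cpu_pct: float       # processor utilisation, percent
        mem_pct: float       # memory utilisation, percent of maximum
        discard_pct: float   # non-error IP packets discarded, percent

    def poll_device(device: str) -> list[InterfaceStats]:
        """Hypothetical: query one L3 device for per-interface statistics."""
        raise NotImplementedError("replace with real SNMP/CLI access")

    def monitor(devices: list[str], interval_s: float,
                history: list[InterfaceStats]) -> None:
        """Poll each device at the pre-defined monitoring interval and
        store its statistics for later use."""
        while True:
            for device in devices:
                history.extend(poll_device(device))
            time.sleep(interval_s)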
The generating module 212 may be operable to generate and send a sample real-time data stream (e.g., a known voice clip) to a remote network device or other computer system. The sample real-time data stream may be of a known quality, so that quality degradation can be measured. It is to be appreciated that the generating module 212 may be remote from the other modules, e.g., hosted by a router or switch. The sample stream is transmitted between two endpoints, namely a source endpoint and a destination endpoint (which may be randomly selected). The generating module 212 may serve as the source endpoint, while the destination endpoint may be a remote computer system, e.g., a router or switch. One or more network devices (e.g., the WAN edge routers 104 to 108) are in the path of the sample stream, so that the quality of the data stream after transmission is influenced by the network device(s). In other embodiments, the generating module 212 can be located on a system separate from the computer system 200, with the computer system 200 optionally serving as a destination endpoint.
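Purely as a sketch, and under assumptions not made in the description above (a plain UDP socket in place of an RTP transport, and an illustrative packet layout of sequence number plus send timestamp), a generating module might packetise a known clip as follows:

    import socket
    import struct
    import time

    def send_sample_stream(clip: bytes, dst: tuple[str, int],
                           payload_bytes: int = 160,
                           interval_s: float = 0.02) -> None:
        """Send a known-quality clip as a numbered, timestamped packet
        stream so the destination endpoint can measure loss, delay,
        and jitter against the known original."""
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        seq = 0
        for off in range(0, len(clip), payload_bytes):
            header = struct.pack("!IQ", seq, time.monotonic_ns())
            sock.sendto(header + clip[off:off + payload_bytes], dst)
            seq += 1
            time.sleep(interval_s)  # pace like 20 ms voice frames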
The comparing module 214 may compare the quality of the sample real-time data stream after transmission with pre-defined quality criteria which, for example, include impairment factors such as codec type, network topology, etc. The quality of the sample stream after transmission may be measured by a measuring module (such as the measuring module 262, described further below).
The determining module 218 then determines whether or not any of the detected network devices in the sample stream path are overloaded or performing poorly, based on the performance statistics gathered by the monitoring module 210.
Referring now to FIGS. 3a and 3b, FIG. 3a shows a table 270 of MOS (Mean Opinion Score) values with their associated qualities, and is illustrated merely to give an indication of the range of MOS values, while FIG. 3b shows a table 280 of expected MOS values based on impairment factors such as the codec used.
An example embodiment is further described in use with reference to the flow diagrams of FIGS. 4a and 4b.
Referring to FIG. 4a, the monitoring module 210 may, at block 352, gather traffic statistics from the WAN edge routers 104 to 108 in the manner described above.
Although the call manager 122 may be used for measuring the quality of any real-time data stream, the example embodiment may find particular application in measuring the quality of sound or voice streams, for example voice streams used for IP telephony. Thus, at block 354, the generating module 212 may generate a sample voice stream of known quality (e.g., having a MOS of 5), and may transmit the sample voice stream to a destination endpoint, for example the switch 112. It will be noted that in the given example, because the call manager 122 is the source endpoint and the switch 112 is the destination endpoint, the WAN edge routers 104 and 106 both lie in the path of the sample voice stream. Thus, the quality of the sample stream as received by the switch 112 will be affected by the performance of the WAN edge routers 104 and 106. In addition, the generating module 212 may transmit the sample voice stream to other destination endpoints, for example the switch 114, to gauge the performance of WAN edge router 108. Thus, there may be a plurality of destination endpoints in the system 250, so that the sample voice streams pass through as many WAN edge routers as possible.
In another example embodiment, one of the WAN edge routers 104 to 108 may be the destination endpoint. Instead of, or in addition to, being a source endpoint, the call manager 122 may be a destination endpoint, and a router or switch may be a source endpoint. In such a case, the call manager 122 may include the measuring module 262, and the router or switch used as the source endpoint may include the generating module 212. Thus, there may be a plurality of source endpoints and a single destination endpoint.
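As a non-limiting illustration of arranging endpoints so that the sample streams pass through as many WAN edge routers as possible, the following greedy sketch assumes a hypothetical routers_on_path() topology query; neither that function nor the greedy selection strategy is prescribed by the description above.

    def routers_on_path(src: str, dst: str) -> set[str]:
        """Hypothetical: the WAN edge routers between two endpoints."""
        raise NotImplementedError

    def choose_endpoints(src: str, candidates: list[str],
                         all_routers: set[str]) -> list[str]:
        """Greedily pick destination endpoints until every WAN edge
        router lies on at least one sample-stream path, or no
        candidate adds further coverage."""
        uncovered, chosen = set(all_routers), []
        while uncovered:
            best = max(candidates,
                       key=lambda d: len(routers_on_path(src, d) & uncovered))
            gain = routers_on_path(src, best) & uncovered
            if not gain:
                break  # remaining routers unreachable from this source
            chosen.append(best)
            uncovered -= gain
        return chosen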
The measuring module 262 of the switch 112 may measure, at block 356, the quality of the received sample voice stream in accordance with a MOS estimation algorithm, and transmit data indicative of the measured voice quality back to the call manager 122. The comparing module 214 may thereafter compare, at block 358, the measured quality value of the sample voice stream against an expected quality value. For example, the comparing module 214 may determine what quality value is to be expected based on the network topology and/or the codec used for transmitting the sample voice stream. The comparing module 214, using impairment factors, may thus determine an expected quality of the sample voice signal after transmission. For example, if the sample voice stream was transmitted using the G.711 codec, the expected MOS is 4.10 (refer to table 280 of FIG. 3b).
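By way of example only, the comparison at block 358 might be sketched as follows. Only the G.711 entry (expected MOS of 4.10) comes from the description above; the remaining entries of table 280 are left as placeholders rather than invented here.

    EXPECTED_MOS = {
        "G.711": 4.10,
        # further codec entries would be populated from table 280
    }

    def below_expected(measured_mos: float, codec: str,
                       tolerance: float = 0.0) -> bool:
        """True when the measured MOS of the sample stream falls short
        of the expected MOS for the codec used, indicating poor QoS."""
        return measured_mos < EXPECTED_MOS[codec] - tolerance

    # e.g. a G.711 sample stream measured at MOS 3.2 indicates poor QoS
    assert below_expected(3.2, "G.711")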
Using the traffic statistics gathered at block 352 by the monitoring module 210, the determining module 218 may then determine, at block 366, which of the detected WAN edge routers 104 and 106 in the sample stream path, if any, are contributing to the poor quality of service and, more specifically, which of those routers' interfaces are contributing to it. For example, if the traffic statistics show that an ATM interface of WAN edge router 104 had (and/or has) very high memory or CPU usage (for example, 80% to 100%), or was (and/or is) discarding an unusually high proportion of non-error packets (e.g., one in ten non-error packets being discarded), it is likely, or at least possible, that the ATM interface of WAN edge router 104 is contributing to a poor quality of service.
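As a sketch of the determination at block 366, using the example thresholds above (80% to 100% utilisation, roughly one in ten non-error packets discarded) and the InterfaceStats record from the earlier monitoring sketch; the exact threshold values are illustrative, not prescribed:

    CPU_MEM_THRESHOLD_PCT = 80.0
    DISCARD_THRESHOLD_PCT = 10.0

    def suspect_interfaces(
            stats_on_path: list[InterfaceStats]) -> list[InterfaceStats]:
        """Return the interfaces, among devices on the sample-stream
        path, whose statistics suggest a contribution to poor QoS."""
        return [s for s in stats_on_path
                if s.cpu_pct >= CPU_MEM_THRESHOLD_PCT
                or s.mem_pct >= CPU_MEM_THRESHOLD_PCT
                or s.discard_pct >= DISCARD_THRESHOLD_PCT]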
It is to be understood that the order of some of the steps/operations described above may be changed and the same result may still be achieved. For example, the step of monitoring the routers, at block 352, may be performed later in the process, for example before or after the quality of the sample voice stream is measured at block 356, or before or after the WAN edge routers 104 to 108 are identified, at block 364.
The reporting module 226 may generate a report (e.g., in the form of a dashboard), at block 368, which summarizes the performance of each interface of each of the identified, potentially faulty WAN edge routers 104 and 106, insofar as it relates to the transmission quality of real-time data such as voice streams. The network administrator, on reviewing the report, may be in a better position to correct the problem, for example by adjusting or bypassing whichever of the WAN edge routers 104 to 108 is causing the low quality of service.
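Purely for illustration, a textual report of the kind generated at block 368 might be rendered from the suspect interfaces identified by the earlier sketch as follows; a production dashboard would, of course, be richer.

    def render_report(suspects: list[InterfaceStats]) -> str:
        """One line per suspect interface, summarising the statistics
        relevant to real-time transmission quality."""
        lines = [f"{'device':<12} {'interface':<12} {'cpu%':>5} {'mem%':>6} {'discard%':>9}"]
        for s in suspects:
            lines.append(f"{s.device:<12} {s.interface:<12} "
                         f"{s.cpu_pct:5.1f} {s.mem_pct:6.1f} {s.discard_pct:9.1f}")
        return "\n".join(lines)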
The example computer system 400 includes a processor 402 (e.g., a central processing unit (CPU), a graphics processing unit (GPU) or both), a main memory 404 and a static memory 406, which communicate with each other via a bus 408. The computer system 400 may further include a video display unit 410 (e.g., a liquid crystal display (LCD) or a cathode ray tube (CRT)). The computer system 400 also includes an alphanumeric input device 412 (e.g., a keyboard), a user interface (UI) navigation device 414 (e.g., a mouse), a disk drive unit 416, a signal generation device 418 (e.g., a speaker) and a network interface device 420.
The disk drive unit 416 includes a machine-readable medium 422 on which is stored one or more sets of instructions and data structures (e.g., software 424) embodying or utilized by any one or more of the methodologies or functions described herein. The software 424 may also reside, completely or at least partially, within the main memory 404 and/or within the processor 402 during execution thereof by the computer system 400, the main memory 404 and the processor 402 also constituting machine-readable media.
The software 424 may further be transmitted or received over a network 426 via the network interface device 420 utilizing any one of a number of well-known transfer protocols (e.g., HTTP).
While the machine-readable medium 422 is shown in an example embodiment to be a single medium, the term “machine-readable medium” should be taken to include a single medium or multiple media (e.g., a centralized or distributed database, and/or associated caches and servers) that store the one or more sets of instructions. The term “machine-readable medium” shall also be taken to include any medium that is capable of storing, encoding or carrying a set of instructions for execution by the machine and that cause the machine to perform any one or more of the methodologies of the present invention, or that is capable of storing, encoding or carrying data structures utilized by or associated with such a set of instructions. The term “machine-readable medium” shall accordingly be taken to include, but not be limited to, solid-state memories, optical and magnetic media, and carrier wave signals.
Although an embodiment of the present invention has been described with reference to specific example embodiments, it will be evident that various modifications and changes may be made to these embodiments without departing from the broader spirit and scope of the invention. Accordingly, the specification and drawings are to be regarded in an illustrative rather than a restrictive sense.
The call manager 122 and/or switch 112, or any other computer system or network device in accordance with an example embodiment may be in the form of computer system 400.
The example methods, devices and systems described herein may be used for troubleshooting voice quality issues in a network environment. A network administrator may, based on the generated report, identify which network devices are contributing to a poor quality of service. The network administrator may therefore not need to check the performance of every network device in the network, but rather is provided with a shortlist of network devices which are potentially degrading voice quality.
This application is a continuation of U.S. patent application Ser. No. 11/466,390, filed on Aug. 22, 2006, which is incorporated herein by reference in its entirety for all purposes.
Publication: US 2014/0129708 A1, May 2014, US.
Related U.S. Application Data: parent application Ser. No. 11/466,390, filed Aug. 2006 (US); child (present) application Ser. No. 14/153,137 (US).