This disclosure relates generally to service performance monitoring in a virtualized communication network, in which at least one network function is virtualized.
In traditional telecommunications network operations, the conventional approach for the analysis or validation of quality of the media (or bearer) signal in a telephone call targeted for monitoring is to play out a known signal at one end of the call, record the signal at the other end, and assess the performance of the call by evaluating the quality and characteristics of the output signal, as compared to the known input signal, to determine metrics such as an objective quality score, delay, changes in the signal level or its spectral content, etc. These objective measures are checked against expected values to determine if the quality of the target call is as expected.
In the data communications context, there are also tools that “sniff” bearer packets on live networks and analyze Session Initiation Protocol (SIP), Real-Time Transport Protocol (RTP), and RTP Control Protocol (RTCP) performance to obtain statistics such as loss, jitter, delay, etc., at the points in the network where data is collected.
With the introduction of new media-related services, along with the ever-increasing complexity of the transport networks, there is increasing interest in validation and performance monitoring of these services.
Network Function Virtualization (NFV) is a relatively new domain of activity in the telecommunication industry, directing the evolution of telecommunication networks towards an architecture based on Cloud technologies, with the ultimate objective of accruing the same type of benefits realized in the Information Technology (IT) sector. However, the operation of telecom networks is bound by expectations and requirements that are much more demanding than those of IT. Furthermore, the IT industry's Cloud architecture had the opportunity to evolve and mature over time, whereas in the case of NFV, the pace of evolution is expected to be fast, involving the simultaneous deployment of many new components, new interfaces or open interfaces, new technologies, and a multi-vendor mix of equipment on an unprecedented scale. Experience shows that such a fast-paced integration along multiple dimensions, all of which are new, is likely to present nontrivial challenges in monitoring and managing performance, as well as in troubleshooting.
Therefore, performance monitoring of the media/service (or bearer) signal in the context of virtualization needs to be addressed.
In a first aspect of the present invention, there is provided a method, in a first bearer-processing node of a multi-node bearer path in a data session, for monitoring an overall performance related to the data session, wherein at least the first bearer-processing node in the multi-node bearer path is a virtualized network function, VNF. The method comprises: obtaining a first performance metric related to the data session, the first performance metric including information related to an infrastructure supporting the virtualized network function; and sending the first performance metric, including the information related to the infrastructure, over the multi-node bearer path, for use in determining the overall performance related to the data session.
In a second aspect, there is provided a network node adapted for use as a first bearer-processing node in a multi-node bearer path for a data session, wherein at least the first bearer-processing node in the multi-node bearer path is a virtualized network function. The network node comprises: a network interface circuit configured for communication with one or more other network nodes in a communication network; and processing circuitry, operationally connected to the network interface circuit, that configures the network node to: obtain a first performance metric related to the data session, the first performance metric including information related to an infrastructure supporting the virtualized network function; and send the first performance metric, including the information related to the infrastructure, over the multi-node bearer path, for use in determining an overall performance related to the data session.
In a third aspect, there is provided a method in a data-collecting node operable to communicate with a first bearer-processing node in a multi-node bearer path for a data session, wherein at least the first bearer-processing node is a virtualized network function. The method comprises: sending to the first bearer-processing node, an instruction to report a first performance metric related to the data session for at least the first bearer-processing node; receiving the first performance metric for at least the first bearer-processing node, the first performance metric including information related to an infrastructure supporting the virtualized network function of the first bearer-processing node; and determining an overall performance related to the data session based on the received performance metric.
In a fourth aspect, there is provided a method, in a first bearer-processing node of a multi-node bearer path for a data session, wherein at least the first bearer-processing node is a virtualized network function. The method comprises: receiving information related to an infrastructure supporting the virtualized network function; and determining an overall performance related to the data session based on the received information related to the infrastructure.
In a fifth aspect, there is provided a method, in a first bearer-processing node of a multi-node bearer path for a data session, wherein at least the first bearer-processing node in the multi-node bearer path is a virtualized network function. The method comprises: obtaining a first performance metric related to the data session for the first bearer-processing node, the first performance metric including information related to an infrastructure supporting the first virtualized network node; receiving, from a second bearer-processing node, over the bearer path, a second performance metric related to the data session for at least the second bearer-processing node, wherein, if the second bearer-processing node is a second virtualized network function, the second performance metric for at least the second bearer-processing node includes information related to an infrastructure supporting the second virtualized network node; combining the first performance metric for the first bearer-processing node and the second performance metric for the second bearer-processing node; and sending the combined performance metrics to a network node, for use in determining a performance related to the data session.
In a sixth aspect, there is provided a network node operable to communicate with a first bearer-processing node in a multi-node bearer path for a data session, wherein the first bearer-processing node is a virtualized network function. The network node comprises: a network interface circuit configured for communication with at least a first bearer-processing node; and a processing circuit configured to: send, to the first bearer-processing node, an instruction to report a first performance metric related to the data session for at least the first bearer-processing node; receive the first performance metric for at least the first bearer-processing node, the first performance metric including information related to an infrastructure supporting the virtualized network function of the first bearer-processing node; and determine an overall performance related to the data session based on the received performance metric.
In a seventh aspect, there is provided a network node in a multi-node bearer path for a data session, wherein at least the network node is a virtualized network function. The network node comprises: an interface circuit; and a processing circuit operationally connected to the interface circuit and configured to: receive information related to an infrastructure supporting the virtualized network function and to determine an overall performance related to the data session based on the received information related to the infrastructure.
In an eighth aspect, there is provided a network node adapted for use as a first bearer-processing node in a multi-node bearer path for a data session, wherein at least the first bearer-processing node in the multi-node bearer path is a virtualized network function. The network node comprises: an interface circuit; and a processing circuit operationally connected to the interface circuit and configured to: obtain a first performance metric related to the data session for the first bearer-processing node, the first performance metric including information related to an infrastructure supporting the first virtualized network node; receive, from a second bearer-processing node, over the bearer path, a second performance metric related to the data session for at least the second bearer-processing node, wherein, if the second bearer-processing node is a second virtualized network function, the second performance metric for at least the second bearer-processing node includes information related to an infrastructure supporting the second virtualized network node; combine the first performance metric for the first bearer-processing node and the second performance metric for the second bearer-processing node; and send the combined performance metrics to a network node, for use in determining an overall performance related to the data session.
In a ninth aspect, there is provided a network node adapted for use as a first bearer-processing node in a multi-node bearer path for a data session, wherein at least the first bearer-processing node in the multi-node bearer path is a virtualized network function. The network node comprises: an obtaining module configured to obtain a first performance metric related to the data session, the first performance metric including information related to an infrastructure supporting the virtualized network function; and a sending module configured to send the first performance metric including the information related to the infrastructure, over the multi-node bearer path, for use in determining an overall performance related to the data session.
In a tenth aspect, there is provided a network node adapted for use as a first bearer-processing node in a multi-node bearer path for a data session, wherein at least the first bearer-processing node in the multi-node bearer path is a virtualized network function. The network node comprises: an obtaining module configured to obtain a first performance metric related to the data session for the first bearer-processing node, the first performance metric including information related to an infrastructure supporting the first virtualized network node; a receiving module configured to receive, from a second bearer-processing node, over the multi-node bearer path, a second performance metric related to the data session for at least the second bearer-processing node, wherein, if the second bearer-processing node is a second virtualized network function, the second performance metric for at least the second bearer-processing node includes information related to an infrastructure supporting the second virtualized network node; a combining module configured to combine the first performance metric for the first bearer-processing node and the second performance metric for the second bearer-processing node; and a sending module configured to send the combined performance metrics to a network node, for use in determining an overall performance related to the data session.
In an eleventh aspect, there is provided a network node operable to communicate with a first bearer-processing node in a multi-node bearer path for a data session, wherein the first bearer-processing node is a virtualized network function. The network node comprises: a sending module configured to send to the first bearer-processing node, an instruction to report a first performance metric related to the data session for at least the first bearer-processing node; a receiving module configured to receive the first performance metric for at least the first bearer-processing node, the first performance metric including information related to an infrastructure supporting the virtualized network function of the first bearer-processing node; and a determining module for determining an overall performance related to the data session based on the received performance metric.
In a twelfth aspect, there is provided a computer program product comprising computer readable memory storing instructions thereon that, when executed by a network node, cause the network node to: obtain a first performance metric related to a data session, the first performance metric including information related to an infrastructure supporting a virtualized network function; and send the first performance metric, including the information related to the infrastructure, over a multi-node bearer path, for use in determining an overall performance related to the data session.
In a thirteenth aspect, there is provided a computer program product comprising computer readable memory storing instructions thereon that, when executed by a network node, cause the network node to: receive information related to an infrastructure supporting a virtualized network function; and determine an overall performance related to a data session based on the received information related to the infrastructure.
Other aspects and features of the present invention will become apparent to those ordinarily skilled in the art upon review of the following description of specific embodiments of the invention in conjunction with the accompanying figures.
Embodiments of the present invention will now be described, by way of example only, with reference to the attached Figures, wherein:
The discussion below should be taken to be exemplary in nature, and not as limiting of the scope of the present invention. The scope of the present invention should not be considered as limited by the implementation details described below, which as one skilled in the art will appreciate, can be modified by replacing elements with equivalent functional elements.
The intent of NFV, which is the subject of intense activity in the telecom industry, is to replace tailor-made telecommunication equipment with virtualized software applications deployed to operate on standard high-volume servers, which appear as “virtual machines” to the applications. By design, virtualization technologies are intended to decouple the software implementation of a given application from the underlying hardware resources. Consequently, virtualized systems are not designed to give the applications visibility into aspects of the underlying infrastructure, such as its identity or its performance, that could nonetheless affect the applications' performance and effectiveness.
The decoupling of the software application from the underlying hardware infrastructure has worked well for the IT industry, where performance requirements and expectations have traditionally been less strict than what is expected of telecommunication networks. The high performance expectations of telecom networks, some legislated, make it desirable to seek mechanisms that enhance the capability to troubleshoot issues that may manifest themselves in the form of bearer performance problems.
As such, an underlying system to provide end-to-end visibility into the state and performance of the bearer signal flowing through the components forming a network service can make a vast difference; it can facilitate direct insight into performance issues that may adversely affect the quality of service delivered to a network user, which ultimately shapes the perception that the user has of the quality of the service.
A concept for providing such insight was described in a patent application with application number PCT/IB2014/067290, filed on Dec. 23, 2014, and entitled “MEDIA PERFORMANCE MONITORING AND ANALYSIS”. The entire content of that patent application is incorporated herein by reference. The teachings of PCT/IB2014/067290 can be extended and applied to virtualized network functions (VNFs) for monitoring bearer performance in data sessions served by any mix of VNF and legacy equipment. The teachings proposed in this disclosure enable performance improvements for a broad range of communication applications and provide deep insight for bearer performance monitoring and troubleshooting. In the context of virtualization, access by the virtualized applications to data and information related to the infrastructure supporting them can be used to achieve performance improvement in communication nodes and in the virtualized applications.
Generally stated, embodiments of the present invention enable the virtualized applications to obtain insight into information, metrics, and aspects of the performance of infrastructure resources that can be used to improve troubleshooting, and even allow the applications to achieve an optimal performance related to a data session carrying services, for example multimedia data. In other words, VNFs are able to access information about the underlying infrastructure that is normally visible only within that infrastructure. To do so, new operations, information elements and attributes are defined on some reference points or interfaces. The information elements could be parameters that identify specific actions, determine a particular piece of information to be delivered, specify related intervals, etc.
The operations, information elements and attributes are intended to provide a view into the state of the infrastructure resources serving the virtualized applications, while leaving the decoupling of the virtualized application from the underlying infrastructure unaffected.
Open interfaces for these operations, information elements and attributes can be implemented and, subject to standardization, can allow the free mixing of applications and infrastructure from any combination of vendors, in both the virtualized application and infrastructure spaces.
This framework 10 defines the interaction between different functional blocks, via reference points. The framework 10 comprises the following elements: Virtualized network functions (VNFs) 12, Element Management System (EMS) 14, the Network Function Virtualization Infrastructure layer designated as NFVI 16, Network Function Virtualization Management and Orchestration (NFV MANO) 18 and operation and business support system (OSS/BSS) 20. The NFV MANO 18 comprises virtualized infrastructure managers (VIM) 22, an orchestrator 24, VNF managers 26, and a VNF and infrastructure description 28.
As mentioned above, a VNF is a virtualization of a network function in a legacy network. Examples of network functions for virtualization are 3GPP Evolved Packet Core network elements, e.g. the Mobility Management Entity (MME), Serving Gateway (SGW), and Packet Data Network Gateway (PGW); elements in a home network, e.g. a Residential Gateway (RGW); and conventional network functions, e.g. Dynamic Host Configuration Protocol (DHCP) servers, firewalls, etc. It should be noted that a VNF can be deployed over multiple virtual machines (VMs), where each VM hosts a single component of the VNF, or the whole VNF can be deployed in a single VM. Furthermore, a container could be used, instead of a VM, to run a VNF.
The NFVI 16 is used to run the virtualized software implementations of telecommunication applications. The NFVI 16 includes hardware resources 30, as well as virtualized resources and a virtualization layer 32.
The hardware resources 30 include computing 34, storage 36 and network 38 resources that provide processing, storage and connectivity to VNFs through the virtualization layer 32, such as a hypervisor.
The virtualization layer 32 is responsible for abstracting and logically partitioning physical resources, enabling the software that implements the VNF to use the underlying virtualized infrastructure and providing virtualized resources to the VNF so that the VNF can be executed. Also, the virtualization layer 32 ensures that VNFs are decoupled from hardware resources 30 and therefore, the software can be deployed on different physical hardware resources.
The reference point VI-Ha 40 interfaces the virtualization layer 32 with the hardware resources 30 to collect relevant hardware resource state information for managing the VNFs without being dependent on any hardware platform.
The VNFs use the execution environment provided by the NFVI 16 and represented by the reference point Vn-Nf 42, which ensures portability to different hardware types. Vn-Nf 42 does not assume any specific control protocol.
The other elements of the framework 10 are well-known in the art and will not be described in this disclosure.
Examples of information related to the underlying infrastructure, used as performance metrics or parameters for each compute node, may include:
Identification of the processor type on which the VNF is running, along with relevant data such as its clock frequency, cache characteristics, the number of processor cores made available to the VNF, etc.;
The total physical network bandwidth available for use by the compute resource;
The percentage of total central processing unit (CPU) usage, as an indication of the computational load borne by the compute resource;
The percentage of total used network bandwidth, as an indication of the level of Input/Output (I/O) load of the network hardware;
The total storage space available for use by a compute resource of the infrastructure;
The percentage of the total used storage space;
The number of virtual machines sharing the same compute resources;
The geographic location of the hardware infrastructure serving the VNF;
Elapsed time since the instantiation of the VNF on a particular processor/CPU; this is intended to make it possible for the application to detect the migration of the virtual machine it is running on, for example; and
The number of tenants that the infrastructure is serving.
It should be noted that such information is conventionally visible only within the NFVI 16, but it is rendered accessible and available to the VNF, according to embodiments of the present invention. Also, it is understood that a compute node refers to any part of the infrastructure that provides processing capabilities.
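By way of illustration only, the following Python sketch groups the infrastructure-related metrics listed above into a single record, as a VNF might receive it over the Vn-Nf reference point; all field names and units are assumptions of this sketch, not part of any standardized data model.

```python
from dataclasses import dataclass

@dataclass
class ComputeNodeMetrics:
    """Hypothetical record grouping the infrastructure-related metrics
    listed above; field names and units are illustrative only."""
    processor_type: str                 # processor identification, e.g. vendor/model
    clock_frequency_mhz: float          # relevant processor data
    cores_available: int                # processor cores made available to the VNF
    total_network_bw_mbps: float        # physical bandwidth available to the compute resource
    cpu_used_pct: float                 # computational load borne by the compute resource
    network_bw_used_pct: float          # I/O load on the network hardware
    total_storage_gb: float             # storage available to the compute resource
    storage_used_pct: float             # share of that storage in use
    vm_count_sharing: int               # VMs sharing the same compute resources
    geo_location: str                   # location of the hardware serving the VNF
    seconds_since_instantiation: float  # resets when the VM migrates to another processor
    tenant_count: int                   # tenants served by the infrastructure
```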
The new operations, information elements and attributes defined for the interfaces allow the virtualized network function: 1) to query the underlying infrastructure for the type of information mentioned above; and 2) to be notified automatically (or periodically) of changes, such as changes to the clock frequency, the CPU load, the number of tenants, the network load, as well as migration events, etc.
These operations, information elements and attributes are preferably defined at least for the reference points Vn-Nf 42, and Nf-Vi 44 (see
The described operations, information elements and attributes for accessing infrastructure-related information will allow virtualized network functions, engaged as bearer-processing nodes providing a network service in a virtualized network, to collaborate with peer nodes, virtualized or not, to provide end-to-end visibility into performance issues, as will be described hereinbelow. The usage of the NFV architecture framework's interfaces will preserve the desired level of abstraction of the underlying infrastructure while permitting seamless visibility into performance issues in multi-vendor deployments.
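As a purely illustrative sketch of the query and notification operations just described, the following Python fragment models a client-side view of the Vn-Nf reference point; the operation names, parameters, and transport are assumptions of this sketch and do not represent a standardized interface.

```python
from typing import Callable, Iterable

class VnNfClient:
    """Illustrative client-side view of the query/notify operations on the
    Vn-Nf reference point; names and message formats are hypothetical."""

    def __init__(self, transport):
        self._transport = transport  # stand-in for the Vn-Nf reference point

    def query(self, attributes: Iterable[str]) -> dict:
        """Query the underlying infrastructure for current values of the
        named attributes, e.g. 'cpu_used_pct' or 'clock_frequency_mhz'."""
        return self._transport.request({"op": "query", "attrs": list(attributes)})

    def subscribe(self, attributes: Iterable[str],
                  callback: Callable[[dict], None],
                  period_s: float = 0.0) -> None:
        """Register to be notified automatically of changes (period_s == 0)
        or periodically (period_s > 0), e.g. for clock-frequency changes,
        CPU-load changes, tenant-count changes, or migration events."""
        self._transport.register(
            {"op": "subscribe", "attrs": list(attributes), "period_s": period_s},
            callback)
```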
In addition to the benefit of performance monitoring and troubleshooting of a network service during a data session, certain virtualized network functions can extract additional performance benefits from the access to the performance metric information related to the infrastructure, as illustrated in the examples below.
One example of a VNF that can benefit from such information is the software implementation of a gateway that implements a jitter buffer and jitter-buffer management strategy. Jitter buffers are managed based on the accumulation of statistics of packet arrival over the duration of the service. The length of the jitter buffer is revised continually, to adapt to the changes in the packet arrival statistics. In the case of virtualized applications, migration of the corresponding virtual machine from one hardware resource to another can give rise to a false perception of a change in statistics, whereas the real cause is the arrival-phase discontinuity due to the change of the processor assigned to the VNF. Knowledge of the migration event allows the corresponding VNF to be aware of the real cause. The VNF can then deploy an appropriate strategy to maintain processing phase continuity rather than adapting to the (wrong) perception of the change in the packet arrival statistics. Alternatively, the VNF can reset its statistical model, and start constructing a fresh model. Furthermore, the visibility of migration (and its frequency) can be a valuable indicator to troubleshooting and performance monitoring, since a high frequency of migration is likely to give rise to the perception of poor quality of audio, video, or data performance.
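The following minimal sketch illustrates this strategy, assuming a migration notification is delivered to the VNF as described above; the statistics model and the smoothing coefficients are placeholders, not a prescribed jitter-buffer algorithm.

```python
class JitterBufferManager:
    """Sketch: reset the packet-arrival model on a migration event rather
    than adapting to a phantom change in arrival statistics."""

    def __init__(self):
        self.mean_interarrival_ms = 20.0  # running estimate of packet spacing
        self.jitter_ms = 0.0              # running jitter estimate
        self._last_arrival_ms = None

    def on_packet(self, arrival_ms: float) -> None:
        if self._last_arrival_ms is not None:
            gap = arrival_ms - self._last_arrival_ms
            # exponentially weighted updates of the arrival statistics
            self.mean_interarrival_ms += 0.05 * (gap - self.mean_interarrival_ms)
            self.jitter_ms += 0.0625 * (abs(gap - self.mean_interarrival_ms) - self.jitter_ms)
        self._last_arrival_ms = arrival_ms

    def on_migration_event(self) -> None:
        # The infrastructure reported a migration: the arrival-phase
        # discontinuity is due to the processor change, so re-anchor the
        # phase and start a fresh statistical model instead of adapting.
        self._last_arrival_ms = None
        self.jitter_ms = 0.0
```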
Information related to the infrastructure rendered accessible as described above can be also used to optimize the performance in the case of virtualized network functions that control quality/computation complexity trade-offs. For example, high-compression algorithms (e.g. for voice, video) can achieve higher compression ratios by conducting more thorough optimization iterations. This, however, would come at the expense of higher consumption of the CPU resources. Knowledge of the level of the load on the underlying hardware can help such a VNF to select the best trade-off between CPU and network bandwidth utilization, in order to optimize compression performance without causing the overload of the underlying hardware.
Similar to the above example, knowledge of the network bandwidth load in the underlying hardware would allow a VNF to strike a more appropriate balance in terms of the required compression efficiency, in order to achieve a more global optimization.
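A minimal sketch of such a trade-off policy follows, assuming the CPU and network-bandwidth loads are obtained from the infrastructure as described above; the thresholds and effort levels are arbitrary placeholders.

```python
def select_encoder_effort(cpu_used_pct: float, net_bw_used_pct: float) -> int:
    """Pick a compression effort level (1 = cheapest, 10 = most thorough
    optimization iterations) from the reported infrastructure load."""
    if cpu_used_pct > 85.0:
        return 1   # CPU is the bottleneck: use the cheapest encoding mode
    if net_bw_used_pct > 70.0 and cpu_used_pct < 50.0:
        # Bandwidth is the bottleneck and the CPU has headroom: spend more
        # optimization iterations for a higher compression ratio.
        return 10
    return 5       # balanced default
```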
Now turning to
In step 210, the network node receives information related to an infrastructure supporting the virtualized network function. This step could be performed in response to receiving an instruction to report one or more performance metrics related to the data session. In this case, responsive to receiving the instruction, the network node requests the information related to the infrastructure. Optionally or alternatively, the network node can be configured to report such information on a regular basis, or upon a change in the requested information, for example. The information related to the infrastructure is received via the reference point Vn-Nf 42, for example. Examples of the information related to the infrastructure have been provided above.
In step 220, the network node determines an overall performance related to the data session based on the received information related to the infrastructure. In other words, using the received information related to the infrastructure for the network node, the overall performance of the entire data session can be determined. Therefore, the quality of the whole data session can be evaluated, and troubleshooting, optimization, or improvement can then be applied to the data session.
For example, the network node can determine a performance optimization for the data session using the information related to the infrastructure. As mentioned before, the network node can determine the best trade-off between CPU and network bandwidth utilization, for example, in order to optimize the compression performance for voice and/or video signals. The network node can also determine a migration event and use it to optimize the performance of the audio or video quality, in the case of a jitter buffer, for example.
The network node 300 comprises an interface circuit 310, and a processing circuit 320.
The interface circuit 310 is configured to communicate with other network nodes and is operationally connected to the processing circuit 320.
The processing circuit 320 comprises a processor 330 and a memory 340, operationally connected to the processor 330. The memory 340 contains instructions, programs and codes that, when executed, cause the processor 330 to perform method 200.
In other words, the network node comprises a computer program product comprising a computer readable memory storing instructions thereon, that, when executed by a network node, cause the network node to perform method 200.
It should be noted that the network node 300 is assumed to be a virtualized network function. As such, the interface circuit 310 and the processing circuit 320 are provided by the underlying infrastructure, e.g. hardware resources 30 of
As shown above, the teachings of the present disclosure can enhance the quality of service in a data session, in the context of virtualization or in a virtualized network. The term ‘service’ can comprise any applications, such as voice calls, multimedia data communications, etc. As such, the data session may carry voice calls and multimedia data or any other kinds of data.
More specifically, in the following, methods and network nodes used for monitoring end-to-end performance of a data session in a communication network comprising a plurality of nodes, at least one of which is a virtualized network function, will be described.
The dashed lines show the direction 408 of relaying accumulated performance metric data from the LTE device 404 (on the left) towards the DCA 402 (on the right), and the direction 410 of relaying and distributing DCA 402 control messages. Note that the direction 410 is opposite to the direction 408. Performance metric data thus flow from the LTE device 404 towards the data collection agent 402, in a manner where each bearer-processing node 406 in the path 400 receives performance data from its upstream node and relays it to its downstream bearer-processing node after combining its own performance data and metrics. Note that as used here, “upstream” refers to the direction in which instructions flow from the DCA 402 to the bearer-processing nodes 406, while “downstream” refers to the direction in which performance data is relayed towards the DCA 402 from the bearer-processing nodes 406. Also, it should be noted that at least one of the bearer-processing nodes 406 in the path is a virtualized network function.
In some embodiments, the process of collecting or obtaining performance metrics takes place continually, e.g., at regular intervals, allowing the data collection agent 402 to receive and process the combined information package, to provide a glimpse into the state of the operation of the bearer-processing nodes in the data session for the time interval corresponding to the latest set of data, either in real time, or offline. The DCA 402 can send control messages to all nodes 406 that support the in-band protocol, as shown in
Two types of protocols are employed to implement the accumulation and relaying of performance metrics illustrated in
New protocols or extensions of existing protocols may be used to establish the traffic of performance data and control messages. Although it is possible to define a completely new protocol, an exemplary approach (described here) is to extend the SIP procedures already defined to set up RTP Control Protocol (RTCP) Extended Reports (RTCP XR), so that they provide for the end-to-end negotiation of the protocol for the collection of performance data and the distribution of the control instructions.
One possible implementation is to place specifications in the body of various SIP messages, in a similar manner as is used for Session Description Protocol (SDP) payloads. The new extensions proposed in this disclosure can be used, in various embodiments, to achieve the following:
Similarly, new protocols or extensions of existing protocols may be used to provide for collection or obtention of performance data and for distribution of control instructions. Below, a protocol is described for transfer of performance metrics from bearer-processing nodes 406 to the data collection agent 402 and for the transfer of control information from the data collection agent 402 to the bearer-processing nodes 406. While it is possible to define a completely new protocol for this purpose, the preferred method would be to extend the capabilities of RTCP XR, especially since RTCP XR was already designed to accommodate extensions based on future needs.
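For illustration, the sketch below packs an accumulated-metrics payload into an extended report block following the RFC 3611 block layout; the choice of block type value is an assumption of this sketch (no value is assigned here), and the payload encoding is deliberately left opaque.

```python
import struct

def pack_xr_block(block_type: int, payload: bytes) -> bytes:
    """Pack an RTCP XR-style extended report block (RFC 3611 layout:
    8-bit block type, 8-bit type-specific byte, 16-bit block length
    expressed in 32-bit words minus one, then the payload padded to a
    32-bit boundary). block_type is a hypothetical, unassigned value."""
    padded = payload + b"\x00" * (-len(payload) % 4)  # pad to a word boundary
    length_words = (4 + len(padded)) // 4 - 1         # incl. header, minus one
    return struct.pack("!BBH", block_type, 0, length_words) + padded
```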
In order to relay accumulated performance metrics, the requirement for each bearer-processing node 406 is to build up the encapsulation for performance data by adding its own performance metrics, measured over the latest time interval or for the most recent event, to the encapsulation received from the upstream node. This cumulative package is then relayed to the next bearer-processing node downstream (as determined by the direction and the termination from which the SIP negotiation for data collection originated), such that the final set of data records arriving at the data collection agent 402 contains the full set of performance information pertaining to all participating bearer-processing nodes 406 in the session, for a given time interval.
Note that since RTCP (including RTCP XR) is defined only for peer-to-peer exchange of information, an important part of the extension addressed here is to define how the accumulation of performance information is to take place. Furthermore, since the voice/video/data session topology, not known a priori by the data collection agent 402, could be complex, e.g. in the case of teleconference scenarios, and dynamic, e.g. due to handover or changes in the number of conference attendees, it is necessary to define a mechanism that can allow the data collection agent 402 to construct the topology model for the session, and to correctly associate the collected or obtained data to their respective nodes 406 and their terminations/ports.
One example of how to achieve this is through the topology coding techniques described below. A systematic approach is required to label the nodes 406 involved in a voice/video/data session, as well as the connections linking them. This is needed in order for the DCA 402 to be able to construct a model of the call/service topology, to correctly associate performance metrics with the appropriate nodes and links, and to facilitate the transmission of control messages to the desired nodes.
A number of factors need to be considered in defining an appropriate method for encoding the topology of the session. First, bearer-processing nodes 406 can enter and leave the topology dynamically, as the session topology changes due to call processing events, handovers, etc. Second, the knowledge of the session topology has to be built up incrementally, and updated over time, with each node contributing its own knowledge and information in such a manner that its topological relationship with upstream nodes is manifested and decoded uniquely. Even for a session with a stable topology, the data collection agent 402 is likely to see changes in the topology information as updated performance and topology data arrive asynchronously from more distant nodes in the bearer path 400. Similarly, a given node 406 in the bearer path 400 is likely to receive topology information from the upstream nodes that may vary over time. Further, since each in-path node has a role in building up the topology information, it is necessary for a given node to present topology information to its downstream node in a way that the labels applied to the same upstream nodes and terminations remain constant over time. This continuity is required because each node has to be able to keep track of upstream nodes from one data collection interval/epoch to the next. However, as long as the node/termination labels passed on to the downstream node retain their correspondence to the applicable (upstream) node and termination, a node does not necessarily have to use the same labels that it received from its upstream node when transmitting the built-up topology data downstream.
A bearer-processing node 406 is connected to one or more peer nodes, and exchanges bearer data with them, via terminations. Each node 406 in the session is connected to at least one other node 406. The simplest topology is a case with a single bearer-processing node 406 connected to the data collection agent 402. For the purpose of developing a systematic formulation to describe a topology, the specific pieces of information required to define a given node are:
To facilitate the identification of the terminations of a node, a numbering scheme can be applied as shown in
It should be noted that for as long as a termination exists, the number assigned to it remains the same. Furthermore, if a termination is removed, its number should remain reserved and will not be assigned to any other termination. This is necessary to avoid confusion and to allow stability of the termination labels for downstream bearer-processing nodes, and ultimately, for the data collection agent 402. Topology information and performance metrics computed for each termination in a node 406, or arriving from other nodes, are accumulated and transmitted towards the DCA 402 through the primary termination of each node.
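The stability rule for termination numbers can be sketched as follows; the allocation details (e.g. which number designates the primary termination) are assumptions of this sketch, since the actual numbering scheme is defined by the referenced figure.

```python
class TerminationRegistry:
    """Sketch of the stable termination-numbering rule: numbers increase
    monotonically and are never reused, so labels seen by downstream
    nodes and by the DCA remain stable for the life of the session."""

    def __init__(self):
        self._next_number = 1  # assumption: 0 reserved for the primary termination
        self.active = {}       # number -> termination handle

    def add(self, termination) -> int:
        number = self._next_number
        self._next_number += 1  # never reused, even after removal
        self.active[number] = termination
        return number

    def remove(self, number: int) -> None:
        # The termination goes away, but its number stays reserved.
        self.active.pop(number, None)
```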
Since, as stated above, the topology information has to be built up as the information passes through the in-path bearer-processing nodes 406 towards the data collection agent 402, it is useful to define a descriptor block for a single node. An example descriptor block is shown in
In selecting a node ID, a node should attempt to base the selection on a mechanism that minimizes the probability of overlap with IDs from other nodes. This could be achieved, for example, through the use of a random component in the Node ID.
Descriptor blocks such as those described above are transmitted by each participating bearer-processing node 406, along with the performance data and metrics, to the next downstream peer node. The downstream node generates its own node descriptor block and stacks it on top of the blocks it has received from the upstream node(s). In doing so, the node scans the received node descriptor blocks to determine whether its Node ID happens to have been used in the received data. If so, the node in some embodiments may continue to use its current Node ID, but will revise the overlapping Node ID(s) in the received data blocks to a new (unused) ID. Once a node defines a new ID to replace the overlapping Node ID that it found in a received data block, the same ID will be used henceforth to replace the overlapping ID in future epochs; the node will have to maintain a translation table for this purpose, as long as the overlap exists.
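The collision rule and translation table can be sketched as follows; representing a descriptor block as a dictionary with a 'node_id' field is purely illustrative, since the actual block contents are defined by the referenced figure.

```python
import secrets

def stack_descriptor_blocks(own_id: int, received_blocks: list, translation: dict) -> list:
    """Sketch: prepend this node's descriptor block to the received stack,
    rewriting any received Node ID that overlaps with our own to a fresh
    unused ID, and recording the mapping in 'translation' (original -> new)
    so the same replacement is applied in future epochs."""
    used = {own_id} | {blk["node_id"] for blk in received_blocks}
    rewritten = []
    for blk in received_blocks:
        if blk["node_id"] == own_id:
            if own_id not in translation:
                new_id = own_id
                while new_id in used:
                    # a random component minimizes further overlap
                    new_id = secrets.randbits(32)
                translation[own_id] = new_id
                used.add(new_id)
            blk = dict(blk, node_id=translation[own_id])
        rewritten.append(blk)
    # This node's own block is stacked on top of the (possibly rewritten) blocks.
    return [{"node_id": own_id}] + rewritten
```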
Performance metrics can be encapsulated in a number of ways. One way is for each node to simply concatenate its own performance metrics with the performance metrics it receives from an upstream peer node or nodes, using the node descriptor block as a delimiter. Other manners of encapsulation are possible, of course. The key to any approach is that it allows the DCA 402 to decode the information and attribute it correctly to the different segments of the topology. The encapsulation technique may also carry additional data that provides useful insight.
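A minimal sketch of the concatenation approach, assuming each piece has already been serialized to bytes; the wire format is illustrative only.

```python
def relay_encapsulation(own_descriptor: bytes, own_metrics: bytes,
                        received: bytes) -> bytes:
    """Sketch: prepend this node's descriptor block (acting as the
    delimiter) and its metrics for the latest interval to the encapsulation
    received from upstream, so the package that ultimately reaches the
    DCA 402 covers every participating node."""
    return own_descriptor + own_metrics + received
```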
It should be noted that in addition to performance metrics, other useful information can also be sent by each node and relayed to the DCA 402. One example of such data is a catalog of the metrics that can be produced and transmitted by a node and/or a node termination.
The immediately preceding discussion was focused on the collection or obtention of performance metrics and the relaying of the performance metric data towards the data collection agent 402. The DCA 402 can also send instructions or requests to control the information that it requires from each and every node in the session topology. For example, it can request a catalog of available metrics from all (or specific) nodes, or issue messages to specify the particular type of data it would like to receive from a given node, or a given node termination. When appropriate, a message sent from the DCA 402 could also carry security-related information such as passwords or encryption parameters and/or keys. Individual nodes and their terminations are addressed using the node and termination IDs that are embedded in the topology information. The topology information which, as described above, is composed of the descriptor blocks, delivers to the data collection agent 402 a description of the topology along with unique labels for each of the nodes and terminations. The collection agent 402 can then address a request and/or instructions to particular nodes and/or node terminations in the topology using those same labels, and send them towards the destination through all of the in-path bearer-processing nodes 406. Each in-path bearer-processing node 406 removes the message layer addressed to itself, in some embodiments, and transmits the remainder towards the node(s) from which it receives performance data. Any portion of the message addressed to nodes that are no longer connected to an in-path node is dropped. As discussed in further detail below, a bearer-processing node that has provided an alternate node ID for an upstream node should substitute the upstream node's actual/original node ID in such messages before passing the messages upstream.
As noted above, the IDs/labels seen by the data collection agent 402 may have been altered by an in-path node, in order to ensure uniqueness of the labels. The DCA 402 is generally unaware of this alteration. Accordingly, DCA 402 messages/requests that start out with an “altered” node ID must be translated back to the original label once the request arrives at the in-path node that altered the ID, using the translation table that was generated to keep track of node ID translations.
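The reverse mapping can be sketched as follows, reusing the translation table built while stacking descriptor blocks (original -> altered); the message representation is illustrative only.

```python
def translate_control_message(msg: dict, translation: dict) -> dict:
    """Sketch: rewrite a DCA-originated control message that targets an
    altered node ID back to the upstream node's actual/original ID before
    passing the message upstream."""
    reverse = {altered: original for original, altered in translation.items()}
    target = msg.get("node_id")
    if target in reverse:
        msg = dict(msg, node_id=reverse[target])
    return msg
```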
As described above, the nodes 406 in the data path of a given session combine their own performance metrics with those received from upstream nodes, and transmit the resulting data on their primary termination towards the DCA 402. In specific cases where an upstream node may not support the set of protocols defined here, it is still possible for a compliant node to relay performance metrics of a non-compliant peering node. For example, a Media Gateway compliant with the protocols defined here, which may be peering with an LTE terminal/device that does not support the protocols, could still bundle up and relay performance metrics that the LTE terminal/device reports using the standardized RTCP protocol.
The Data Collection Agent (DCA) 402 collects the performance metrics relayed from upstream bearer-processing nodes, and provides the user interface to the bearer-performance monitoring system. The task of monitoring may include some or all of the following aspects, in various embodiments:
In some embodiments, the DCA 402 also allows the control of the analysis/monitoring session as described below:
The DCA 402 implementation can also support a web interface from which the collected data can be displayed in a user-friendly and device-independent manner. This web interface may consist of a web server application and an HTML web client interface, in some embodiments. The web server application accepts parameters from the web client interface specifying, for example, user access credentials, the data that should be presented, the manner that the data is to be displayed, and the time period of interest. In the case of off-line analysis, the data is then retrieved from the DCA 402 data store, analyzed, formatted, and sent to the web client interface for display. The web server application may be implemented with any number of technologies, such as a Java servlet, and executes on a web server accessible to authorized users. The web client interface may be an HTML web page that any compliant web browser may access from a web server. It presents an access point to an authorized user, accepts specifications from the user, sends the appropriate parameters to the web server application, receives data from the web server application, and displays the data in a meaningful manner.
The design of the display format for the DCA should provide for the presentation of captured information in a comprehensive, yet intuitive manner and in a compact form, to facilitate quick understanding by the viewer. An example of such a format is shown in
In various embodiments, for each link connecting the nodes, any or all of the following information is displayed:
For each node, selected statistics are plotted for each port of data entry, such that:
The display may start with a default set of performance metric plots or a set negotiated through SIP negotiation and SDP exchange, if such metrics are provided by the nodes in the call path. However, the displayed metrics are intended to be easily switched as necessary, through means such as drop-down menus in the area of the plots, or on the appropriate side of the displayed node.
In some embodiments, the DCA 402 may also provide for a view in which the nodes in the data session topology are illustrated with respect to their geographical positions. Information about the geographic locations of the nodes may be determined from location information sent by each node along with performance metrics data, and/or from a pre-stored lookup table indexed by node identifiers, for example.
In view of the detailed description above of the various techniques for accumulating and evaluating performance metrics in a data session carried out through a multi-node bearer path, it will be appreciated that
More specifically,
Method 1400 begins, at block 1420, with obtaining a first performance metric related to the data session, the first performance metric including information related to an infrastructure supporting the virtualized network function. Examples of information related to the infrastructure have been provided above. In some embodiments, obtaining the first performance metric includes collecting, determining, calculating, or requesting and receiving it, for example.
The obtaining of the first performance metric can optionally be triggered responsive to receiving, from another node in the multi-node bearer path, an instruction to report the first performance metric related to the data session for at least the first bearer-processing node, as shown at block 1410. For example, the DCA 402 can send the instruction to report the first performance metric on a regular basis, i.e. for each time interval, or based on events. Also, the instruction to report can be sent from the DCA 402 to the first bearer-processing node in a first direction, e.g. the upstream direction. Optionally or alternatively, in some embodiments, the first bearer-processing node may be pre-configured, e.g., with default settings or with settings negotiated during call set-up (e.g., via SIP), to obtain and send the performance metrics, or to do so only under certain circumstances, without the need to receive an instruction to report.
As shown at block 1430, the first bearer-processing node sends the first performance metric, including the information related to the infrastructure, over the multi-node bearer path, for use in determining the overall performance related to the data session. For example, the obtained (or first) performance metric is sent to the DCA 402 from the first bearer-processing node in a second direction, e.g. the downstream direction. Also, the first bearer-processing node obtains the first performance metric for each of one or more time intervals or for each of one or more events. It should be noted that the first bearer-processing node can obtain a plurality of (or one or more) first performance metrics.
In some embodiments, once the first bearer-processing node receives the instruction to report, it can send/forward an instruction to report a second performance metric related to the data session to a second bearer-processing node, over the multi-node bearer path. The instruction may also indicate that the collected or obtained (second) performance metric is to be sent back to the first bearer-processing node.
In response to receiving the instruction to report, the second bearer-processing node obtains the second performance metric and if the second bearer-processing node is a virtualized network function, the second performance metric includes information related to an infrastructure supporting the virtualized network function. The obtained (or second) performance metric from the second bearer-processing node is sent to the first bearer-processing node over the multi-node bearer path.
After receiving the second performance metric from the second bearer-processing node, the first bearer-processing node adds its own performance metric(s) and then relays both (first and second) performance metrics over the multi-node bearer path to the DCA 402, for example. The first performance metric for the first bearer-processing node and the second performance metric for the second bearer-processing node are distinguished from one another by using node identifier labels corresponding to the first and second bearer-processing nodes, respectively.
In some other embodiments, method 1400 can also comprise the step of determining that the second bearer-processing node is a virtualized network function. To do so, the second bearer-processing node may have an application mode, which indicates the mode in which the application is running, either in a virtualized environment or in a legacy environment. It will be appreciated by a skilled person in the art that other methods can be used to determine that a bearer-processing node is a virtualized network function.
Upon determining that the second bearer-processing node is a virtualized network function, the VNF sends, to the NFVI 16, a request for the information related to the infrastructure, via the reference point Vn-Nf 42. The VNF then receives the information via the same reference point. Also, the first bearer-processing node (or VNF) could be configured to periodically access the information related to the infrastructure supporting the virtualized network function, so that it does not need to receive any requests.
A method 1500, as illustrated in
More specifically, method 1500 may comprise steps of:
as illustrated at block 1510, obtaining a first performance metric related to the data session for the first bearer-processing node, the first performance metric including information related to an infrastructure supporting the first virtualized network node;
as illustrated at block 1520, receiving, from a second bearer-processing node, over the multi-node bearer path, a second performance metric related to the data session for at least the second bearer-processing node, wherein, if the second bearer-processing node is a second virtualized network function, the second performance metric includes information related to an infrastructure supporting the second virtualized network node;
as illustrated at block 1530, combining the first performance metric for the first bearer-processing node and the second performance metric for the second bearer-processing node; and
as illustrated at block 1540, sending the combined performance metrics to a network node, for use in determining an overall performance related to the data session.
The same range and types for the performance metrics and the information related to the infrastructure discussed above are applicable to this example, as are the techniques discussed above for labeling performance metrics for the first and second bearer-processing nodes.
The first performance metric for the first bearer-processing node is obtained, and the performance metrics from the second bearer-processing node are received, for each of one or more intervals or events during the data session. Also, the obtaining of the first performance metric is responsive to receiving an instruction to report the first performance metric. The instruction is received in a first direction, e.g. direction 410 in
Method 1600 starts with, as shown at block 1610, sending to the first bearer-processing node an instruction to report a first performance metric related to the data session for at least the first bearer-processing node, which is a virtualized network function. For example, the instruction may indicate to report the first performance metric for each of one or more intervals or events during the data session.
As shown at block 1620, the data collecting node receives the first performance metric related to the data session, the first performance metric including information related to an infrastructure supporting the virtualized network function of the first bearer-processing node.
As shown at block 1630, the data collecting node determines an overall performance related to the data session based on the received first performance metric.
In some embodiments, method 1600 further comprises receiving, from the first bearer-processing node, a second performance metric related to the data session for a second bearer-processing node, for each one of one or more intervals or events during the data session, for example. If the second bearer-processing node is a virtualized network function, the second performance metric related to the data session for the second bearer-processing node includes information related to the infrastructure supporting the virtualized network function of the second bearer-processing node.
In some of these embodiments, the first performance metric for the first bearer-processing node and the second performance metric for the second bearer-processing node are received together, for each of the one or more intervals or events, and are distinguished from one another by node identifier labels corresponding to the first and second bearer-processing nodes, respectively. In some embodiments, method 1600 further comprises determining a bearer-path topology for the data session, based on node identifier labels and termination labels, included with the received performance metrics, for at least the first and second bearer-processing nodes. The method 1600 may further comprise generating a representation of the determined bearer-path topology for display, the representation including depictions of each bearer-processing node in the determined bearer-path topology and representations of one or more of the performance metrics for at least the first and second bearer-processing nodes. In some other embodiments, the method 1600 includes updating the representation of the determined bearer-path topology, in response to receiving updated topology information and/or performance metrics for one or more bearer-processing nodes in the bearer path. The method 1600 further includes determining a geographic location for each of two or more bearer-processing nodes in the determined bearer-path topology, wherein generating the representation of the determined bearer-path topology for display comprises overlaying the depictions of each bearer-processing node in the determined bearer-path topology, together with how they are interconnected, on a depiction of a map, based on the determined geographic locations.
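By way of illustration, the sketch below reconstructs a simple (linear) bearer-path topology from the stacked descriptor blocks, nearest node first; real sessions may branch (e.g. conferences), and the block fields assumed here ('node_id', 'termination') are placeholders for the contents defined by the referenced figure.

```python
def build_topology(blocks: list) -> dict:
    """Sketch: derive nodes and links from stacked descriptor blocks,
    assuming a linear chain ordered nearest-to-DCA first."""
    topology = {"nodes": [], "links": []}
    previous = None
    for blk in blocks:
        topology["nodes"].append(blk["node_id"])
        if previous is not None:
            # link between this node and the one nearer the DCA, labeled
            # with the termination on which the upstream data arrived
            topology["links"].append((blk["node_id"], previous, blk.get("termination")))
        previous = blk["node_id"]
    return topology
```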
A computer program for controlling the node 1700 to carry out a method embodying any of methods 1400, 1500 and 1600 is stored in a program storage 1730, which comprises one or several memory devices. Data used during the performance of any of the methods 1400, 1500 and 1600 is stored in a data storage 1720, which also comprises one or more memory devices, one or more of which may be the same as those used for the program storage 1730, in some embodiments. During performance of any of the methods 1400, 1500 and 1600, program steps are fetched from the program storage 1730 and executed by a Central Processing Unit (CPU) 1710, retrieving data as required from the data storage 1720. Output information resulting from performance of any of these methods can be stored back in the data storage 1720, or sent to an Input/Output (I/O) interface 1740, which includes a network interface circuit for sending and receiving data to and from other network nodes. The I/O interface 1740 may also include a radio transceiver for communicating with one or more terminals, in some embodiments. The CPU 1710 and its associated data storage 1720 and program storage 1730 may collectively be referred to as a processing circuit 1750. It will be appreciated that variations of this processing circuit 1750 are possible, including circuits comprising one or more of various types of programmable circuit elements, e.g., microprocessors, microcontrollers, digital signal processors, field-programmable gate arrays, application-specific integrated circuits, and the like.
Accordingly, in various embodiments of the invention, processing circuits, such as the CPU 1710, data storage 1720, and program storage 1730 in the node 1700, are configured to carry out one or more of the techniques described above.
It should be appreciated that the processing circuit 1750, when configured with appropriate program code, may be understood to comprise several functional “modules,” where each module comprises program code for carrying out the corresponding function, when executed by an appropriate processor.
Thus, for example, a network node configured to carry out the method 1400 comprises an optional receiving module 1810, an obtaining module 1820, and a sending module 1830.
The optional receiving module 1810 is configured to receive an instruction to report a first performance metric related to the data session for at least a first bearer-processing node.
The obtaining module 1820 is configured to obtain the first performance metric, the first performance metric including information related to an infrastructure supporting the virtualized network function.
The sending module 1830 is configured to send the first performance metric, including the information related to the infrastructure, over the multi-node bearer path, for use in determining an overall performance related to the data session.
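By way of non-limiting illustration, the following sketch suggests how the modules 1810, 1820 and 1830 might cooperate in software. The socket handling, field names, and the use of the host load average as infrastructure information are assumptions made only for the example.

```python
# Illustrative sketch only: a first bearer-processing node (VNF) that
# obtains a performance metric, augments it with infrastructure
# information, and sends it over the bearer path.
import json
import os
import time

def receive_instruction(ctrl_conn):
    """Optional receiving module 1810: await an instruction to report."""
    return json.loads(ctrl_conn.recv(4096).decode())

def obtain_metric(session_id):
    """Obtaining module 1820: session metric plus infrastructure info."""
    return {
        "node_id": "vnf-media-1",
        "session_id": session_id,
        "rtp": {"loss_pct": 0.2, "jitter_ms": 3.5},
        # Information related to the infrastructure supporting the VNF
        # (host load average used here as a stand-in, Unix-only):
        "infra": {"cpu_load": os.getloadavg()[0], "timestamp": time.time()},
    }

def send_metric(bearer_conn, metric):
    """Sending module 1830: send the metric over the multi-node bearer path."""
    bearer_conn.send(json.dumps(metric).encode())
```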
Similarly, a network node 1900 configured to carry out the method 1500 comprises an obtaining module 1910, a receiving module 1920, a combining module 1930, and a sending module 1940.
The obtaining module 1910 is configured to obtain a first performance metric related to the data session for the first bearer-processing node, the first performance metric including information related to an infrastructure supporting the first virtualized network function.
The receiving module 1920 is configured to receive, from a second bearer-processing node, over the multi-node bearer path, a second performance metric related to the data session for at least the second bearer-processing node, wherein, if the second bearer-processing node is a second virtualized network function, the second performance metric includes information related to an infrastructure supporting the second virtualized network function.
The combining module 1930 is configured to combine the first performance metric for the first bearer-processing node and the second performance metric for the second bearer-processing node.
The sending module 1940 is configured to send the combined performance metrics to a network node, for use in determining an overall performance related to the data session.
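By way of non-limiting illustration, the following sketch suggests how the combining and sending modules 1930 and 1940 might be realized, with per-node metrics keyed by node identifier labels so a downstream consumer can tell them apart. All names are assumptions for the example.

```python
# Illustrative sketch only: an intermediate bearer-processing node
# combining its own metric with one received over the bearer path.
import json

def combine_metrics(own_metric, received):
    """Combining module 1930: merge per-node metrics under their node IDs.
    `received` may be a single metric or an already-combined bundle."""
    if "node_id" in received:
        combined = {received["node_id"]: received}
    else:
        combined = dict(received)
    combined[own_metric["node_id"]] = own_metric
    return combined

def send_combined(conn, combined):
    """Sending module 1940: forward the bundle toward the node that
    determines the overall performance."""
    conn.send(json.dumps(combined).encode())
```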
Similarly, in a network node configured to carry out the method 1600, the sending module 2010 is configured to send, to the first bearer-processing node, an instruction to report a first performance metric related to the data session for at least the first bearer-processing node.
The receiving module 2020 is configured to receive the first performance metric related to the data session, the first performance metric including information related to an infrastructure supporting the virtualized network function of the first bearer-processing node.
The determining module 2030 is configured to determine an overall performance related to the data session, based on the received first performance metric.
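By way of non-limiting illustration, the following sketch suggests one way the determining module 2030 might derive an overall performance figure from the collected per-node metrics. The weights and the worst-link scoring rule are assumptions for the example, not a prescribed scoring method.

```python
# Illustrative sketch only: deriving an overall performance figure
# from per-node metrics keyed by node identifier labels.

def determine_overall_performance(combined):
    """Determining module 2030: score each node and report the bottleneck."""
    if not combined:
        return None
    worst_node, worst_score = None, None
    for node_id, m in combined.items():
        loss = m.get("rtp", {}).get("loss_pct", 0.0)
        jitter = m.get("rtp", {}).get("jitter_ms", 0.0)
        cpu = m.get("infra", {}).get("cpu_load", 0.0)
        # Penalize bearer impairments and infrastructure strain together.
        score = 100.0 - 10.0 * loss - jitter - 20.0 * max(0.0, cpu - 0.8)
        if worst_score is None or score < worst_score:
            worst_node, worst_score = node_id, score
    return {"bottleneck_node": worst_node, "overall_score": worst_score}
```

Under this assumed rule, the overall performance is bounded by the worst-performing node on the bearer path, which also identifies where troubleshooting effort should be directed.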
In the present description of various embodiments of present inventive concepts, it is to be understood that the terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of present inventive concepts. Unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which present inventive concepts belong. It will be further understood that terms, such as those defined in commonly used dictionaries, should be interpreted as having a meaning that is consistent with their meaning in the context of this specification and the relevant art, and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein.
As used herein, the singular forms “a”, “an” and “the” are intended to include the plural forms as well, unless the context clearly indicates otherwise. Well-known functions or constructions may not be described in detail for brevity and/or clarity. The term “and/or” includes any and all combinations of one or more of the associated listed items.
It will be understood that although the terms first, second, third, etc. may be used herein to describe various elements/operations, these elements/operations should not be limited by these terms. These terms are only used to distinguish one element/operation from another element/operation. Thus a first element/operation in some embodiments could be termed a second element/operation in other embodiments without departing from the teachings of present inventive concepts. The same reference numerals or the same reference designators denote the same or similar elements throughout the specification.
As used herein, the terms “comprise”, “comprising”, “comprises”, “include”, “including”, “includes”, “have”, “has”, “having”, or variants thereof are open-ended, and include one or more stated features, integers, elements, steps, components or functions but do not preclude the presence or addition of one or more other features, integers, elements, steps, components, functions or groups thereof. Furthermore, as used herein, the common abbreviation “e.g.”, which derives from the Latin phrase “exempli gratia,” may be used to introduce or specify a general example or examples of a previously mentioned item, and is not intended to be limiting of such item. The common abbreviation “i.e.”, which derives from the Latin phrase “id est,” may be used to specify a particular item from a more general recitation.
Example embodiments have been described herein, with reference to block diagrams and/or flowchart illustrations of computer-implemented methods, apparatus (systems and/or devices) and/or computer program products. It is understood that a block of the block diagrams and/or flowchart illustrations, and combinations of blocks in the block diagrams and/or flowchart illustrations, can be implemented by computer program instructions that are performed by one or more computer circuits. These computer program instructions may be provided to a processor circuit of a general purpose computer circuit, special purpose computer circuit, and/or other programmable data processing circuit to produce a machine, such that the instructions, which execute via the processor of the computer and/or other programmable data processing apparatus, transform and control transistors, values stored in memory locations, and other hardware components within such circuitry to implement the functions/acts specified in the block diagrams and/or flowchart block or blocks, and thereby create means (functionality) and/or structure for implementing the functions/acts specified in the block diagrams and/or flowchart block(s).
These computer program instructions may also be stored in a tangible computer-readable medium that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable medium produce an article of manufacture including instructions which implement the functions/acts specified in the block diagrams and/or flowchart block or blocks. Accordingly, embodiments of present inventive concepts may be embodied in hardware and/or in software (including firmware, resident software, micro-code, etc.) running on a processor such as a digital signal processor, which may collectively be referred to as “circuitry,” “a module” or variants thereof.
It should also be noted that in some alternate implementations, the functions/acts noted in the blocks may occur out of the order noted in the flowcharts. For example, two blocks shown in succession may in fact be executed substantially concurrently or the blocks may sometimes be executed in the reverse order, depending upon the functionality/acts involved. Moreover, the functionality of a given block of the flowcharts and/or block diagrams may be separated into multiple blocks and/or the functionality of two or more blocks of the flowcharts and/or block diagrams may be at least partially integrated. Finally, other blocks may be added/inserted between the blocks that are illustrated, and/or blocks/operations may be omitted without departing from the scope of inventive concepts. Moreover, although some of the diagrams include arrows on communication paths to show a primary direction of communication, it is to be understood that communication may occur in the opposite direction to the depicted arrows.
Many variations and modifications can be made to the embodiments without substantially departing from the principles of the present inventive concepts. All such variations and modifications are intended to be included herein within the scope of present inventive concepts. Accordingly, the above disclosed subject matter is to be considered illustrative, and not restrictive, and the appended examples of embodiments are intended to cover all such modifications, enhancements, and other embodiments, which fall within the spirit and scope of present inventive concepts. Thus, to the maximum extent allowed by law, the scope of present inventive concepts is to be determined by the broadest permissible interpretation of the present disclosure, and shall not be restricted or limited by the foregoing detailed description.
Number | Date | Country | Kind
---|---|---|---
PCT/IB2014/067290 | Dec 2014 | IB | international
This patent application claims priority based upon the prior PCT patent application entitled “MEDIA PERFORMANCE MONITORING AND ANALYSIS”, application number PCT/IB2014/067290, filed Dec. 23, 2014, by inventors Jimson Mah and Rafi Rabipour.
Filing Document | Filing Date | Country | Kind
---|---|---|---
PCT/IB2015/058371 | 10/29/2015 | WO | 00