The present invention has its application within the telecommunications sector, in particular in Quality Assurance for computer networks, and more specifically relates to a system and method for preventing, detecting and restoring Quality of Service (QoS) degradations in End-to-End (E2E) MultiProtocol Label Switching (MPLS) networks.
Internet Protocol (IP) core networks are usually deployed over Multiprotocol Label Switching (MPLS) technology due to the wide range of benefits that this encapsulation provides in terms of Traffic Engineering (TE), homogeneous provision of any type of service, restoration tools and Quality of Service (QoS) maintenance. For those reasons, in recent years MPLS has also been extended to other existing segments, such as regional networks. However, these MPLS domains have normally been kept separated, at least in large operators, mainly for scalability reasons.
Very recently, an effort has been made to define so-called End-to-End (E2E) MPLS networks, which provide scalable MPLS transport over any type of access and layer-1 technology, at any network segment (and between any network segments) and for any type of service. In other words, all network routers (distribution routers, edge routers and metro routers) provide a unique MPLS-based transport layer for any service established among Access nodes and Service nodes. The main advantages that E2E MPLS networks offer are the flexibility and homogeneity of providing services over Pseudo-Wires (PWs) in (and across) all network segments, the simplification of network management and the existence of E2E Operation, Administration and Maintenance (OAM) mechanisms defined for fault management. The most relevant example of E2E MPLS architectures is Seamless MPLS. However, one of the most important issues that E2E MPLS networks present is precisely related to fault management.
The total fault time (i.e. the time during which the service is unavailable) is composed of three time intervals: after a fault happens, the first step (i) is to detect that it has happened, the second (ii) is to locate where it has happened, and the third (iii) is to restore it. A goal of any fault management system should be to reduce the total fault time as much as possible.
For that purpose, automated processes are a requirement: if any of the three prior steps, (i) detection, (ii) location or (iii) restoration, requires human intervention, the response time increases and the service remains unavailable for a longer time. MPLS enables several automated restoration mechanisms, although it is worth mentioning that they are not fast on all occasions.
Besides, these are not the only challenges: current fault management processes (and restoration mechanisms) deal mainly with Loss of Connectivity (LoC) failures, but there exist other impairments which also affect QoS, such as network congestion. Thus, proper fault management also needs to address such degradation causes.
Current MPLS monitoring tools and restoration solutions are briefly described below:
Besides the tools already described above, there is still an additional functionality that plays an important role once a failure is detected and located: MPLS restoration. Restoration mechanisms need to be triggered to restore the client traffic flows, i.e., to inject them over an alternative path which does not present any fault. In MPLS there exist several procedures to achieve such behavior, Fast-Reroute (FRR) being the most common one. Moreover, for whatever reason (e.g. when congestion is detected), network operators may want to move the traffic load from one network segment to another. Such an operation needs to be executed without any loss of client traffic, which is known as the “make-before-break” approach. In MPLS it is possible to perform traffic engineering (TE) using RSVP-TE, an extension of the Resource Reservation Protocol (RSVP). Both restoration and traffic engineering processes can be configured as revertible or non-revertible, meaning that it is possible to determine whether the traffic must revert to the original path or not once the failure has been repaired.
Also, in order to perform delay- and loss-based traffic engineering (TE), there are several proposals from the IETF (in a pre-standard phase) which include the possibility of monitoring the network conditions prior to the establishment of any connectivity service, e.g., using the network status as input for the determination of the best path. This feature is essentially different from the ones presented above, which are focused on monitoring services that are already set up.
Another example worth mentioning is the method and apparatus for network management support of OAM functionality disclosed in EP1176759, which describes a network management system with a graphical user interface (GUI) comprising several features to facilitate the human operator's work, i.e., to facilitate the configuration and gathering of results of OAM-based monitoring. Therefore, human intervention is still required. The only automated processes described there are (i) the configuration of the OAM functionalities along the nodes forming the paths (primary and backup), and (ii) the gathering of the OAM test results and their presentation to the operator via the GUI. It is still the human operator who determines which tests need to be carried out upon reception of an alarm. Moreover, the method described in EP1176759 does not include prevention features for QoS degradation.
The previously presented state-of-the-art solutions represent different approaches to carrying out monitoring and performance measurement in real networks. Nevertheless, working as isolated features, they are neither adapted to nor solve all of the presented problems, especially in terms of bandwidth consumption and automated operation in E2E MPLS networks. Some deficiencies of existing solutions are described below:
Summarizing, there is no single tool that permits scalable fast restoration (and thus low traffic losses and high service availability) for every type of Quality of Service (QoS) degradation that may happen in large Multiprotocol Label Switching (MPLS) networks. In addition, monitoring systems to date lack automation, requiring human intervention to detect, correlate and locate QoS degradations, which again increases the total time required for restoration. Existing automated solutions present either high failure location times or a high monitoring load, meaning that the associated consumed bandwidth is very high, preventing operators from using this bandwidth to offer additional connectivity services. Therefore, there is a need in the state of the art for a system that prevents, detects and restores QoS degradations based on monitoring systems which make coordinated use of several such existing tools, without human intervention and with a fast response time.
The present invention solves the aforementioned problems and overcomes the previously explained state-of-the-art limitations by disclosing a method and system that makes use of currently available monitoring mechanisms for QoS degradation detection in a coordinated and automated fashion, so that the monitoring load can be reduced. This is done by performing a centralized coordination of the monitoring mechanisms, which permits detecting potential critical situations by means of lightweight (i.e. low bandwidth consuming) tools, and then confirming or invalidating the degradation by carrying out heavier measurements only in those segments where they need to be done. Therefore, the present invention provides a method and system for the automatic prevention, detection and restoration of QoS degradations, while minimizing the monitoring bandwidth consumed for this purpose: the invention makes use of low bandwidth consuming tools first, and confirms that degradations occur with heavier tools focused on specific segments, where an increment of bandwidth does not impact the whole network behaviour. The determination of critical segments also permits a faster restoration, which positively affects service availability.
Since in the prior art there is not a single monitoring tool that is adequate to overcome all sorts of degradations that can occur in current networks, the present invention makes use of the most powerful monitoring systems available in the market, coordinating them to increase the speed at which services are recovered from faults and to reduce the number of monitoring packets injected into the network. Moreover, the procedures defined for the invention are automated, which once again increases service availability, as human intervention is avoided.
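By way of illustration only, the following Python sketch outlines this two-stage strategy (lightweight detection first, targeted confirmation and restoration afterwards); all class and method names are hypothetical and are not prescribed by the invention:

# Minimal sketch of the coordinated two-stage monitoring strategy.
# All object interfaces (passive_analyzer, oam_tools, restorer) are assumptions.

class MonitoringCoordinator:
    def __init__(self, passive_analyzer, oam_tools, restorer):
        self.passive_analyzer = passive_analyzer  # lightweight, always-on tools
        self.oam_tools = oam_tools                # heavier, on-demand MPLS OAM tests
        self.restorer = restorer                  # MPLS signalling / restoration

    def on_lightweight_alarm(self, alarm):
        """React to an alarm raised by a low-bandwidth (passive) tool."""
        # 1. Narrow the problem down to candidate segments instead of the whole network.
        for segment in alarm.suspected_segments():
            # 2. Confirm (or invalidate) the degradation with a heavier active test,
            #    injected only within the suspected segment.
            result = self.oam_tools.run_active_test(segment, metric=alarm.metric)
            if result.degraded:
                # 3. Trigger restoration only for the confirmed critical segment.
                self.restorer.reroute_services(segment)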
According to a first aspect of the present invention, a method for restoring QoS degradations in MPLS networks is disclosed and comprises the following steps:
A second aspect of the present invention refers to a system for determining QoS degradations in MPLS networks, which comprises:
The active monitoring trigger module not only triggers tests of restored services but also, in a preferred embodiment of the invention, triggers the heaviest tests (active tests) that confirm degradations. Thus, the active monitoring trigger can be requested by any of the computation modules to actively run tests, either for degradation confirmation purposes or over the restored services.
The means for locating the faulted segments are in a network node of the MPLS network, from which a computation module of the system defined above receives the alarm and requests the location of that network segment. In the case of alarms received from the Application Layer, the locating means are provided by the system database, external or internal to the system, from which the location of the network segment used by the Application Layer is requested by a computation module of the system described above.
In a final aspect of the present invention, a computer program is disclosed, comprising computer program code means adapted to perform the steps of the described method when said program is run on a computer, a digital signal processor, a field-programmable gate array, an application-specific integrated circuit, a micro-processor, a micro-controller, or any combination of the aforementioned ones and/or another form of programmable hardware.
The method and system in accordance with the above described aspects of the invention have a number of advantages with respect to prior art, focused and oriented to increase the performance of E2E MPLS networks while providing services over the Label Switched Paths (LSPs). These advantages of the invention can be summarized as follows:
These and other advantages will be apparent in the light of the detailed description of the invention.
For the purpose of aiding the understanding of the characteristics of the invention, according to a preferred practical embodiment thereof and in order to complement this description, the following figures are attached as an integral part thereof, having an illustrative and non-limiting character:
The matters defined in this detailed description are provided to assist in a comprehensive understanding of the invention. Accordingly, those of ordinary skill in the art will recognize that variations, changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the invention. Also, descriptions of well-known functions and elements are omitted for clarity and conciseness.
Of course, the embodiments of the invention can be implemented in a variety of architectural platforms, operating and server systems, devices, systems, or applications. Any particular architectural layout or implementation presented herein is provided for purposes of illustration and comprehension only and is not intended to limit aspects of the invention.
It is within this context that various embodiments of the invention are now presented with reference to the attached figures.
In a possible embodiment, the criteria for the alarms management (others are possible) are the following:
With this approach, in case several alarms of different types arrive at the QMM system (10), the system (10) is able to manage them: by attending first to those which locate impairments more quickly, the system (10) is later able to determine which nodes/links and LSPs/services are affected, and can correlate the rest of the alarms so they do not need to be considered. It is important to mention that having a higher detection or location time does not prevent alarms from the slowest modules from appearing in the whole network scenario, since every tool monitors different parameters and is more adequate for different purposes.
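As an illustrative example only, the alarm priorities could be organized as in the following Python sketch, where the sources that locate impairments fastest are attended first (the concrete values are hypothetical and operator-configurable):

# Lower value = faster fault location = attended first (values are assumptions).
ALARM_PRIORITY = {
    "physical_layer": 1,     # e.g. loss of signal: the faulty link is known immediately
    "mpls_oam": 2,           # proactive OAM between nodes: the segment is already known
    "passive_analyzer": 3,   # per-link counters: link known, affected LSPs not yet
    "application_layer": 4,  # end-to-end symptom: the path still has to be located
}

def dispatch(pending_alarms):
    """Attend first the alarm that locates the impairment quickest; keep the rest
    so they can be correlated (and discarded) once the affected nodes/links and
    LSPs/services are known."""
    if not pending_alarms:
        return None, []
    ordered = sorted(pending_alarms, key=lambda a: ALARM_PRIORITY[a.source])
    return ordered[0], ordered[1:]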
For the exemplary network scenarios and use cases described below and shown in
If the system (10) finds any (which should normally be the case), it groups them according to the specific network segment or node affected (if already identified by the other alarms), and proceeds to a step (f1) of consulting the DataBase (36) again for information on all the services that might potentially be affected by such events. In the subsequent step (g1) the DataBase (36) provides the response.
On the contrary, if no other alarms are present, or those present have not yet located the affected segment(s), then the QMM system (10) requests (d1) the MPLS OAM (34) mechanisms in the network nodes (31) to carry out specific on-demand operations to locate the fault, depending on the type of alarm received from the application layer (20). The complete definition of which operations are associated with which alarms is outside the scope of the invention. To provide just an example, if the alarm refers to long delay in an audio-conference service, then the MPLS OAM (34) tools could be those that measure the packet delay along the path. The tests carried out by the MPLS OAM (34) tools should first be related to the end-to-end path. In case the result (e1) from the MPLS OAM (34) fault location operations is adequate, the system (10) declares that potential problems can be located within the customer premises. Resolution of such problems is also outside the scope of the invention. On the other hand, if the fault location result from the MPLS OAM (34) is unsatisfactory, then segment-by-segment testing is done to locate the specific segment or node affected by the degradation. These tests, performed by the MPLS OAM (34), are triggered and controlled by the QMM system (10), which is the entity holding the information about the segments.
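The end-to-end-first, segment-by-segment location strategy just described could be sketched in Python as follows (the OAM test call and the segment model are purely illustrative assumptions):

def locate_degradation(oam, end_to_end_path, segments, metric):
    """Return the degraded segment, or None when the end-to-end test is already
    satisfactory (in which case the problem is assumed to lie in the customer
    premises, whose resolution is outside the scope of the invention)."""
    # First test the whole end-to-end path with the OAM tool suited to the alarm
    # (e.g. a packet-delay measurement for a "long delay" alarm).
    if oam.test(end_to_end_path, metric).ok:
        return None
    # Otherwise test segment by segment; the QMM system holds the segment list.
    for segment in segments:
        if not oam.test(segment, metric).ok:
            return segment
    return None  # no individual segment identified by this round of tests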
Once the fault is located by the MPLS OAM (34) and this location result (e1) is received by the QMM system (10), the system (10) continues with steps (f1) and (g1) of query and answer, respectively, to/from the DataBase (36), equivalent to the ones described above. At this stage, the system (10) has a clear vision of which services can be affected by the different degradations, so it triggers (h1) MPLS signaling (35) to initiate the protection/restoration mechanisms for each of the services affected by the alarm. Results of the restoration procedures are provided in step (i1).
The QMM system (10) needs to check the correct operation of all the restored services, so it triggers (j1) on-demand monitoring mechanisms either at the application layer (20), if possible depending on the availability of such tools at the different customers' premises, or via MPLS OAM (34), which is always available. Results from testing are provided in step (k1).
In case some of the results from testing are unsatisfactory, the system (10) consults again (l1) the DataBase (36) for alternative paths for those services, and repeats (in a loop) the execution of steps (h1) to (k1) for those alternative paths.
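Steps (h1) to (l1) can be summarized, purely as an illustration, with the following Python sketch of the restore-then-verify loop (all names are hypothetical):

def restore_and_verify(signaling, monitor, database, affected_services):
    """Restore each affected service and keep trying alternative paths until the
    post-restoration tests are satisfactory or no alternative path remains."""
    for service in affected_services:
        path = database.alternative_path(service)
        while path is not None:
            signaling.reroute(service, path)   # steps (h1)/(i1): trigger restoration
            if monitor.verify(service).ok:     # steps (j1)/(k1): on-demand testing
                break                          # service correctly restored
            # step (l1): ask the DataBase for another path (a real implementation
            # would exclude all previously tried paths, not only the last one)
            path = database.alternative_path(service, exclude=path)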
It has to be noted that many networks have their own automatic restoration procedures, for example when links are cut. In those cases, the system (10) becomes aware of the situation in steps (c1) or (g1), since the DataBase (36) already provides the information that one or several specific services have been automatically restored to a backup path. In such an event, the QMM system (10) ensures that other services, possibly not able to be automatically recovered, are not affected either. The operation for them is equivalent to what has already been described in the use case of
At this stage, the QMM system (10) has a clear vision of which services can be affected by the degradation, so it triggers (h2) MPLS signaling (35) mechanisms to initiate the protection mechanisms for each of those services. Results of the restoration procedures are provided in step (i2). Finally, the QMM system (10) needs to check the correct operation of all the restored services, so it triggers on-demand monitoring mechanisms (j2) either at the application layer (20), if possible depending on the availability of such tools at the different customers' premises, or via MPLS OAM (34), which is always available. Results from testing are provided in step (k2). In case some of them are unsatisfactory, the QMM system (10) consults again (l2) the DataBase (36) for alternative paths for those services and executes in a loop, if required, steps (h2) to (k2) for those alternative paths. In networks with their own automatic restoration procedures, the QMM system (10) can only become aware of such a situation in step (g2); in those cases, the system (10) duties are restricted to those services which cannot be automatically recovered. The operation for them is equivalent to what has already been described in the use case of
There is another possible use case in which an alarm arrives at the QMM system (10) from the passive traffic analyzer (32). In this case, the specific workflow is equivalent to the one in the previous case, depicted in
For alarms coming from tools executed between network nodes (31) which are directly connected, the operation is similar to the use cases shown in
For alarms coming from tools executed between network nodes (31) which are not directly connected, the operation is very similar to the use case shown in
The QMM system (10) receives an alarm (a3) from the MPLS OAM (34) tools. There is no need to consult the network path in this case, i.e., avoiding steps from (b) to (c) in the basic flow of
It must be noted that the desired behavior for operators is to be in the “correct operation” zone, and that unexpected traffic growths affect their networks only in the sense that they temporarily enter the “potentially conflictive” zone. Stable traffic growth due to, for example, an increase in the number of customers or in the number of offered services should be handled via other methods, such as investment in new equipment or revised network planning. It must also be noted that the definition of the threshold between zones is operator-dependent and is outside the scope of this invention.
To avoid network congestion, the QMM system (10) initially uses the passive traffic analyzer (32) for passive monitoring, thus not consuming network bandwidth to detect “potentially conflictive” situations. The SNMP protocol, for example, can be used to monitor network bandwidth until a certain threshold is surpassed. At that moment, faster and more precise monitoring is needed, and it is provided via MPLS OAM (34) tools within the network segment which is “potentially conflictive”.
This type of monitoring is intended to detect and locate “critical” situations very quickly: since the network segment to monitor has been greatly reduced, the bandwidth consumption problem is strictly controlled, and the amount of monitoring packets that can be injected can be high enough to ensure adequate performance.
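As an illustration of the zone classification, link utilization can be derived from periodic counter samples (e.g. SNMP interface octet counters) and compared against the operator-configured thresholds, as in the following Python sketch; the threshold values below are merely assumed examples:

POTENTIALLY_CONFLICTIVE_THRESHOLD = 70.0  # % of link capacity (assumed value)
CRITICAL_THRESHOLD = 90.0                 # % of link capacity (assumed value)

def utilization_percent(octets_t0, octets_t1, interval_s, link_bps):
    """Compute link utilization from two octet-counter samples taken interval_s
    seconds apart (counter wrap-around is ignored for brevity)."""
    return 100.0 * (octets_t1 - octets_t0) * 8 / (interval_s * link_bps)

def operation_zone(utilization):
    """Map a utilization percentage to one of the three operation zones."""
    if utilization >= CRITICAL_THRESHOLD:
        return "critical"
    if utilization >= POTENTIALLY_CONFLICTIVE_THRESHOLD:
        return "potentially conflictive"
    return "correct operation"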
The passive monitoring tools of the passive traffic analyzer (32) are continuously measuring the network traffic, and in case they measure bandwidths that surpass the specified threshold for “potentially conflictive” situations, they generate an alarm (a4) to the QMM system (10), as shown in the flow chart of
It may happen that the threshold towards “critical” situations is never surpassed. Then, eventually, the passive traffic analyzer (32) that is still running can detect that the network segment has gone back to the “correct operation” zone and announces this to the QMM system (10), which in turn stops the active monitoring of the MPLS OAM (34) tools.
In case the threshold towards “critical” situations is surpassed, the MPLS OAM (34) tools announce it (e4) to the QMM system (10), which in turn starts a procedure, steps (f4) to (l4), similar to the one in other use cases, for example the use case of receiving alarms from MPLS OAM (34) shown in
Finally, the passive traffic analyzer (32) can eventually determine “correct operation”, and then it is possible to migrate the services back to the original paths, once again without any traffic loss.
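The proactive behaviour described in this use case can be summarized, again only as a hypothetical Python sketch, as a set of actions attached to the zone transitions:

def on_zone_change(qmm, segment, new_zone):
    """Illustrative reactions of the QMM system to a change of operation zone."""
    if new_zone == "potentially conflictive":
        # Passive threshold surpassed: start precise active (MPLS OAM) monitoring,
        # restricted to the affected segment so the bandwidth impact stays contained.
        qmm.oam.start_active_monitoring(segment)
    elif new_zone == "critical":
        # Congestion confirmed: move traffic away without losses (make-before-break)
        # and afterwards verify the restored services.
        qmm.restore_services(segment)
    else:  # back to "correct operation"
        # Stop the heavy monitoring and, if traffic had been moved away,
        # migrate the services back to their original paths.
        qmm.oam.stop_active_monitoring(segment)
        qmm.revert_services(segment)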
Internal Modules of the QMM System (10)
Computation Module (100), CM: constitutes the brain and intelligence of the system (10) and is in charge of coordinating all the executed operations in the different possible use cases, as described before. In particular:
The Service Layer, Network Layer, DDBB and Operator COMM modules (101, 102, 103, 105) and the Signaling Scheduler (104) module interface external systems. The common objective of these modules (101, 102, 103, 104, 105) is to hide from the QMM processing modules the particular details of potentially different implementations of the external interfaces, unifying the communications towards the inner modules. For example, the System Database (36) can be implemented using different technologies, and thus the DDBB-DBCOMM interface (203) can present different technical implementations, all giving support to the same set of requirements. The DDBB COMM module (103) is then in charge of translating the different message formats, providing unified messages over the CM-DBCOMM interface (212).
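The adapter role of the COMM modules can be illustrated with the following Python sketch, in which a database response, whatever the backing technology, is normalised into a single internal format (field names and technologies are assumptions, not part of the invention):

def to_unified_db_response(raw, technology):
    """DDBB COMM (103): translate a technology-specific database answer into the
    unified message expected over the CM-DBCOMM interface (212)."""
    if technology == "sql":
        affected = [row["service_id"] for row in raw]   # e.g. a list of DB rows
    elif technology == "rest_json":
        affected = raw.get("affected_services", [])     # e.g. a JSON document
    else:
        raise ValueError("unsupported database backend: " + technology)
    return {"type": "db_response", "affected_services": affected}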
Service Layer COMM (101), SLCOMM: interfaces the Service Support System (21) to receive alarms or to request active testing at the service layer. Received alarms are then sent to the Alarm Management & Correlation module (106), while active test triggering is done at the Active Monitoring Trigger module (107).
Network Layer COMM (102), NLCOMM: interfaces the network nodes to receive alarms from different external systems: i) Physical Layer Monitoring (33), ii) Passive Traffic Analyzer (32) and/or iii) MPLS OAM (34). It may also request active MPLS OAM testing or an on-demand passive poll. Received alarms are sent to the Alarm Management & Correlation module (106), which also triggers the passive on-demand poll. On the other hand, active test triggering is done at the Active Monitoring Trigger module (107).
DDBB COMM (103), DBCOMM: interfaces the System Database (36) to receive information regarding the network/service status or regarding new paths over which to provision restored services. This information is requested by the Computation Module (100). The Computation Module (100) can also populate, via this module, the System Database (36) with network/service status changes that the QMM system (10) has detected.
Signaling Scheduler (104), SS: interfaces the MPLS Signaling (35) functionalities available in the network to permit restoration procedures, at the request of the Computation Module (100). These functionalities, in the simplest implementation, could be accessed via a management network using the network nodes Command Line Interface or CLI. Alternative more sophisticated solutions providing equivalent features are valid.
Operator COMM (105), OCOMM: provides an interface for the operator (700) to configure both the priority levels of the different alarms that can be received and the thresholds between the operation zones for the use case in which the QMM system (10) operates proactively; these values are stored in the Configuration module (109). Its external interface also permits the operator (700) to consult information about the alarms that have occurred and the actions performed, information coming from the Logs Storage module (110).
The rest of the processing internal modules of the QMM system (10) are:
Alarm Management & Correlation (106), AMC: this module is in charge of processing the different alarms received from the external modules via the Service Layer and Network Layer COMM modules (101, 102). Upon reception of an alarm, it determines the priority according to the values provided by the Configuration module (109) and executes the correlation algorithm associated with that priority (basically, it checks for alarms with lower priority making reference to the same fault). The grouped alarms are then sent to the Computation Module (100), so it can start the procedures stated in the use case descriptions. The correlation process is governed by a Synchronization Clock (108), ensuring that alarms separated in time are treated differently. Operation of this module for a specific alarm may be delayed in case an alarm with a higher priority arrives, if it is not capable of treating them in parallel. Finally, the Alarm Management & Correlation module (106) is also in charge of polling the external Passive Traffic Analyzer (32), via the Network Layer COMM module (102), as requested by the Computation Module (100), for the on-demand operation mode of the passive monitoring tools.
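A minimal Python sketch of the correlation step is given below, assuming that each alarm handed over within a Synchronization Clock window carries the affected network element and the priority assigned from the configuration (both names are illustrative assumptions):

from collections import defaultdict

def correlate(window_alarms):
    """Group the alarms collected during one Synchronization Clock window that
    refer to the same fault (identified here by the affected network element);
    only the highest-priority alarm of each group drives further processing."""
    groups = defaultdict(list)
    for alarm in window_alarms:
        groups[alarm.affected_element].append(alarm)
    correlated = []
    for element, group in groups.items():
        group.sort(key=lambda a: a.priority)          # lowest value = highest priority
        correlated.append({"element": element,
                           "primary": group[0],       # forwarded to the Computation Module
                           "correlated": group[1:]})  # lower-priority references to the same fault
    return correlated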
Active Monitoring Trigger (107), AMT: this module is in charge of prompting the active tests available in the external systems, in particular, in the Service Support System (21) for tests at the service layer or using the MPLS OAM (34) tools of the network nodes. Communication with the former is made across the Service Layer COMM module (101), while the Network Layer COMM module (102) permits communication with the latter. The execution of external active tests is requested by the Computation Module (100), and results are provided back by the Active Monitoring Trigger (107).
Synchronization Clock (108), SC: it provides the clock for the synchronization of the correlation procedures carried out at the Alarm Management & Correlation module (106).
Configuration (109), CONF: it stores the configuration parameters provided by the operator, namely the priority values to be assigned to each of the alarms that can potentially be received, and the two thresholds separating the operation zones in the use case in which the QMM system (10) operates proactively. The first set of parameters is forwarded to the Alarm Management & Correlation module (106), while the second is forwarded to the Computation Module (100).
Logs Storage (110), LS: it stores information about occurred alarms and executed associated corrective actions, information which is provided by the Computation Module (100), prior to its presentation to the operator (700), via the Operator COMM module (105).
Internal Interfaces of the QMM System (10)
SLCOMM-AMC Interface (206) and NLCOMM-AMC Interface (207):
Both interfaces share the same procedure: To forward all the alarms received from the external monitoring systems towards the Alarm Management & Correlation module (106). The format of the messages differs depending on the specific external module generating the alarm, since different types of information are available in each case; in particular, whenever the “fault location” information is available, it should be added to the message body. The response message from the Alarm Management & Correlation module (106) is an acknowledgement of reception.
Moreover, the NLCOMM-AMC Interface (207) also permits another procedure: The Alarm Management & Correlation module (106) to request a certain type of external passive measurement at the network nodes. The request message must include: i) the network node/interface where the measurement should be done, ii) the type of measurement to be done, e.g., consumed bandwidth, and iii) for how long or for how many repetitions it should be done. The input for the last parameter could be in the form of “until a certain threshold is surpassed”, as required by the use case in which the QMM system (10) operates proactively. The response message from the Network Layer COMM module (102) provides the result of the requested measurement.
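Purely as an illustration, the request and response exchanged in this procedure could be structured as in the following Python sketch (field names are assumptions; any equivalent encoding is valid):

from dataclasses import dataclass
from typing import Optional

@dataclass
class PassiveMeasurementRequest:
    node_or_interface: str                  # i) where the measurement should be done
    measurement_type: str                   # ii) e.g. "consumed_bandwidth"
    duration_s: Optional[float] = None      # iii) for how long ...
    repetitions: Optional[int] = None       # ... or how many repetitions
    stop_threshold: Optional[float] = None  # "until a certain threshold is surpassed"

@dataclass
class PassiveMeasurementResponse:
    request: PassiveMeasurementRequest
    value: float                            # result of the requested measurement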
SLCOMM-AMT Interface (208) and NLCOMM-AMT Interface (209): Both interfaces share the same procedure:
The Active Monitoring Trigger module (107) to request a certain type of external active measurement, either from the Service Support System (21) or from the MPLS OAM (34) mechanisms of the network nodes (31). The request message must include: i) the specific service (in the case of service layer monitoring) or the network segment/node/interface (in the case of network layer monitoring) to be tested, ii) the type of measurement to be done, e.g., experienced delay, and iii) for how long or for how many repetitions it should be done. The input for the last parameter can be in the form of “until a certain threshold is surpassed”, as required by the use case in which the QMM system (10) operates proactively. The response messages from the Service Layer and Network Layer COMM modules (101, 102) provide the result of the requested measurement.
CM-AMC Interface (210): It permits two procedures:
The Alarm Management & Correlation module (106) to send sets of correlated alarms to the Computation Module (100). The format of these messages differs depending on the specific external module generating the alarm, as stated also for the SLCOMM-AMC and NLCOMM-AMC Interfaces (206, 207). The response message from the Computation Module (100) is an acknowledgement of reception.
The Computation Module (100) to request a certain type of external passive measurement from the Alarm Management & Correlation module (106). The format of the request and response messages should match a scheme equivalent to that of the second procedure in the NLCOMM-AMC Interface (207).
CM-AMT Interface (211): It permits one procedure:
The Computation Module (100) to request a certain type of external active measurement from the Active Monitoring Trigger module (107). The request message includes the same information as for the SLCOMM-AMT (208) or NLCOMM-AMT (209) interfaces, with an additional field to specify the external element that carries out the measurement, i.e., whether it needs to be handled by the application layer probes or by the MPLS OAM (34) mechanisms. The response message from the Active Monitoring Trigger module (107) provides the result of the requested measurement.
CM-DBCOMM Interface (212): It permits four types of procedures, three requests from the Computation Module (100) to the DBCOMM module (103), and one informational, in the same direction:
CM-SS Interface (213): It permits one procedure:
The Computation Module (100) to request a restoration operation from the Signaling Scheduler (104). The request message must include: i) the specific service(s) which need to be restored, and ii) the network path over which these services should be restored. It must be noted, therefore, that services can be grouped into a single request when they share the same new path. Services affected by the same fault, but restored over different paths, generate different requests on this interface. The response from the SS module (104) includes the result of the restoration operation (successfully accomplished or not, and the reason in the latter case).
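An illustrative encoding of this restoration request and its response is sketched below in Python (field names are assumptions; services restored over different paths would generate separate requests):

from dataclasses import dataclass
from typing import List, Optional

@dataclass
class RestorationRequest:
    services: List[str]            # i) services to restore, all sharing the same new path
    new_path: List[str]            # ii) ordered nodes/links of the path to restore over

@dataclass
class RestorationResponse:
    success: bool                  # restoration successfully accomplished or not
    reason: Optional[str] = None   # failure reason when success is False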
OCOMM-CONF Interface (214): It permits two procedures:
The Operator COMM module (105) to store in the Configuration module (109) the priority values set by the operator (700) for the different external alarms available in the monitoring system. The message includes a unique integer value for each type of alarm, and the response is an acknowledgement of reception.
The Operator COMM module (105) to store in the Configuration module (109) the two threshold values separating the three operation zones defined in the use case in which the QMM system (10) operates proactively. The message includes two values between 0 and 100, corresponding to the link bandwidth usage values that separate such zones. The response is an acknowledgement of reception.
CONF-AMC Interface (215): It permits one procedure:
The Configuration Module (109) to store in the Alarm Management & Correlation module (106) the priority values of the different types of alarms that the system can receive, values which are configurable by the operator (700). In other words, this is a sort of relay of the first procedure in the OCOMM-CONF Interface (214). Response is an acknowledgement of reception.
CONF-CM Interface (216): It permits one procedure:
The Configuration Module (109) to store in the Computation Module (100) the threshold values that define the operation zones (for the use case in which the QMM system (10) operates proactively), values which are configurable by the operator (700). The message includes two values, one separating the “correct operation” and “potentially conflictive” zones, and the other separating the latter from the “critical” zone. Again, it is a sort of relay, in this case of the second procedure in the OCOMM-CONF Interface (214). Response is an acknowledgement of reception.
OCOMM-LS Interface (217): it permits one procedure:
The Operator COMM module (105) to request from the Logs Storage module (110) the information that permits having a clear knowledge of what events have happened and which corrective actions have been taken by the QMM system (10), at the request of the operator (700). Response is a list of events and associated actions.
LS-CM Interface (218): it permits one procedure:
The Computation Module (100) to store in the Logs Storage module (110) all the information required by the operators (700), as stated in the OCOMM-LS Interface (217). Response is an acknowledgement of reception.
SC-AMC Interface (219): It permits one procedure:
The Synchronization Clock (108) to provide the timing for the correlation procedures in the Alarm Management & Correlation module (106). This is a continuous clock signal with no specific messages being interchanged.
External Interfaces of the QMM System (10)
External interfaces are interfaces permitting communication with external systems that may present many different kinds of interface implementations. In this way, the internal procedures of the QMM system (10) are isolated from the details of the external systems' implementation technologies and share unified message formats. Thus, a new interface implementation from an external module only demands modifications in the COMM modules and interfaces of the QMM system (10).
SSS-SLCOMM Interface (201): is the source of the service layer alarms relayed by SLCOMM-AMC interface (206), and relays the active service layer measurement requests coming from SLCOMM-AMT interface (208).
NN-NLCOMM interface (202): is the source of the network layer alarms relayed by NLCOMM-AMC interface (207), and relays the passive and active network layer measurement requests coming from NLCOMM-AMC and NLCOMM-AMT interfaces (207, 209).
DDBB-DBCOMM Interface (203): relays the requests and informational messages coming from CM-DBCOMM interface (212).
MPLS Sig-SS Interface (204): relays the requests coming from CM-SS interface (213).
Operator-OCOMM Interface (205): is the source of the configurable parameters relayed through the OCOMM-CONF interface (214), and of the requests from the operator (700) for logs information, relayed through the OCOMM-LS interface (217).
Note that in this text, the term “comprises” and its derivations (such as “comprising”, etc.) should not be understood in an excluding sense, that is, these terms should not be interpreted as excluding the possibility that what is described and defined may include further elements, steps, etc.