Various example embodiments relate to performance related management of artificial intelligence or machine learning pipelines in cross-domain scenarios. More specifically, various example embodiments exemplarily relate to measures (including methods, apparatuses and computer program products) for realizing performance related management of artificial intelligence or machine learning pipelines in cross-domain scenarios.
The present specification generally relates to artificial intelligence (AI)/machine learning (ML) pipelines in cross-domain scenarios and the management thereof in particular for interoperable and multi-vendor environments.
An AI or ML pipeline helps to automate AI/ML workflows by splitting them into independent, reusable and modular components that can then be pipelined together to create a (trained) (AI/ML) model. An AI/ML pipeline is not a one-way flow, i.e., it is iterative, and every step is repeated to continuously improve the accuracy of the model and achieve a successful algorithm.
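As a non-limiting illustration of this modularity and iterativeness, the following sketch (in Python, with placeholder component names that are not part of the present disclosure) chains a few independent, reusable workflow components and re-runs them iteratively:

```python
from typing import Any, Callable, List

# A minimal, illustrative sketch (not taken from the disclosure): each
# component is an independent, reusable step; the pipeline chains the steps
# and can be re-run iteratively to keep improving the model.
Step = Callable[[Any], Any]

def run_pipeline(steps: List[Step], data: Any, iterations: int = 3) -> Any:
    """Run the chained steps repeatedly, feeding each result into the next step."""
    artifact = data
    for _ in range(iterations):      # iterative, not a one-way flow
        for step in steps:           # independent, modular components
            artifact = step(artifact)
    return artifact

# Hypothetical components of an AI/ML workflow (placeholder names only).
def collect_and_prepare(data):
    return {"features": data}

def train_model(prepared):
    return {"model": "trained", "on": prepared}

def evaluate(trained):
    return trained  # evaluation results would feed the next iteration

result = run_pipeline([collect_and_prepare, train_model, evaluate], data=[1, 2, 3])
```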
An AI/ML workflow might consist of at least the following three components illustrated in
With AI/ML pipelining and the recent push for microservices architectures (e.g., containers or container virtualization), each AI/ML workflow component is abstracted into an independent service that relevant stakeholders (e.g., data engineers, data scientists) can independently work on.
Besides, an AI/ML pipeline orchestrator shown in
Subsequently, some basics of trustworthy artificial intelligence are explained.
For AI/ML systems to be widely accepted, they should be trustworthy in addition to meeting performance requirements (e.g., accuracy). The High-level Expert Group (HLEG) on AI has developed the European Commission's Trustworthy AI (TAI) strategy.
In April 2021, the European Commission presented the EU Artificial Intelligence Act or the regulatory framework for AI by setting out horizontal rules for the development, commodification and use of AI-driven products, services and systems within the territory of the EU. The Act seeks to codify the high standards of the EU Trustworthy AI paradigm, which requires AI to be legally, ethically and technically robust, while respecting democratic values, human rights and the rule of law. The draft regulation provides seven critical Trustworthy AI requirements for high-risk AI systems that apply to all industries:
Additionally, International Organization for Standardization (ISO)/International Electrotechnical Commission (IEC) has also published a technical report on ‘Overview of trustworthiness in artificial intelligence’. Early efforts in the open-source community are also visible towards developing TAI frameworks/tools/libraries such as IBM AI360, Google Explainable AI and TensorFlow Responsible AI.
However, while such knowledge in relation to the trustworthiness of AI/ML exists, no approaches for implementing control and evaluation of the performance of AI/ML pipelines in cross-domain management and orchestration architectures are known.
Hence, the problem arises that control and evaluation of the performance of AI/ML pipelines in cross-domain scenarios, in particular for interoperable and multi-vendor environments, is to be provided.
Hence, there is a need to provide for performance related management of artificial intelligence or machine learning pipelines in cross-domain scenarios.
Various example embodiments aim at addressing at least part of the above issues and/or problems and drawbacks.
Various aspects of example embodiments are set out in the appended claims.
According to an exemplary aspect, there is provided a method of a first network entity managing artificial intelligence or machine learning pipelines in a plurality of network domains including a first network domain in a network, the method comprising transmitting a first artificial intelligence or machine learning performance related message towards a second network entity managing lifecycles of artificial intelligence or machine learning pipelines in said first network domain in said network, and receiving a second artificial intelligence or machine learning performance related message from said second network entity, wherein said first artificial intelligence or machine learning performance related message comprises a first information element including at least one first cross-domain network service involved artificial intelligence or machine learning pipeline performance related parameter.
According to an exemplary aspect, there is provided a method of a second network entity managing lifecycles of artificial intelligence or machine learning pipelines in a first network domain in a network, the method comprising receiving a first artificial intelligence or machine learning performance related message from a first network entity managing artificial intelligence or machine learning pipelines in a plurality of network domains including said first network domain in said network, and transmitting a second artificial intelligence or machine learning performance related message towards said first network entity, wherein said first artificial intelligence or machine learning performance related message comprises a first information element including at least one first cross-domain network service involved artificial intelligence or machine learning pipeline performance related parameter.
According to an exemplary aspect, there is provided a method of a third network entity responsible for fulfillment of network operator specifications in a first network domain in a network, the method comprising receiving a third artificial intelligence or machine learning performance related message from a first network entity managing artificial intelligence or machine learning pipelines in a plurality of network domains including said first network domain in said network, and transmitting a fourth artificial intelligence or machine learning performance related message towards said first network entity, wherein said third artificial intelligence or machine learning performance related message comprises a third information element including at least one third cross-domain network service involved artificial intelligence or machine learning pipeline performance related parameter, said third artificial intelligence or machine learning performance related message is a cross-domain performance configuration request, and said fourth artificial intelligence or machine learning performance related message is a cross-domain performance configuration response.
According to an exemplary aspect, there is provided an apparatus of a first network entity managing artificial intelligence or machine learning pipelines in a plurality of network domains including a first network domain in a network, the apparatus comprising transmitting circuitry configured to transmit a first artificial intelligence or machine learning performance related message towards a second network entity managing lifecycles of artificial intelligence or machine learning pipelines in said first network domain in said network, and receiving circuitry configured to receive a second artificial intelligence or machine learning performance related message from said second network entity, wherein said first artificial intelligence or machine learning performance related message comprises a first information element including at least one first cross-domain network service involved artificial intelligence or machine learning pipeline performance related parameter.
According to an exemplary aspect, there is provided an apparatus of a second network entity managing lifecycles of artificial intelligence or machine learning pipelines in a first network domain in a network, the apparatus comprising receiving circuitry configured to receive a first artificial intelligence or machine learning performance related message from a first network entity managing artificial intelligence or machine learning pipelines in a plurality of network domains including said first network domain in said network, and transmitting circuitry configured to transmit a second artificial intelligence or machine learning performance related message towards said first network entity, wherein said first artificial intelligence or machine learning performance related message comprises a first information element including at least one first cross-domain network service involved artificial intelligence or machine learning pipeline performance related parameter.
According to an exemplary aspect, there is provided an apparatus of a third network entity responsible for fulfillment of network operator specifications in a first network domain in a network, the apparatus comprising receiving circuitry configured to receive a third artificial intelligence or machine learning performance related message from a first network entity managing artificial intelligence or machine learning pipelines in a plurality of network domains including said first network domain in said network, and transmitting circuitry configured to transmit a fourth artificial intelligence or machine learning performance related message towards said first network entity, wherein said third artificial intelligence or machine learning performance related message comprises a third information element including at least one third cross-domain network service involved artificial intelligence or machine learning pipeline performance related parameter, said third artificial intelligence or machine learning performance related message is a cross-domain performance configuration request, and said fourth artificial intelligence or machine learning performance related message is a cross-domain performance configuration response.
According to an exemplary aspect, there is provided an apparatus of a first network entity managing artificial intelligence or machine learning pipelines in a plurality of network domains including a first network domain in a network, the apparatus comprising at least one processor, at least one memory including computer program code, and at least one interface configured for communication with at least another apparatus, the at least one processor, with the at least one memory and the computer program code, being configured to cause the apparatus to perform transmitting a first artificial intelligence or machine learning performance related message towards a second network entity managing lifecycles of artificial intelligence or machine learning pipelines in said first network domain in said network, and receiving a second artificial intelligence or machine learning performance related message from said second network entity, wherein said first artificial intelligence or machine learning performance related message comprises a first information element including at least one first cross-domain network service involved artificial intelligence or machine learning pipeline performance related parameter.
According to an exemplary aspect, there is provided an apparatus of a second network entity managing lifecycles of artificial intelligence or machine learning pipelines in a first network domain in a network, the apparatus comprising at least one processor, at least one memory including computer program code, and at least one interface configured for communication with at least another apparatus, the at least one processor, with the at least one memory and the computer program code, being configured to cause the apparatus to perform receiving a first artificial intelligence or machine learning performance related message from a first network entity managing artificial intelligence or machine learning pipelines in a plurality of network domains including said first network domain in said network, and transmitting a second artificial intelligence or machine learning performance related message towards said first network entity, wherein said first artificial intelligence or machine learning performance related message comprises a first information element including at least one first cross-domain network service involved artificial intelligence or machine learning pipeline performance related parameter.
According to an exemplary aspect, there is provided an apparatus of a third network entity responsible for fulfillment of network operator specifications in a first network domain in a network, the apparatus comprising at least one processor, at least one memory including computer program code, and at least one interface configured for communication with at least another apparatus, the at least one processor, with the at least one memory and the computer program code, being configured to cause the apparatus to perform receiving a third artificial intelligence or machine learning performance related message from a first network entity managing artificial intelligence or machine learning pipelines in a plurality of network domains including said first network domain in said network, and transmitting a fourth artificial intelligence or machine learning performance related message towards said first network entity, wherein said third artificial intelligence or machine learning performance related message comprises a third information element including at least one third cross-domain network service involved artificial intelligence or machine learning pipeline performance related parameter, said third artificial intelligence or machine learning performance related message is a cross-domain performance configuration request, and said fourth artificial intelligence or machine learning performance related message is a cross-domain performance configuration response.
According to an exemplary aspect, there is provided a computer program product comprising computer-executable computer program code which, when the program is run on a computer (e.g. a computer of an apparatus according to any one of the aforementioned apparatus-related exemplary aspects of the present disclosure), is configured to cause the computer to carry out the method according to any one of the aforementioned method-related exemplary aspects of the present disclosure.
Such computer program product may comprise (or be embodied as) a (tangible) computer-readable (storage) medium or the like on which the computer-executable computer program code is stored, and/or the program may be directly loadable into an internal memory of the computer or a processor thereof.
Any one of the above aspects enables an efficient control and evaluation of performance of AI/ML pipelines in cross-domain scenarios in particular for interoperable and multi-vendor environments to thereby solve at least part of the problems and drawbacks identified in relation to the prior art.
By way of example embodiments, there is provided performance related management of artificial intelligence or machine learning pipelines in cross-domain scenarios. More specifically, by way of example embodiments, there are provided measures and mechanisms for realizing performance related management of artificial intelligence or machine learning pipelines in cross-domain scenarios.
Thus, improvement is achieved by methods, apparatuses and computer program products enabling/realizing performance related management of artificial intelligence or machine learning pipelines in cross-domain scenarios.
In the following, the present disclosure will be described in greater detail by way of non-limiting examples with reference to the accompanying drawings, in which
The present disclosure is described herein with reference to particular non-limiting examples and to what are presently considered to be conceivable embodiments. A person skilled in the art will appreciate that the disclosure is by no means limited to these examples, and may be more broadly applied.
It is to be noted that the following description of the present disclosure and its embodiments mainly refers to specifications being used as non-limiting examples for certain exemplary network configurations and deployments. Namely, the present disclosure and its embodiments are mainly described in relation to 3GPP specifications being used as non-limiting examples for certain exemplary network configurations and deployments. As such, the description of example embodiments given herein specifically refers to terminology which is directly related thereto. Such terminology is only used in the context of the presented non-limiting examples, and does naturally not limit the disclosure in any way. Rather, any other communication or communication related system deployment, etc. may also be utilized as long as compliant with the features described herein.
Hereinafter, various embodiments and implementations of the present disclosure and its aspects or embodiments are described using several variants and/or alternatives. It is generally noted that, according to certain needs and constraints, all of the described variants and/or alternatives may be provided alone or in any conceivable combination (also including combinations of individual features of the various variants and/or alternatives).
According to example embodiments, in general terms, there are provided measures and mechanisms for (enabling/realizing) performance related management of artificial intelligence or machine learning pipelines in cross-domain scenarios.
A framework for TAI in cognitive autonomous networks (CAN) underlies example embodiments.
As shown in
Considering that the AI pipelines deployed in the network may belong to multiple vendors, according to example embodiments, application programming interfaces (API) are exposed by the vendor-specific AI pipelines (without compromising the vendor's intellectual property rights) towards the AI pipeline orchestrator and the AI trust engine to discover the performance and trust capabilities of the AI pipeline, configure the AI pipeline according to the required AI QoS and AI QoT requirements, and to monitor/collect AI performance and AI trust related metrics from the AI pipeline.
To this end, according to the trustworthy AI/ML framework for CANs underlying example embodiments, APIs required for the AI trust engine to discover the AI trustworthiness capabilities via the AI trust manager of the AI pipeline, to configure the AI pipeline according to the required AI QoT via the AI trust manager, and to monitor/collect AI trustworthiness metrics and/or AI explanations related to the AI pipeline via the AI trust manager may be provided. Further, according to the trustworthy AI/ML framework for CANs underlying example embodiments, APIs required for the AI pipeline orchestrator to discover the performance capabilities of the AI pipeline via the AI performance manager of the AI pipeline, to (re)configure the AI pipeline according to the required AI QoS via the AI performance manager, and to monitor/collect AI performance metrics related to the AI pipeline via the AI performance manager may be provided.
Example embodiments are outlined considering such cross-domain management and orchestration architecture as shown in
In the illustrated cross-domain E2E network service example scenario, the cross-domain service management domain (CDSMD) (e.g., E2E service management domain) is responsible for decomposing the cross-domain E2E network service request (as per the service level agreement (SLA)), received from the network operator or the customer (via the cross-domain policy/intent manager), into domain-specific (e.g., RAN, transport, core) network resource/service requirements, and for communicating them to the corresponding individual management domains (MD). Then, the individual MDs are responsible for ensuring that the domain-specific resource/service requirements are fulfilled, within their corresponding domains, by continuously monitoring the resource/service related key performance indicators (KPI) and reporting them to the CDSMD.
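As a non-limiting, hypothetical illustration of such decomposition, the following sketch shows how a CDSMD might split E2E SLA targets into per-domain requirements for the individual MDs; all class and field names, and the naive equal split, are assumptions made for illustration only:

```python
from dataclasses import dataclass
from typing import Dict, List

# Hypothetical sketch only: how a CDSMD might decompose a cross-domain E2E
# network service request (as per the SLA) into domain-specific requirements
# for the individual management domains (MDs).

@dataclass
class E2EServiceRequest:
    service_id: str
    sla_kpi_targets: Dict[str, float]        # e.g. {"e2e_latency_ms": 30.0}

@dataclass
class DomainRequirement:
    domain: str                               # e.g. "RAN", "transport", "core"
    kpi_targets: Dict[str, float]

def decompose(request: E2EServiceRequest, domains: List[str]) -> List[DomainRequirement]:
    """Split the E2E SLA targets into per-domain budgets (equal split for brevity)."""
    return [
        DomainRequirement(
            domain=d,
            kpi_targets={k: v / len(domains) for k, v in request.sla_kpi_targets.items()},
        )
        for d in domains
    ]

# Each MD would then monitor its KPIs and report them back to the CDSMD.
requirements = decompose(E2EServiceRequest("svc-1", {"e2e_latency_ms": 30.0}),
                         ["RAN", "transport", "core"])
```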
In the illustrated cross-domain E2E network service example scenario, the requested/instantiated cross-domain E2E network service, e.g. covering RAN, transport and core domains, may be managed by their corresponding AI pipelines (or cognitive network functions (CNF)) in the respective MDs. It is to be noted that, depending on the use case, the AI pipeline may be instantiated either in the domain-specific MDs (e.g., for proactive resource autoscaling) or within the domain itself (e.g., for proactive mobility handover in RAN domain).
Leveraging the domain-specific AI pipeline orchestrator and the AI pipeline-specific AI performance manager of the trustworthy AI/ML framework for CANs underlying example embodiments, the AI pipeline performance for the domain-specific AI pipelines may be defined, configured, measured and reported within the corresponding MD.
However, there is no way for the cross-domain AI pipeline orchestrator (within the CDSMD) to receive the desired cross-domain AI QoS (i.e., as defined by the cross-domain policy/intent manager).
Consequently, there is no way for the CDSMD to
In addition thereto, there is no way for the CDSMD to address (e.g., by performing root-cause analysis) the AI performance related escalations, belonging to a cross-domain E2E network service, potentially received from the domain-specific AI pipeline orchestrator(s), and there is no way to delegate the relevant AI performance related escalation information potentially received from the domain-specific AI pipeline orchestrator of one MD to another MD so that the other MD may take preventive measures to avoid cross-domain E2E network service SLA violations (in the considered case: cross-domain AI QoS). Moreover, there is also no way for the CDSMD to aggregate the AI performance related escalation metrics potentially received from the AI pipeline orchestrator(s) of individual MDs to provide a global view of an issue (in the considered case: cross-domain AI QoS violations) to the network operator or the customer.
Even such a cross-domain management and orchestration architecture with a cross-domain AI trust engine (potentially providing cross-domain trust APIs between a domain-specific AI trust engine and the cross-domain AI trust engine) does not foresee cross-domain performance related APIs between a domain-specific AI pipeline orchestrator and a cross-domain AI pipeline orchestrator.
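As a non-limiting, hypothetical sketch of the aggregation and delegation functionality addressed by example embodiments, the following illustrates how escalation metrics received from individual MDs could be grouped into a global per-service view and delegated to the other MDs; all names and structures are assumptions made for illustration only:

```python
from collections import defaultdict
from typing import Dict, List, Tuple

# Hypothetical sketch only: aggregating AI performance related escalation
# metrics received from the AI pipeline orchestrators of individual MDs into
# a global per-service view, and delegating an escalation to the other MDs
# serving the same cross-domain E2E network service.

def aggregate_escalations(escalations: List[Dict]) -> Dict[str, Dict[str, List[Tuple[str, float]]]]:
    """Group escalation metric values by service and metric, keeping the originating domain."""
    view: Dict[str, Dict[str, List[Tuple[str, float]]]] = defaultdict(lambda: defaultdict(list))
    for e in escalations:
        view[e["service_id"]][e["metric"]].append((e["domain"], e["value"]))
    return view

def delegate(escalation: Dict, service_domains: List[str]) -> List[str]:
    """MDs (other than the originator) that may take preventive measures."""
    return [d for d in service_domains if d != escalation["domain"]]

global_view = aggregate_escalations(
    [{"service_id": "svc-1", "domain": "RAN", "metric": "ai_qos_violation", "value": 1.0}]
)
targets = delegate({"service_id": "svc-1", "domain": "RAN"}, ["RAN", "transport", "core"])
```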
In view of the above, in brief, according to example embodiments, cross-domain performance related APIs are provided between a domain-specific AI pipeline orchestrator and a cross-domain AI pipeline orchestrator in order to allow for control and evaluation of performance of AI/ML pipelines in cross-domain scenarios in particular for interoperable and multi-vendor environments.
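As a non-limiting sketch of such an API surface, the following outlines a possible producer-side interface of a domain-specific AI pipeline orchestrator, consumed by the cross-domain AI pipeline orchestrator; the method names and payload types are illustrative assumptions mirroring the capability discovery, configuration, reporting and subscription/notification operations detailed below, not a normative interface definition:

```python
from abc import ABC, abstractmethod
from typing import Callable, Dict

# Hypothetical sketch only: a possible shape for the cross-domain performance
# related APIs produced by a domain-specific AI pipeline orchestrator and
# consumed by the cross-domain AI pipeline orchestrator.

class DomainAIPipelineOrchestratorAPI(ABC):

    @abstractmethod
    def get_performance_capabilities(self, request: Dict) -> Dict:
        """Cross-domain performance capability information request/response."""

    @abstractmethod
    def configure_performance(self, request: Dict) -> Dict:
        """Cross-domain performance configuration request/response."""

    @abstractmethod
    def get_performance_report(self, request: Dict) -> Dict:
        """Cross-domain performance report request/response."""

    @abstractmethod
    def subscribe_performance(self, subscription: Dict,
                              notify: Callable[[Dict], None]) -> str:
        """Cross-domain performance subscription; notifications delivered via callback."""
```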
According to example embodiments, the cross-domain trustworthy AI/ML framework for cognitive autonomous networks underlying example embodiments is extended in order to facilitate the discovery, configuration, monitoring and reporting of cross-domain network service-related AI pipeline performance for interoperable and multi-vendor environments. A customer intent corresponding to a network service may include cross-domain AI QoS requirements in addition to the cross-domain QoT requirements, and the cross-domain TAI framework is used to ensure the fulfillment of the desired cross-domain AI QoS requirements.
As shown in
According to example embodiments, the cross-domain AI pipeline orchestrator may consequently support the following operations:
To facilitate these functionalities, according to example embodiments, the following cross-domain APIs (produced by domain-specific AI pipeline orchestrator(s) and consumed by cross-domain AI pipeline orchestrator) are provided:
Example embodiments are specified below in more detail.
As shown in
In an embodiment at least some of the functionalities of the apparatus shown in
According to further example embodiments, said first artificial intelligence or machine learning performance related message is a cross-domain performance capability information request, said second artificial intelligence or machine learning performance related message is a cross-domain performance capability information response, and said second artificial intelligence or machine learning performance related message comprises a second information element including at least one second cross-domain network service involved artificial intelligence or machine learning pipeline performance related parameter.
According to further example embodiments, said at least one first cross-domain network service involved artificial intelligence or machine learning pipeline performance related parameter includes at least one of first domain scope information indicative of said first network domain, first scope information indicative of at least one artificial intelligence or machine learning pipeline in said first network domain to which said cross-domain performance capability information request relates, first phase information indicative of at least one artificial intelligence or machine learning pipeline phase to which said cross-domain performance capability information request relates, and customer information indicative of a customer or a category of said customer for which said at least one artificial intelligence or machine learning pipeline in said first network domain to which said cross-domain performance capability information request relates is to be envisaged.
According to further example embodiments, said at least one second cross-domain network service involved artificial intelligence or machine learning pipeline performance related parameter includes at least one capability information entry, wherein each respective capability information entry of said at least one capability information entry includes at least one of second domain scope information indicative of said first network domain, second scope information indicative of an artificial intelligence or machine learning pipeline in said first network domain to which said respective capability information entry relates, second phase information indicative of at least one artificial intelligence or machine learning pipeline phase to which said respective capability information entry relates, configuration information indicative of at least one configuration option supported for said artificial intelligence or machine learning pipeline to which said respective capability information entry relates, and performance metrics information indicative of at least one performance metric supported for said at least one artificial intelligence or machine learning pipeline phase of said artificial intelligence or machine learning pipeline to which said respective capability information entry relates.
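As a non-limiting illustration, the cross-domain performance capability information request and response information elements described above may be sketched as the following data structures; the field names are assumptions derived from the listed parameters, and the optional fields reflect the "at least one of" wording:

```python
from dataclasses import dataclass, field
from typing import List, Optional

# Hypothetical sketch of the described information elements; field names are
# illustrative assumptions only.

@dataclass
class CrossDomainPerfCapabilityRequest:
    domain_scope: Optional[str] = None            # first network domain
    pipeline_scope: Optional[List[str]] = None    # AI/ML pipeline(s) concerned
    phases: Optional[List[str]] = None            # e.g. "training", "inference"
    customer: Optional[str] = None                # customer or customer category

@dataclass
class CapabilityEntry:
    domain_scope: Optional[str] = None
    pipeline: Optional[str] = None
    phases: Optional[List[str]] = None
    supported_configurations: Optional[List[str]] = None       # configuration options
    supported_performance_metrics: Optional[List[str]] = None  # per-phase metrics

@dataclass
class CrossDomainPerfCapabilityResponse:
    capabilities: List[CapabilityEntry] = field(default_factory=list)
```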
According to a variation of the procedure shown in
According to a variation of the procedure shown in
According to further example embodiments, said first artificial intelligence or machine learning performance related message is a cross-domain performance configuration request, and said second artificial intelligence or machine learning performance related message is a cross-domain performance configuration response.
According to further example embodiments, said at least one first cross-domain network service involved artificial intelligence or machine learning pipeline performance related parameter includes at least one configuration entry, wherein each respective configuration entry of said at least one configuration entry includes at least one of domain scope information indicative of said first network domain, scope information indicative of an artificial intelligence or machine learning pipeline in said first network domain to which said respective configuration entry relates, phase information indicative of at least one artificial intelligence or machine learning pipeline phase to which said respective configuration entry relates, at least one of said domain-specific artificial intelligence or machine learning quality of service requirements, method trigger information indicative of at least one to-be-triggered configurable method of said artificial intelligence or machine learning pipeline to which said respective configuration entry relates, and performance metrics configuration information indicative of at least one to-be-configured performance metric for said at least one artificial intelligence or machine learning pipeline phase of said artificial intelligence or machine learning pipeline to which said respective configuration entry relates.
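As a non-limiting illustration, a configuration entry of the cross-domain performance configuration request described above may be sketched as follows; field names are assumptions derived from the listed parameters:

```python
from dataclasses import dataclass, field
from typing import Dict, List, Optional

# Hypothetical sketch of a configuration entry of the cross-domain
# performance configuration request; field names are illustrative only.

@dataclass
class PerfConfigurationEntry:
    domain_scope: Optional[str] = None
    pipeline: Optional[str] = None
    phases: Optional[List[str]] = None
    ai_qos_requirements: Dict[str, float] = field(default_factory=dict)  # domain-specific AI QoS
    trigger_methods: Optional[List[str]] = None        # configurable methods to be triggered
    metrics_to_configure: Optional[List[str]] = None   # per-phase performance metrics

@dataclass
class CrossDomainPerfConfigurationRequest:
    entries: List[PerfConfigurationEntry] = field(default_factory=list)
```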
According to a variation of the procedure shown in
According to further example embodiments, said at least one third cross-domain network service involved artificial intelligence or machine learning pipeline performance related parameter includes at least one configuration entry, wherein each respective configuration entry of said at least one configuration entry includes at least one of domain scope information indicative of said first network domain, scope information indicative of an artificial intelligence or machine learning pipeline in said first network domain to which said respective configuration entry relates, and at least one of said domain-specific artificial intelligence or machine learning quality of service requirements.
According to further example embodiments, said first artificial intelligence or machine learning performance related message is a cross-domain performance report request, said second artificial intelligence or machine learning performance related message is a cross-domain performance report response, and said second artificial intelligence or machine learning performance related message comprises a second information element including at least one second cross-domain network service involved artificial intelligence or machine learning pipeline performance related parameter.
According to further example embodiments, said at least one first cross-domain network service involved artificial intelligence or machine learning pipeline performance related parameter includes at least one of domain scope information indicative of said first network domain, scope information indicative of an artificial intelligence or machine learning pipeline in said first network domain to which said cross-domain performance report request relates, phase information indicative of at least one artificial intelligence or machine learning pipeline phase to which said cross-domain performance report request relates, a list indicative of performance metrics demanded to be reported, start time information indicative of a beginning of a timeframe for which reporting is demanded with said cross-domain performance report request, stop time information indicative of an end of said timeframe for which reporting is demanded with said cross-domain performance report request, and periodicity information indicative of a periodicity interval with which reporting is demanded with said cross-domain performance report request.
According to further example embodiments, said at least one second cross-domain network service involved artificial intelligence or machine learning pipeline performance related parameter includes at least one of demanded performance metrics.
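As a non-limiting illustration, the cross-domain performance report request and response described above may be sketched as follows; the field names, and the use of timestamp strings and seconds for the time and periodicity fields, are assumptions made for illustration:

```python
from dataclasses import dataclass, field
from typing import Dict, List, Optional

# Hypothetical sketch of the cross-domain performance report request and
# response; field names and time representations are illustrative only.

@dataclass
class CrossDomainPerfReportRequest:
    domain_scope: Optional[str] = None
    pipeline: Optional[str] = None
    phases: Optional[List[str]] = None
    metrics: List[str] = field(default_factory=list)  # metrics demanded to be reported
    start_time: Optional[str] = None                   # beginning of the reporting timeframe
    stop_time: Optional[str] = None                    # end of the reporting timeframe
    periodicity_s: Optional[int] = None                # periodicity interval for reporting

@dataclass
class CrossDomainPerfReportResponse:
    metric_values: Dict[str, float] = field(default_factory=dict)  # demanded performance metrics
```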
According to further example embodiments, said first artificial intelligence or machine learning performance related message is a cross-domain performance subscription, said second artificial intelligence or machine learning performance related message is a cross-domain performance notification, and said second artificial intelligence or machine learning performance related message comprises a second information element including at least one second cross-domain network service involved artificial intelligence or machine learning pipeline performance related parameter.
According to further example embodiments, said at least one first cross-domain network service involved artificial intelligence or machine learning pipeline performance related parameter includes at least one of domain scope information indicative of said first network domain, scope information indicative of an artificial intelligence or machine learning pipeline in said first network domain to which said cross-domain performance subscription relates, phase information indicative of at least one artificial intelligence or machine learning pipeline phase to which said cross-domain performance subscription relates, a list indicative of performance metrics demanded to be reported, and at least one reporting threshold corresponding to at least one of said performance metrics demanded to be reported.
According to further example embodiments, said at least one second cross-domain network service involved artificial intelligence or machine learning pipeline performance related parameter includes demanded performance metrics.
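As a non-limiting illustration, the cross-domain performance subscription and the resulting notification described above may be sketched as follows; field names and the per-metric threshold mapping are assumptions derived from the listed parameters:

```python
from dataclasses import dataclass, field
from typing import Dict, List, Optional

# Hypothetical sketch of the cross-domain performance subscription and
# notification; field names are illustrative only.

@dataclass
class CrossDomainPerfSubscription:
    domain_scope: Optional[str] = None
    pipeline: Optional[str] = None
    phases: Optional[List[str]] = None
    metrics: List[str] = field(default_factory=list)                      # metrics demanded to be reported
    reporting_thresholds: Dict[str, float] = field(default_factory=dict)  # per-metric thresholds

@dataclass
class CrossDomainPerfNotification:
    metric_values: Dict[str, float] = field(default_factory=dict)         # demanded performance metrics
```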
As shown in
In an embodiment at least some of the functionalities of the apparatus shown in
According to further example embodiments, said first artificial intelligence or machine learning performance related message is a cross-domain performance capability information request, said second artificial intelligence or machine learning performance related message is a cross-domain performance capability information response, and said second artificial intelligence or machine learning performance related message comprises a second information element including at least one second cross-domain network service involved artificial intelligence or machine learning pipeline performance related parameter.
According to further example embodiments, said at least one first cross-domain network service involved artificial intelligence or machine learning pipeline performance related parameter includes at least one of first domain scope information indicative of said first network domain, first scope information indicative of at least one artificial intelligence or machine learning pipeline in said first network domain to which said cross-domain performance capability information request relates, first phase information indicative of at least one artificial intelligence or machine learning pipeline phase to which said cross-domain performance capability information request relates, and customer information indicative of a customer or a category of said customer for which said at least one artificial intelligence or machine learning pipeline in said first network domain to which said cross-domain performance capability information request relates is to be envisaged.
According to further example embodiments, said at least one second cross-domain network service involved artificial intelligence or machine learning pipeline performance related parameter includes at least one capability information entry, wherein each respective capability information entry of said at least one capability information entry includes at least one of second domain scope information indicative of said first network domain, second scope information indicative of an artificial intelligence or machine learning pipeline in said first network domain to which said respective capability information entry relates, second phase information indicative of at least one artificial intelligence or machine learning pipeline phase to which said respective capability information entry relates, configuration information indicative of at least one configuration option supported for said artificial intelligence or machine learning pipeline to which said respective capability information entry relates, and performance metrics information indicative of at least one performance metric supported for said at least one artificial intelligence or machine learning pipeline phase of said artificial intelligence or machine learning pipeline to which said respective capability information entry relates.
According to further example embodiments, said first artificial intelligence or machine learning performance related message is a cross-domain performance configuration request, and said second artificial intelligence or machine learning performance related message is a cross-domain performance configuration response.
According to further example embodiments, said at least one first cross-domain network service involved artificial intelligence or machine learning pipeline performance related parameter includes at least one configuration entry, wherein each respective configuration entry of said at least one configuration entry includes at least one of domain scope information indicative of said first network domain, scope information indicative of an artificial intelligence or machine learning pipeline in said first network domain to which said respective configuration entry relates, phase information indicative of at least one artificial intelligence or machine learning pipeline phase to which said respective configuration entry relates, at least one of domain-specific artificial intelligence or machine learning quality of service requirements, method trigger information indicative of at least one to-be-triggered configurable method of said artificial intelligence or machine learning pipeline to which said respective configuration entry relates, and performance metrics configuration information indicative of at least one to-be-configured performance metric for said at least one artificial intelligence or machine learning pipeline phase of said artificial intelligence or machine learning pipeline to which said respective configuration entry relates.
According to further example embodiments, said first artificial intelligence or machine learning performance related message is a cross-domain performance report request, said second artificial intelligence or machine learning performance related message is a cross-domain performance report response, and said second artificial intelligence or machine learning performance related message comprises a second information element including at least one second cross-domain network service involved artificial intelligence or machine learning pipeline performance related parameter.
According to further example embodiments, said at least one first cross-domain network service involved artificial intelligence or machine learning pipeline performance related parameter includes at least one of domain scope information indicative of said first network domain, scope information indicative of an artificial intelligence or machine learning pipeline in said first network domain to which said cross-domain performance report request relates, phase information indicative of at least one artificial intelligence or machine learning pipeline phase to which said cross-domain performance report request relates, a list indicative of performance metrics demanded to be reported, start time information indicative of a beginning of a timeframe for which reporting is demanded with said cross-domain performance report request, stop time information indicative of an end of said timeframe for which reporting is demanded with said cross-domain performance report request, and periodicity information indicative of a periodicity interval with which reporting is demanded with said cross-domain performance report request.
According to further example embodiments, said at least one second cross-domain network service involved artificial intelligence or machine learning pipeline performance related parameter includes at least one of demanded performance metrics.
According to further example embodiments, said first artificial intelligence or machine learning performance related message is a cross-domain performance subscription, said second artificial intelligence or machine learning performance related message is a cross-domain performance notification, and said second artificial intelligence or machine learning performance related message comprises a second information element including at least one second cross-domain network service involved artificial intelligence or machine learning pipeline performance related parameter.
According to further example embodiments, said at least one first cross-domain network service involved artificial intelligence or machine learning pipeline performance related parameter includes at least one of domain scope information indicative of said first network domain, scope information indicative of an artificial intelligence or machine learning pipeline in said first network domain to which said cross-domain performance subscription relates, phase information indicative of at least one artificial intelligence or machine learning pipeline phase to which said cross-domain performance subscription relates, a list indicative of performance metrics demanded to be reported, and at least one reporting threshold corresponding to at least one of said performance metrics demanded to be reported.
According to further example embodiments, said at least one second cross-domain network service involved artificial intelligence or machine learning pipeline performance related parameter includes demanded performance metrics.
As shown in
In an embodiment at least some of the functionalities of the apparatus shown in
According to further example embodiments, said at least one third cross-domain network service involved artificial intelligence or machine learning pipeline performance related parameter includes at least one configuration entry, wherein each respective configuration entry of said at least one configuration entry includes at least one of domain scope information indicative of said first network domain, scope information indicative of an artificial intelligence or machine learning pipeline in said first network domain to which said respective configuration entry relates, and at least one of domain-specific artificial intelligence or machine learning quality of service requirements.
Example embodiments outlined and specified above are explained below in more specific terms.
In
In a step 1 of
In steps 2 to 6 of
In a step 7 of
In a step 8 of
Steps 9 to 13 of
In a step 9 of
In a step 10 of
In a step 11 of
In steps 12 and 13 of
Steps 14 to 21 of
Here, steps 14 to 16 of
In a step 14 of
In a step 15 of
In a step 16 of
As mentioned above, steps 17 to 21 of
In a step 17 of
In a step 18 of
In a step 19 of
In a step 20 of
In a step 21 of
Steps 22 to 24 of
In a step 22 of
Alternatively, in the step 22 of
In a step 23 of
In a step 24 of
Alternatively, in the step 24 of
The above-described procedures and functions may be implemented by respective functional elements, processors, or the like, as described below.
In the foregoing exemplary description of the network entity, only the units that are relevant for understanding the principles of the disclosure have been described using functional blocks. The network entity may comprise further units that are necessary for its respective operation. However, a description of these units is omitted in this specification. The arrangement of the functional blocks of the devices is not construed to limit the disclosure, and the functions may be performed by one block or further split into sub-blocks.
When in the foregoing description it is stated that the apparatus, i.e. network node or entity (or some other means) is configured to perform some function, this is to be construed to be equivalent to a description stating that a (i.e. at least one) processor or corresponding circuitry, potentially in cooperation with computer program code stored in the memory of the respective apparatus, is configured to cause the apparatus to perform at least the thus mentioned function. Also, such function is to be construed to be equivalently implementable by specifically configured circuitry or means for performing the respective function (i.e. the expression “unit configured to” is construed to be equivalent to an expression such as “means for”).
In
The processor 1411/1431/1441 and/or the interface 1413/1433/1443 may also include a modem or the like to facilitate communication over a (hardwire or wireless) link, respectively. The interface 1413/1433/1443 may include a suitable transceiver coupled to one or more antennas or communication means for (hardwire or wireless) communications with the linked or connected device(s), respectively. The interface 1413/1433/1443 is generally configured to communicate with at least one other apparatus, i.e. the interface thereof.
The memory 1412/1432/1442 may store respective programs assumed to include program instructions or computer program code that, when executed by the respective processor, enables the respective electronic device or apparatus to operate in accordance with the example embodiments.
In general terms, the respective devices/apparatuses (and/or parts thereof) may represent means for performing respective operations and/or exhibiting respective functionalities, and/or the respective devices (and/or parts thereof) may have functions for performing respective operations and/or exhibiting respective functionalities.
When in the subsequent description it is stated that the processor (or some other means) is configured to perform some function, this is to be construed to be equivalent to a description stating that at least one processor, potentially in cooperation with computer program code stored in the memory of the respective apparatus, is configured to cause the apparatus to perform at least the thus mentioned function. Also, such function is to be construed to be equivalently implementable by specifically configured means for performing the respective function (i.e. the expression “processor configured to [cause the apparatus to] perform xxx-ing” is construed to be equivalent to an expression such as “means for xxx-ing”).
According to example embodiments, an apparatus representing the network node or entity 10 (e.g. managing artificial intelligence or machine learning pipelines in a plurality of network domains including a first network domain in a network) comprises at least one processor 1411, at least one memory 1412 including computer program code, and at least one interface 1413 configured for communication with at least another apparatus. The processor (i.e. the at least one processor 1411, with the at least one memory 1412 and the computer program code) is configured to perform transmitting a first artificial intelligence or machine learning performance related message towards a second network entity managing lifecycles of artificial intelligence or machine learning pipelines in said first network domain in said network (thus the apparatus comprising corresponding means for transmitting), and to perform receiving a second artificial intelligence or machine learning performance related message from said second network entity, wherein said first artificial intelligence or machine learning performance related message comprises a first information element including at least one first cross-domain network service involved artificial intelligence or machine learning pipeline performance related parameter (thus the apparatus comprising corresponding means for receiving).
According to example embodiments, an apparatus representing the network node or entity 30 (e.g. managing lifecycles of artificial intelligence or machine learning pipelines in a first network domain in a network) comprises at least one processor 1431, at least one memory 1432 including computer program code, and at least one interface 1433 configured for communication with at least another apparatus. The processor (i.e. the at least one processor 1431, with the at least one memory 1432 and the computer program code) is configured to perform receiving a first artificial intelligence or machine learning performance related message from a first network entity managing artificial intelligence or machine learning pipelines in a plurality of network domains including said first network domain in said network (thus the apparatus comprising corresponding means for receiving), and to perform transmitting a second artificial intelligence or machine learning performance related message towards said first network entity, wherein said first artificial intelligence or machine learning performance related message comprises a first information element including at least one first cross-domain network service involved artificial intelligence or machine learning pipeline performance related parameter (thus the apparatus comprising corresponding means for transmitting).
According to example embodiments, an apparatus representing the network node or entity 40 (e.g. responsible for fulfillment of network operator specifications in a first network domain in a network) comprises at least one processor 1441, at least one memory 1442 including computer program code, and at least one interface 1443 configured for communication with at least another apparatus. The processor (i.e. the at least one processor 1441, with the at least one memory 1442 and the computer program code) is configured to perform receiving a third artificial intelligence or machine learning performance related message from a first network entity managing artificial intelligence or machine learning pipelines in a plurality of network domains including said first network domain in said network (thus the apparatus comprising corresponding means for receiving), and to perform transmitting a fourth artificial intelligence or machine learning performance related message towards said first network entity, wherein said third artificial intelligence or machine learning performance related message comprises a third information element including at least one third cross-domain network service involved artificial intelligence or machine learning pipeline performance related parameter, said third artificial intelligence or machine learning performance related message is a cross-domain performance configuration request, and said fourth artificial intelligence or machine learning performance related message is a cross-domain performance configuration response (thus the apparatus comprising corresponding means for transmitting).
For further details regarding the operability/functionality of the individual apparatuses, reference is made to the above description in connection with any one of
For the purpose of the present disclosure as described herein above, it should be noted that
In general, it is to be noted that respective functional blocks or elements according to above-described aspects can be implemented by any known means, either in hardware and/or software, respectively, if it is only adapted to perform the described functions of the respective parts. The mentioned method steps can be realized in individual functional blocks or by individual devices, or one or more of the method steps can be realized in a single functional block or by a single device.
Generally, any method step is suitable to be implemented as software or by hardware without changing the idea of the present disclosure. Devices and means can be implemented as individual devices, but this does not exclude that they are implemented in a distributed fashion throughout the system, as long as the functionality of the device is preserved. Such and similar principles are to be considered as known to a skilled person.
Software in the sense of the present description comprises software code as such comprising code means or portions or a computer program or a computer program product for performing the respective functions, as well as software (or a computer program or a computer program product) embodied on a tangible medium such as a computer-readable (storage) medium having stored thereon a respective data structure or code means/portions or embodied in a signal or in a chip, potentially during processing thereof.
The present disclosure also covers any conceivable combination of method steps and operations described above, and any conceivable combination of nodes, apparatuses, modules or elements described above, as long as the above-described concepts of methodology and structural arrangement are applicable.
In view of the above, there are provided measures for performance related management of artificial intelligence or machine learning pipelines in cross-domain scenarios. Such measures exemplarily comprise transmitting a first artificial intelligence or machine learning performance related message towards a second network entity managing lifecycles of artificial intelligence or machine learning pipelines in a first network domain in a network, and receiving a second artificial intelligence or machine learning performance related message from said second network entity, wherein said first artificial intelligence or machine learning performance related message comprises a first information element including at least one first cross-domain network service involved artificial intelligence or machine learning pipeline performance related parameter.
Even though the disclosure is described above with reference to the examples according to the accompanying drawings, it is to be understood that the disclosure is not restricted thereto. Rather, it is apparent to those skilled in the art that the present disclosure can be modified in many ways without departing from the scope of the inventive idea as disclosed herein.
| Filing Document | Filing Date | Country | Kind |
|---|---|---|---|
| PCT/EP2022/055685 | 3/7/2022 | WO | |