The examples and non-limiting embodiments relate generally to communications and, more particularly, to optimization of gNB failure detection and fast activation of a fallback mechanism.
It is known to implement a backup system in a communication network to prevent service disruption.
In accordance with an aspect, a method includes receiving, from a central entity of an access network node, an indication to create a notification publish space to monitor failure, the notification publish space comprising an identifier of the central entity of the access network node being monitored for failure; creating the notification publish space, and sending an acknowledgement of the indication to create the notification publish space to the central entity of the access network node; receiving a subscription to the notification publish space from at least one logical entity of the access network node or of another access network node; receiving a failure notification of a failure of the at least one logical entity being monitored for failure; and notifying the subscribers of the notification publish space concerning the failure of the at least one logical entity.
In accordance with an aspect, a method includes transmitting an indication to create a notification publish space to a data storage function, the notification publish space comprising an identifier of a central entity of an access network node being monitored for failure; receiving an acknowledgement of the indication to create the notification publish space from the data storage function; and transmitting the identifier of the central entity and associated publish space information to at least one logical entity of the access network node or of another access network node being monitored for failure; wherein the identifier of the central entity is configured to be used with the at least one logical entity to subscribe to the notification publish space to receive information concerning a failure of the at least one logical entity of the access network node or of the another access network node.
In accordance with an aspect, a method includes receiving an identifier from a central entity of an access network node, the identifier used to identify a notification publish space of a radio access network data storage function; subscribing to the notification publish space of the radio access network data storage function using the identifier of the central entity being monitored for failure, the notification publish space used to provide or receive information concerning a failure of at least one logical entity of the access network node or of another access network node being monitored for failure; and receiving a notification of failure of the at least one logical entity with the notification publish space of the radio access network data storage function, the notification of failure comprising an identifier of the failed at least one logical entity.
In accordance with an aspect, a method includes detecting a failure of at least one logical entity of an access network node being monitored for failure; and transmitting a notification to a radio access network data storage function of the failure of the at least one logical entity, the notification comprising an identifier of the failed at least one logical entity; wherein the notification is configured to be used with the radio access network data storage function to notify subscribers of a notification publish space concerning the failure of the at least one logical entity; wherein the notification publish space is accessible to the subscribers of the notification publish space to be notified of the failure.
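The publish/subscribe flow recited in the aspects above (create the notification publish space, acknowledge the indication, accept subscriptions, and fan out failure notifications to subscribers) can be sketched as follows. This is a minimal in-memory illustration only; the class and method names (`RanDsf`, `create_publish_space`, and so on) and the string identifiers are assumptions for illustration, not standardized APIs.

```python
# Illustrative sketch of the notification publish space at the data storage
# function. All names here are hypothetical, not standardized.

class RanDsf:
    """RAN data storage function holding notification publish spaces."""

    def __init__(self):
        # Publish spaces keyed by the monitored central entity's identifier.
        self.spaces = {}

    def create_publish_space(self, central_entity_id):
        # Create the space and acknowledge the indication to create it.
        self.spaces[central_entity_id] = []
        return "ACK"

    def subscribe(self, central_entity_id, callback):
        # A logical entity subscribes using the central entity's identifier.
        self.spaces[central_entity_id].append(callback)

    def notify_failure(self, central_entity_id, failed_entity_id):
        # A failure notification fans out to all subscribers of the space.
        for callback in self.spaces[central_entity_id]:
            callback(failed_entity_id)


dsf = RanDsf()
ack = dsf.create_publish_space("gNB-CU-CP-1")

received = []
dsf.subscribe("gNB-CU-CP-1", received.append)   # e.g. a gNB-DU subscribes
dsf.subscribe("gNB-CU-CP-1", received.append)   # e.g. a gNB-CU-UP subscribes

# A detecting entity reports the failure; the DSF notifies both subscribers.
dsf.notify_failure("gNB-CU-CP-1", "gNB-CU-CP-1")
```

In this sketch the "subscription" is a plain callback; in practice the subscription and notification would be carried over a service-based interface.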
In accordance with an aspect, a method includes creating an associated node list, the associated node list configured to be used for a notification of failure of at least one logical entity, wherein the notification of failure is performed using at least one point-to-point interface in a radio access network; wherein the associated node list is created and updated based on interface establishment between a plurality of logical entities of the at least one logical entity and/or a node configuration update procedure; and performing at least: receiving a failure notification of the at least one logical entity from a detecting logical entity that detected the failure, the failure notification including an identifier of the failed at least one logical entity, and transmitting the notification of failure of the at least one logical entity using the associated node list and the identifier; detecting the failure of the at least one logical entity, and transmitting the notification of failure of the at least one logical entity using the associated node list and the identifier; or failing of a central unit control plane entity, wherein: a failure notification of the failing central unit control plane entity is transmitted to a standby entity from a near real time radio intelligent controller having an inactive interface established with the standby entity, or from the at least one logical entity having an inactive interface established with the standby entity, wherein the standby entity transmits the notification of failure using the associated node list and the identifier to a non-failing at least one logical entity, after the at least one logical entity has detected the failure; or the notification of failure is transmitted from the near real time radio intelligent controller to the non-failing at least one logical entity with use of the associated node list, after the near real time radio intelligent controller has detected the failure or after the at least one logical entity has
detected the failure and has notified the near real time radio intelligent controller; wherein the at least one logical entity, the plurality of logical entities, the central unit control plane entity, the standby entity, and the non-failing at least one logical entity are entities of at least one access network node.
In accordance with an aspect, a method includes establishing an interface with at least one logical entity; and detecting a failure of the at least one logical entity and transmitting a failure notification of the at least one logical entity, or receiving a notification of failure of the at least one logical entity; wherein the notification of failure is received using an associated node list, the associated node list having been created and updated based on interface establishment between a plurality of logical entities of the at least one logical entity and/or a node configuration update procedure; wherein the at least one logical entity and the plurality of logical entities are entities of at least one access network node.
In accordance with an aspect, a method includes receiving an associated node list from a central unit control plane entity, the associated node list having been created based on interface establishment between a plurality of logical entities of at least one logical entity and/or a node configuration update procedure; storing the associated node list, wherein the associated node list is configured to be used for a notification of failure of the at least one logical entity; detecting the failure of the at least one logical entity; and performing either: transmitting a failure notification to a standby central unit control plane entity, wherein the standby central unit control plane entity transmits the notification of failure using the associated node list, and transmitting the failure notification to the central unit control plane entity in response to the failure of the at least one logical entity being attributed to a distributed unit or a central unit user plane entity; or transmitting the notification of failure to a set of the at least one logical entity using the associated node list; wherein the associated node list is stored with a near real time radio intelligent controller; wherein the at least one logical entity, the plurality of logical entities, the central unit control plane entity, and the standby central unit control plane entity are entities of at least one access network node.
In accordance with an aspect, a method includes synchronizing an associated node list between a central unit control plane entity and a standby central unit control plane entity, the associated node list configured to be used for transmission of a notification of failure; storing the associated node list; wherein the associated node list is created based on interface establishment between a plurality of logical entities of at least one logical entity; receiving a failure notification from a near real time radio intelligent controller or the at least one logical entity; and transmitting the notification of failure to the at least one logical entity using the associated node list; wherein the at least one logical entity, the plurality of logical entities, the central unit control plane entity, and the standby central unit control plane entity are entities of at least one access network node.
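As a rough illustration of the aspects above, the following sketch shows how an associated node list could be created from interface establishment events, updated by a node configuration update procedure, and synchronized to a standby central unit control plane entity. The `CuCp` class and its method names are hypothetical, chosen only to mirror the steps recited above.

```python
# Hypothetical sketch (not 3GPP-defined code) of associated node list
# creation, update, and active/standby synchronization.

class CuCp:
    """A central unit control plane entity holding an associated node list."""

    def __init__(self, entity_id):
        self.entity_id = entity_id
        self.associated_nodes = set()

    def on_interface_established(self, peer_id):
        # F1/E1/Xn interface establishment adds the peer to the list.
        self.associated_nodes.add(peer_id)

    def on_node_configuration_update(self, added=(), removed=()):
        # A node configuration update procedure also updates the list.
        self.associated_nodes |= set(added)
        self.associated_nodes -= set(removed)

    def sync_to_standby(self, standby):
        # Synchronize the list between active and standby CU-CP entities.
        standby.associated_nodes = set(self.associated_nodes)


active = CuCp("CU-CP-active")
standby = CuCp("CU-CP-standby")

active.on_interface_established("gNB-DU-1")     # e.g. F1 setup
active.on_interface_established("gNB-CU-UP-1")  # e.g. E1 setup
active.on_node_configuration_update(added=["gNB-DU-2"], removed=["gNB-DU-1"])
active.sync_to_standby(standby)
```

After synchronization, the standby entity holds the same list and can transmit the notification of failure to the listed entities if the active CU-CP fails.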
The foregoing aspects and other features are explained in the following description, taken in connection with the accompanying drawings.
Turning to
In
The RAN node 170 in this example is a base station that provides access by wireless devices such as the UE 110 to the wireless network 100. The RAN node 170 may be, for instance, a base station for 5G, also called New Radio (NR). The RAN node 170 may be, for instance, a base station for beyond 5G, e.g., 6G. In 5G, the RAN node 170 may be a NG-RAN node, which is defined as either a gNB or an ng-eNB. The gNB 170 is a node providing NR user plane and control plane protocol terminations towards the UE, and connected via an O1 interface 131 to the network element(s) 190. The ng-eNB is a node providing E-UTRA user plane and control plane protocol terminations towards the UE, and connected via the NG interface 131 to the 5GC. The NG-RAN node may include multiple gNBs, which may also include a central unit (CU) (gNB-CU) 196 and distributed unit(s) (DUs) (gNB-DUs), of which DU 195 is shown. Note that the DU 195 may include or be coupled to and control a radio unit (RU). The gNB-CU 196 is a logical node hosting RRC, SDAP and PDCP protocols of the gNB or RRC and PDCP protocols of the en-gNB that control the operation of one or more gNB-DUs 195. The gNB-CU 196 terminates the F1 interface connected with the gNB-DU 195. The F1 interface is illustrated as reference 198, although reference 198 also illustrates connections between remote elements of the RAN node 170 and centralized elements of the RAN node 170, such as between the gNB-CU 196 and the gNB-DU 195. The gNB-DU 195 is a logical node hosting RLC, MAC and PHY layers of the gNB or en-gNB, and its operation is partly controlled by the gNB-CU 196. One gNB-CU 196 supports one or multiple cells. One cell is typically supported by only one gNB-DU 195. The gNB-DU 195 terminates the F1 interface 198 connected with the gNB-CU 196.
Note that the DU 195 is considered to include the transceiver 160, e.g., as part of an RU, but some examples of this may have the transceiver 160 as part of a separate RU, e.g., under control of and connected to the DU 195. The RAN node 170 may also be an eNB (evolved NodeB) base station, for LTE (long term evolution), or any other suitable base station or node.
The RAN node 170 includes one or more processors 152, one or more memories 155, one or more network interfaces (N/W I/F(s)) 161, and one or more transceivers 160 interconnected through one or more buses 157. Each of the one or more transceivers 160 includes a receiver, Rx, 162 and a transmitter, Tx, 163. The one or more transceivers 160 are connected to one or more antennas 158. The one or more memory(ies) 155 include computer program code 153. The CU 196 may include the processor(s) 152, memories 155, and network interfaces 161. Note that the DU 195 may also contain its own memory/memories and processor(s), and/or other hardware, but these are not shown.
The RAN node 170 includes a module 156, also referred to herein as a radio intelligent controller, comprising one of or both parts 156-1 and/or 156-2, which may be implemented in a number of ways. The module 156 may be implemented in hardware as module 156-1, such as being implemented as part of the one or more processors 152. The module 156-1 may be implemented also as an integrated circuit or through other hardware such as a programmable gate array. In another example, the module 156 may be implemented as module 156-2, which is implemented as computer program code 153 and is executed by the one or more processors 152. For instance, the one or more memories 155 and the computer program code 153 are configured to, with the one or more processors 152, cause the RAN node 170 to perform one or more of the operations as described herein. Note that the functionality of the module 156 may be distributed, such as being distributed between the DU 195 and the CU 196, or be implemented solely in the DU 195. In some embodiments, the module 156 can be a RIC module, e.g., a near-RT RIC.
The one or more network interfaces 161 communicate over a network such as via the links 176 and 131. Two or more gNBs 170 communicate using, e.g., link 176. The link 176 may be wired or wireless or both and may implement, e.g., an Xn interface for 5G, an X2 interface for LTE, or other suitable interface for other standards, e.g., interfaces that may be specified for beyond 5G system, for example, 6G.
The one or more buses 157 may be address, data, or control buses, and may include any interconnection mechanism, such as a series of lines on a motherboard or integrated circuit, fiber optics or other optical communication equipment, wireless channels, and the like. For example, the one or more transceivers 160 may be implemented as a remote radio head (RRH) 195 for LTE or a distributed unit (DU) 195 for gNB implementation for 5G, with the other elements of the RAN node 170 possibly being physically in a different location from the RRH/DU 195, and the one or more buses 157 could be implemented in part as, e.g., fiber optic cable or other suitable network connection to connect the other elements (e.g., a central unit (CU), gNB-CU 196) of the RAN node 170 to the RRH/DU 195. Reference 198 also indicates those suitable network connection(s).
It is noted that description herein indicates that “cells” perform functions, but it should be clear that equipment which forms the cell may perform the functions. The cell makes up part of a base station. That is, there can be multiple cells per base station. For example, there could be three cells for a single carrier frequency and associated bandwidth, each cell covering one-third of a 360 degree area so that the single base station's coverage area covers an approximate oval or circle. Furthermore, each cell can correspond to a single carrier and a base station may use multiple carriers. So if there are three 120 degree cells per carrier and two carriers, then the base station has a total of 6 cells.
The wireless network 100 may include a network element (NE) (or elements, NE(s)) 190 that may implement SMO/OAM functionality, and that is connected via a link or links 181 with a further network, such as a telephone network and/or a data communications network (e.g., the Internet). The RAN node 170 is coupled via a link 131 to the network element 190. The link 131 may be implemented as, e.g., an O1 interface for SMO/OAM, or other suitable interface for other standards. The network element 190 includes one or more processors 175, one or more memories 171, and one or more network interfaces (N/W I/F(s)) 180, interconnected through one or more buses 185. The one or more memories 171 include computer program code (CPC) 173. The one or more memories 171 and the computer program code 173 are configured to, with the one or more processors 175, cause the network element 190 to perform one or more operations. The network element 190 includes a RIC module 140, comprising one of or both parts 140-1 and/or 140-2, which may be implemented in a number of ways. The RIC module 140 may be implemented in hardware as RIC module 140-1, such as being implemented as part of the one or more processors 175. The RIC module 140-1 may be implemented also as an integrated circuit or through other hardware such as a programmable gate array. In another example, the RIC module 140 may be implemented as RIC module 140-2, which is implemented as computer program code 173 and is executed by the one or more processors 175. In some examples, a single RIC could serve a large region covered by hundreds of base stations. The network element(s) 190 may be one or more network control elements (NCEs).
The wireless network 100 may include a network element or elements 189 that may include core network functionality, and which provides connectivity via a link or links 191 with a further network, such as a telephone network and/or a data communications network (e.g., the Internet). Such core network functionality for 5G may include location management function(s) (LMF(s)) and/or access and mobility management function(s) (AMF(s)) and/or user plane function(s) (UPF(s)) and/or session management function(s) (SMF(s)). Such core network functionality for LTE may include MME (Mobility Management Entity)/SGW (Serving Gateway) functionality. Such core network functionality may include SON (self-organizing/optimizing network) functionality. These are merely example functions that may be supported by the network element(s) 189, and note that both 5G and LTE functions might be supported. The RAN node 170 is coupled via a link 187 to the network element 189. The link 187 may be implemented as, e.g., an NG interface for 5G, or an S1 interface for LTE, or other suitable interface for other standards. The network element 189 includes one or more processors 172, one or more memories 177, and one or more network interfaces (N/W I/F(s)) 174, interconnected through one or more buses 192. The one or more memories 177 include computer program code 179.
The wireless network 100 may implement network virtualization, which is the process of combining hardware and software network resources and network functionality into a single, software-based administrative entity, a virtual network. Network virtualization involves platform virtualization, often combined with resource virtualization. Network virtualization is categorized as either external, combining many networks, or parts of networks, into a virtual unit, or internal, providing network-like functionality to software containers on a single system. Note that the virtualized entities that result from the network virtualization are still implemented, at some level, using hardware such as processors 152 or 175 or 172 and memories 155 and 171 and 177, and also such virtualized entities create technical effects.
The computer readable memories 125, 155, 171, and 177 may be of any type suitable to the local technical environment and may be implemented using any suitable data storage technology, such as semiconductor based memory devices, flash memory, magnetic memory devices and systems, optical memory devices and systems, fixed memory and removable memory. The computer readable memories 125, 155, 171, and 177 may be means for performing storage functions. The processors 120, 152, 175, and 172 may be of any type suitable to the local technical environment, and may include one or more of general purpose computers, special purpose computers, microprocessors, digital signal processors (DSPs) and processors based on a multi-core processor architecture, as non-limiting examples. The processors 120, 152, 175, and 172 may be means for performing functions, such as controlling the UE 110, RAN node 170, network element(s) 190, network element(s) 189, and other functions as described herein.
In general, the various embodiments of the user equipment 110 can include, but are not limited to, cellular telephones such as smart phones, tablets, personal digital assistants (PDAs) having wireless communication capabilities, portable computers having wireless communication capabilities, image capture devices such as digital cameras having wireless communication capabilities, gaming devices having wireless communication capabilities, music storage and playback appliances having wireless communication capabilities, Internet appliances permitting wireless Internet access and browsing, tablets with wireless communication capabilities, as well as portable units or terminals that incorporate combinations of such functions. The UE 110 may also be a head mounted display that supports virtual reality, augmented reality, or mixed reality.
Possible configurations of RICs, known as a near-real time (near-RT) RIC 210 and a non-RT RIC 220, are shown in
One possible instantiation of the RIC non-RT 220 and the RIC near-RT 210 is that these are entities separate from the RAN node 170. This is illustrated by
However, it is also possible that the RIC near-RT 210 functionality may be a part of the RAN node 170, in a couple of cases:
The edge cloud 250 may be viewed as a “hosting location”, e.g., a kind of data center. Multiple elements may be hosted there, such as the CU, RIC, and yet other functions like MEC (mobile edge computing) platforms, and the like.
In the example of
It is also possible the RIC near-RT 210 may be located at an edge cloud, at some relatively small latency from the RAN node (such as 30-100 ms), while the RIC non-RT 220 may be at a greater latency likely in a centralized cloud. This is illustrated by
Accordingly, UE 110, RAN node 170, network element(s) 190, network element(s) 189 (and associated memories, computer program code and modules), edge cloud 250, centralized cloud 260, and/or the RIC near-RT module 210 may be configured to implement the methods described herein, including optimization of gNB failure detection and fast activation of a fallback mechanism.
Having thus introduced suitable but non-limiting technical contexts for the practice of the exemplary embodiments described herein, the exemplary embodiments are now described with greater specificity.
The examples described herein include both 3GPP and O-RAN aspects. 3GPP aspects are related to a beyond 5G/6G service-based RAN architecture.
Resiliency in the RAN is an important aspect in providing service continuity and avoiding downtime. Specifically, gNB-CU-CP (Central Unit-Control Plane) resiliency can be vital for UE service continuity after failure. Various examples and embodiments described herein can utilize gNB-CU resiliency based on an inactive SCTP connection to standby CU-CPs.
Each gNB logical entity currently may detect a failure on its own based on timer expiries, which can be long to avoid false detection. An important aspect in this regard is optimization of failure detection times by using a collaborative approach among connected gNB logical entities to initiate fallback mechanisms faster. Regarding this, the examples described herein provide a solution that can be applied in the current RAN architecture with point-to-point (P2P) interfaces as well as in the SB-RAN (service based-RAN) architecture. The examples described herein also consider implications in the O-RAN environment.
The examples described herein can relate to the 3GPP, O-RAN, and other related standardizations.
Mobile and wireless communications networks are increasingly deployed in cloud environments. Furthermore, 5G and new generations beyond 5G aim to be flexible, adding new functionalities into the system while capitalizing on cloud implementations. To this end, as shown in
In the 5GC SBA, a consumer queries a network repository function (NRF) in order to discover an appropriate service producer entity. That is, in the 5GC, in order to discover and select the appropriate service entities, multiple filtering criteria may be applied by the NRF.
5GC SBA Application Programming Interfaces (APIs) are based on the HTTP(S) protocol. A Network Function (NF) service is one type of capability exposed by an NF (NF service producer entity) to another authorized NF (NF service consumer entity) through a service-based interface (SBI). A Network Function (NF) may expose one or more NF services. NF services may communicate directly between NF service consumer entities and NF service producer entities, or indirectly via a Service Communication Proxy (SCP).
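The NRF-based discovery with multiple filtering criteria described above can be sketched as a simple registry query. The profile fields (`nf_type`, `plmn`, `slice`) and the `discover` helper are illustrative assumptions for this sketch, not the actual 5GC Nnrf API, which is HTTP(S)-based.

```python
# Hypothetical sketch of NRF discovery with filtering criteria.
# In the real 5GC, discovery is an HTTP(S) request to the NRF's
# service-based interface; here it is modeled as a list filter.

nrf_registry = [
    {"nf_type": "SMF", "plmn": "00101", "slice": "eMBB"},
    {"nf_type": "SMF", "plmn": "00101", "slice": "URLLC"},
    {"nf_type": "UPF", "plmn": "00101", "slice": "eMBB"},
]

def discover(nf_type, **criteria):
    # The NRF applies multiple filtering criteria to select producers.
    return [
        profile for profile in nrf_registry
        if profile["nf_type"] == nf_type
        and all(profile.get(key) == value for key, value in criteria.items())
    ]

# A consumer discovers SMF producers serving a particular slice.
smf_candidates = discover("SMF", slice="URLLC")
```

The consumer would then select one candidate from the filtered result, either directly or indirectly via a Service Communication Proxy.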
However, the Access Network (AN), e.g., Radio AN (RAN) 170, and the associated interfaces, e.g., within the AN, among ANs and between the AN and Core Network (CN) 201 are defined as legacy P2P interfaces since the very early generations of PLMN. For example, in the 5G System (5GS), N2 246 is designed as a 3GPP NG-C Application Protocol over SCTP, between the gNB 170 (or ng-eNB) and the AMF 238 (Access and Mobility management Function). Further P2P interface examples within the AN are the Xn interface (e.g. item 176 of
An access network (AN) can be defined as a network that offers access (such as radio access) to one or more core networks, and that is enabled to connect subscribers to the one or more core networks. The access network may provide 3GPP access such as GSM/EDGE, UTRA, E-UTRA, or NR access or non-3GPP access such as WLAN/Wi-Fi. The access network is contrasted with the core network, which is an architectural term relating to the part of the network (e.g. 3GPP network) which is independent of the connection technology of the terminal (e.g. radio, wired) and which provides core network services such as subscriber authentication, user registration, connectivity to packet data networks, subscription management, etc. An access network and a core network may correspond respectively e.g. to a 3GPP access network and 3GPP core network.
Herein, an entity can be, e.g., a logical entity, an access node, a base station, a part of an access node or base station, a protocol stack, a part of a protocol stack, a network function, a part of a network function, or the like.
Application of SBA principles to the (R)AN may imply substantial updates to the mobile and wireless communication networks and, thus, various aspects may be considered to be realized in the next generations beyond 5G.
As further shown in
The N1 interface 244 connects the UE 110 to the AMF 238, the N3 interface 252 connects the RAN node 170 to the UPF 254, which UPF 254 is coupled to the SMF 240 via the N4 248 interface. UPF 254 is coupled to the DN 262 via the N6 interface 258. Further the N9 interface 256 connects items within UPF 254 to each other, or the N9 interface 256 is an interface between different UPFs.
As further shown in
It is to be noted that, in the present disclosure, a service-based configuration, architecture or framework can encompass a microservice configuration, architecture or framework. That is, a service-based (R)AN according to at least one exemplifying embodiment may be based on or comprise a microservice approach such that one or more network functions or one or more services within one or more network functions or one or more functionalities/mechanisms/processes of services of one or more network functions represent or comprise a set/collection of interacting microservices. Accordingly, in a service-based (R)AN according to at least one exemplifying embodiment, a service may be produced or provided by any one of a network function, a microservice, a communication control entity or a cell.
Microservices can be understood as more modular services (as compared with services produced/provided by NFs) that come together to provide a meaningful service/application. In this scope, one can deploy and scale the small modules flexibly (e.g. within a NF or between various NFs). For example, a NF provides a service, and a microservice can represent small modules that make up the service. When a service is clogged at a specific module, then one can scale the individual module/s in microservice scope instead of the whole service as it would happen in network function scope. In microservice scope, energy saving according to at least one exemplifying embodiment would work the other way around, namely there is no need for a specific module to operate the service anymore, so the individual microservice would be shut down or deactivated.
Near-RT RIC 310 (refer to
In the event of an E2 332 or Near-RT RIC 310 failure, the E2 Node 334 is able to provide services, but with the caveat that there can be an outage for value-added services that may only be provided using the Near-RT RIC 310 (e.g. via the xApps 326). Failures of the RIC, such as item 310, are detected based on service response timer expiries, data transmission over connection timer expiries, etc. The data transmission over connection timer expiries refer to transport layer-related timer expiries, whereas service response timer expiries relate to application-/procedure-related timer expiries.
As further shown in
As further shown in
Each gNB logical entity/E2 node may detect a failure on its own based on timer expiries. Generally, the failure detection involves long timers to avoid false detection.
In O-RAN, in the event of failure, the E2 node 334 may have to wait unnecessarily long to execute a subsequent action, causing service disruptions in the range of milliseconds as well as seconds (e.g. 60 s is also mentioned as a possible value in the O-RAN specifications). For example, there may be a combined service subscription, such as a REPORT service disruption followed by a POLICY service disruption. Accordingly, the E2 node 334 reports the necessary input data (e.g. PM counters, traces, KPIs, signaling messages, etc.) based on which the RIC (e.g. 310) may prepare/change a policy. If a Near-RT RIC 310 failure occurs before receiving the POLICY, the aforementioned service disruption can occur. A UE-specific INSERT/CONTROL mechanism may not be preferable over the E2 interface 332, where the issue is more prominent because a RIC failure while waiting for the response of the INSERT procedure may cause a radio link failure (RLF) of the UEs. Even if the E2 interface 332 is limited to a REPORT/POLICY mechanism (which may be preferred), the non-real time nature of the procedures may mean that detection of a RIC failure may not happen simultaneously at all E2 nodes. It is also sub-optimal to perform failure detection separately at each E2 node (such as E2 node 334) with long undue wait times.
Such a discrete and individual failure detection is also a problem in case of a gNB-CU-CP failure (refer to item 860 of
Such a failure detection framework also implies the following: there is currently no mechanism to notify the associated gNB logical entity/E2 node 334 of an already detected failure. Therefore, the failure detection times are not optimized and fallback mechanisms cannot be triggered faster. Associated entities are defined as entities among which a direct C-plane or U-plane interface is established.
Various examples and embodiments described herein address the resiliency and robustness of a gNB (e.g. RAN node 170) by optimizing the failure detection times and fast fallback mechanism activation. They propose a respective solution applicable in a RAN, SB-RAN and O-RAN environment by making use of the relations among gNBs and/or gNB entities. By doing so, the examples described herein address a technical gap toward the realization of RAN resiliency.
Various examples and embodiments described herein provide a solution to optimize the duration of service disruptions and activation of fallback mechanisms in a gNB and/or logical entities of an NG-RAN node in the following cases, 1-2 (also considering the O-RAN environment implications): 1. Notification of failure of NG-RAN logical entities (e.g., gNB-CU-CP, DU, CU-UP) (or E2 Node in O-RAN) to all associated NG-RAN entities (or E2 Node and Near-RT RIC in O-RAN) for activation of fallback/recovery actions; and 2. Notification of Near-RT RIC failure to the associated E2 nodes in an O-RAN environment to activate the default fallback mechanism without having to perform failure detection on their own.
Associated NG-RAN node entities can be defined as those with which a direct C-plane or U-plane interface is established.
The following embodiments are described to realize the herein described solution: create and store a list of NG-RAN logical entities, based on their unique IDs, that are associated with each other via F1/E1/Xn/NG/X2 interfaces. Create and store a list of E2 Nodes, based on their unique IDs, that are associated with a Near-RT RIC via an E2 interface. Upon failure detection of a Near-RT RIC by an E2 Node, or failure detection of an NG-RAN node logical entity by another NG-RAN node logical entity/Near-RT RIC, the node that detects the failure uses the list to notify the entities, or a subset of the entities, in the created list so that the respective entities can initiate their fallback mechanisms earlier than by detecting the failure themselves, which can take a long time depending on service configurations (in the range of milliseconds, seconds, minutes, etc.).
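The creation and use of such an association list can be pictured with the following minimal sketch in Python. The class and entity identifiers are illustrative assumptions only, not part of any standard or of the embodiments themselves:

```python
# Hypothetical sketch: an in-memory list of associated NG-RAN logical
# entities keyed by their unique IDs, with a fan-out lookup on a
# detected failure. All names are illustrative.

class AssociationRegistry:
    def __init__(self):
        # entity_id -> set of associated entity_ids (via F1/E1/Xn/NG/X2/E2)
        self.associations = {}

    def associate(self, a, b):
        """Record that entities a and b share a direct interface."""
        self.associations.setdefault(a, set()).add(b)
        self.associations.setdefault(b, set()).add(a)

    def notify_failure(self, failed_id):
        """Return the entities to notify when failed_id is detected down."""
        return sorted(self.associations.get(failed_id, set()))

registry = AssociationRegistry()
registry.associate("gNB-CU-CP-1", "gNB-DU-1")    # F1 interface
registry.associate("gNB-CU-CP-1", "gNB-CU-UP-1")  # E1 interface
print(registry.notify_failure("gNB-CU-CP-1"))
# -> ['gNB-CU-UP-1', 'gNB-DU-1']
```

The entities returned by the lookup can then initiate their fallback mechanisms without performing their own (slower) failure detection.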
Two solution alternatives are described herein for the creating, storing, updating, and broadcasting of failure notifications, considering current peer-to-peer (P2P) interfaces and the novel service-based RAN (SB-RAN) architecture (items 1-2 immediately below, with various examples and embodiments).
1. Notification based on the SB-RAN architecture and principles (refer to
Although some of the examples described herein depict and describe the RAN-DSF as a single NF, the RAN-DSF may be implemented as part of a data storage architecture that may include one or more elements (e.g., functions or nodes). For example, a RAN data storage architecture may include a RAN-DSF, a data repository, and/or a data management entity. Moreover, different deployment options may be implemented, where the elements may be collocated. Furthermore, the elements of the data storage architecture may perform storage and retrieval of data, such as UE context information.
In some example embodiments, there is provided a data storage function (DSF) having a service-based interface (SBI). In some example embodiments, the DSF is a (R)AN element (function, or node), in which case it is referred to as a (R)AN-DSF. The (R)AN DSF may be used to retrieve (e.g., fetch), store, and update a notification publish space. These operations may be performed by any authorized network function (NF), such as a source gNB base station, a target gNB base station, Near-RT RIC, and/or other network functions or entities in the (R)AN and/or core. The DSF may be accessed by an authorized central entity to create a notification publish space. Moreover, the notification publish space at the DSF may be accessed for updating in case of an event occurrence requiring an update on notification publish space or for retrieving in case of an event occurrence requiring the fetching of notification publish space. The DSF may provide notification publish space storage, update, fetch and any other operation that may provide efficient handling of monitoring and notifying the failure of a network entity in the network.
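One way to picture the DSF-hosted notification publish space operations described above (create, acknowledge, subscribe, publish) is the following sketch. The class and method names are assumptions for illustration only and do not reflect any standardized service interface:

```python
# Hypothetical sketch of a DSF-hosted notification publish space:
# a central entity creates the space, logical entities subscribe,
# and a failure notification is fanned out to all subscribers.

class NotificationPublishSpace:
    def __init__(self, central_entity_id):
        self.central_entity_id = central_entity_id  # entity being monitored
        self.subscribers = []  # callbacks of subscribed logical entities

    def subscribe(self, callback):
        self.subscribers.append(callback)

    def publish_failure(self, failed_entity_id):
        # Notify every subscriber of the detected failure.
        for cb in self.subscribers:
            cb(failed_entity_id)

class DataStorageFunction:
    def __init__(self):
        self.spaces = {}  # central_entity_id -> publish space

    def create_space(self, central_entity_id):
        self.spaces[central_entity_id] = NotificationPublishSpace(central_entity_id)
        return "ACK"  # acknowledgement back to the central entity

received = []
dsf = DataStorageFunction()
assert dsf.create_space("CU-CP-1") == "ACK"
dsf.spaces["CU-CP-1"].subscribe(received.append)
dsf.spaces["CU-CP-1"].publish_failure("DU-2")
print(received)  # -> ['DU-2']
```

In a real deployment the subscribe and publish operations would be carried over the SBI rather than in-process callbacks; the sketch only shows the control flow.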
In some example embodiments, there is provided a data analytics function (DAF) having a service-based interface (SBI). In some example embodiments, the DAF is a (R)AN element (function, or node), in which case it is referred to as a (R)AN-DAF. The (R)AN DAF may be used to collect and analyze data that may be useful for monitoring/detecting/predicting the operational state of the network entities for a failure, as well as to notify the respective entities about a potential or detected failure. Said data can be collected from a network function that provides storage of such data, such as the (R)AN-DSF. Monitoring, detecting, and predicting the network entity state can be performed via any mechanism, which can be based on service timer expiries, transport layer-related timer expiries, AI/ML methods, or any other mechanism that provides the failure detection/prediction functionality. The detected/predicted failure at the (R)AN-DAF can be notified to the respective entity in the network that is responsible for notifying all the network entities potentially affected by the failure. Such a respective entity can be the (R)AN-DSF.
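A timer-expiry-based detection mechanism, one of the options mentioned above, can be sketched as follows; the class, method names, and timeout value are hypothetical and shown only to illustrate the principle:

```python
# Hypothetical sketch of timer-based failure detection as could be
# performed by a (R)AN-DAF: if no response/heartbeat arrives within
# a timeout, the monitored entity is flagged as failed.

import time

class TimerFailureDetector:
    def __init__(self, timeout_s):
        self.timeout_s = timeout_s
        self.last_seen = {}  # entity_id -> timestamp of last response

    def heartbeat(self, entity_id, now=None):
        self.last_seen[entity_id] = time.monotonic() if now is None else now

    def failed_entities(self, now=None):
        """Entities whose last response is older than the timeout."""
        now = time.monotonic() if now is None else now
        return [e for e, t in self.last_seen.items()
                if now - t > self.timeout_s]

det = TimerFailureDetector(timeout_s=5.0)
det.heartbeat("CU-CP-1", now=100.0)
det.heartbeat("DU-1", now=103.0)
print(det.failed_entities(now=106.0))  # -> ['CU-CP-1']
```

An AI/ML-based detector would replace the fixed timeout with a learned probability of failure, as noted in the text.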
As further shown in
2. Notification via P2P interfaces in the current NG-RAN architecture (refer to
As further shown in
In the context of
The central unit control plane entity may receive an indication of the distributed unit changing to the central unit user plane entity or the distributed unit adding the central unit user plane entity, and/or an indication of a second central unit control plane entity initiating a change to a third central unit control plane entity, or the central unit control plane entity releasing the established interface or a changing of an access and mobility management function.
The central unit control plane entity may update the associated node list with the distributed unit changing to the central unit user plane entity or the distributed unit adding the central unit user plane entity, or the second central unit control plane entity initiating the change to the third central unit control plane entity, or the central unit control plane entity releasing the established interface or the changing of the access and mobility management function.
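The associated node list maintenance described in the two preceding paragraphs can be reduced to a simple event-driven update; the sketch below uses hypothetical event names purely for illustration:

```python
# Hypothetical sketch: a CU-CP updating its associated node list on
# interface-related events (addition, change, release). Event names
# are illustrative only.

def apply_event(node_list, event, entity_id):
    """Update the associated node list for one interface event."""
    if event in ("add", "change"):
        node_list.add(entity_id)
    elif event == "release":
        node_list.discard(entity_id)
    return node_list

nodes = {"DU-1", "CU-UP-1"}
apply_event(nodes, "add", "CU-UP-2")    # DU adds a CU-UP
apply_event(nodes, "release", "DU-1")   # established interface released
print(sorted(nodes))  # -> ['CU-UP-1', 'CU-UP-2']
```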
Details on the above-mentioned solution are provided further herein, with reference to
The failure can relate to one or more xApps in the Near-RT RIC 610 as well (e.g. item 326 of Near-RT RIC 310 of
Additionally, the notified entities can be filtered depending on the failed E2 node type. For example, if a (O-)CU-CP (660, 668) failure is detected, all the (O-)CU-UPs (662, 670) and (O-)DUs (664, 666, 672, 674) that consume the failed (O-)CU-CP's (660, 668) services can be notified. However, if a (O-)DU (664, 666, 672, 674) failure is detected, this notification can be narrowed down to the serving (O-)CU-CP (660 or 668) and (O-)CU-UP (662 or 670), as well as any other (O-)CU-CP (660 or 668) and (O-)CU-UP (662 or 670) (in case of EN-DC/NR-DC) that are affected by the failure, but not the other (O-)DUs (664, 666, 672, or 674) that are served by the same (O-)CU-CP (660 or 668).
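The filtering rule just described can be sketched as a lookup over a small topology model; the dictionary keys and entity identifiers below are assumptions chosen for illustration:

```python
# Hypothetical sketch of filtering the notified entities by failed
# node type: a CU-CP failure notifies the consuming CU-UPs and DUs,
# while a DU failure notifies only its serving CU-CP/CU-UP, not
# peer DUs served by the same CU-CP.

def entities_to_notify(failed_id, topology):
    """topology: entity_id -> {"type": ..., "serves"/"served_by": [...]}."""
    info = topology[failed_id]
    if info["type"] == "CU-CP":
        # Notify every CU-UP and DU consuming the failed CU-CP's services.
        return sorted(info["serves"])
    if info["type"] == "DU":
        # Narrow down to the serving CU-CP and CU-UP only.
        return sorted(info["served_by"])
    return []

topo = {
    "CU-CP-1": {"type": "CU-CP", "serves": ["CU-UP-1", "DU-1", "DU-2"]},
    "DU-1": {"type": "DU", "served_by": ["CU-CP-1", "CU-UP-1"]},
}
print(entities_to_notify("CU-CP-1", topo))  # -> ['CU-UP-1', 'DU-1', 'DU-2']
print(entities_to_notify("DU-1", topo))     # -> ['CU-CP-1', 'CU-UP-1']
```

Note that a failure of DU-1 does not trigger a notification to DU-2, which saves signaling latency and payload.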
This can save signaling latency and payload.
Failure detection (714-a-2, 714-b-2, 714-c-2) can be done in various ways (service response timer expiries, data transmission over connection timer expiries, (AI/ML) mechanisms indicating a probability of failure at a given time or time period, etc.), and additional mechanisms can be integrated to avoid false failure detection (multiple reports from one or more entities, AI/ML models, etc.).
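The multiple-report safeguard against false failure detection mentioned above can be illustrated with a simple quorum check; the class name and quorum value are hypothetical:

```python
# Hypothetical sketch of avoiding false failure detection by requiring
# reports from multiple distinct entities before a failure is confirmed.

from collections import defaultdict

class QuorumConfirmer:
    def __init__(self, quorum):
        self.quorum = quorum
        self.reports = defaultdict(set)  # failed_id -> set of reporter ids

    def report(self, failed_id, reporter_id):
        """Record one report; return True once quorum is reached."""
        self.reports[failed_id].add(reporter_id)
        return len(self.reports[failed_id]) >= self.quorum

qc = QuorumConfirmer(quorum=2)
print(qc.report("CU-CP-1", "DU-1"))     # -> False (single report)
print(qc.report("CU-CP-1", "DU-1"))     # duplicate reporter, still False
print(qc.report("CU-CP-1", "CU-UP-1"))  # -> True (two distinct reporters)
```

An AI/ML model could replace or complement the fixed quorum, as the text notes.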
The failure can relate to one or more xApps in the Near-RT RIC 710 as well (e.g. item 326 shown in
Additionally, the notified entities can be filtered depending on the failed E2 node type. For example, if a (O-)CU-CP 760 failure is detected, all the (O-)CU-CPs, (O-)CU-UPs 762, and (O-)DUs (764, 766) that are connected via E1/F1/Xn interfaces can be notified. However, if a (O-)DU (764 or 766) failure is detected, this notification can be narrowed down to the serving (O-)CU-CP 760 and (O-)CU-UP 762, as well as any other (O-)CU-CP 760 and (O-)CU-UP 762 (in case of EN-DC/NR-DC) that are affected by the failure, but not the other (O-)DUs (764 or 766) that are served by the same (O-)CU-CP 760.
The herein described SB-RAN solution may not rely on the Near-RT RIC as the central node. The role of the central node is left to a new standalone function, with both 3GPP NFs (DU, CU-CP, CU-UP, eNB, etc.) and the O-RAN Near-RT RIC all treated as just NFs that may fail. That is, the SB-RAN based solution does not rely on the Near-RT RIC or any central entity, since the failure detection could be performed by any eligible node and the notification is shared by the RAN-DSF. This standalone function could also be integrated into the CU-CP, the Near-RT RIC, or the AMF.
In the NG-RAN with P2P interfaces solution, only the broadcast notification comes from the central entity (i.e. the stand-by CU-CP). If the mesh of network interfaces were relied upon, then new C-plane interfaces may need to be introduced where none exist. For example, a DU may detect a CU-CP failure, but it does not have interfaces to the rest of the DUs or to unconnected CU-UPs. Hence the broadcast is relayed via a stand-by CU-CP, which has a C-plane interface with every other entity.
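The relay step just described can be sketched as follows; the function signature and entity identifiers are illustrative assumptions, not part of any specification:

```python
# Hypothetical sketch: a DU that detects a CU-CP failure has no direct
# interface to other DUs or to unconnected CU-UPs, so it sends one
# message to a stand-by CU-CP, which then broadcasts over its existing
# C-plane interfaces to every reachable entity except the failed node
# and the original detector.

def relay_broadcast(detector_id, failed_id, standby_links):
    """standby_links: entities the stand-by CU-CP can reach directly."""
    return sorted(e for e in standby_links
                  if e not in (failed_id, detector_id))

links = ["DU-1", "DU-2", "CU-UP-1", "CU-UP-2", "CU-CP-1"]
print(relay_broadcast("DU-1", "CU-CP-1", links))
# -> ['CU-UP-1', 'CU-UP-2', 'DU-2']
```

The detecting DU thus needs only its single existing interface toward the stand-by CU-CP; no new mesh of C-plane interfaces is required.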
The examples herein describe the resilient and robust operation of a gNB with and without SB-RAN considerations as well as in O-RAN environments.
As further shown in
The apparatus 1100 optionally includes a display and/or I/O interface 1108 that may be used to display aspects or a status of the methods described herein (e.g., as one of the methods is being performed or at a subsequent time), or to receive input from a user, such as with a keypad. The apparatus 1100 includes one or more network (N/W) interfaces (I/F(s)) 1110. The N/W I/F(s) 1110 may be wired and/or wireless and communicate over the Internet/other network(s) via any communication technique. The N/W I/F(s) 1110 may comprise one or more transmitters and one or more receivers. The N/W I/F(s) 1110 may comprise standard well-known components such as an amplifier, filter, frequency-converter, (de)modulator, and encoder/decoder circuitries and one or more antennas.
The apparatus 1100 to implement the functionality of control 116 may be UE 110, RAN node 170, network element(s) 190, network element(s) 189, or any of the apparatuses depicted in
Thus, processor 1102 may correspond respectively to processor(s) 120, processor(s) 152, processor(s) 175, and/or processor(s) 172, memory 1104 may correspond respectively to memory(ies) 125, memory(ies) 155, memory(ies) 171, and/or memory(ies) 177, computer program code 1105 may correspond respectively to computer program code 123, module 121-1, module 121-2, and/or computer program code 153, module 156-1, module 156-2, RIC module 150-1, RIC module 150-2, computer program code 173, RIC module 140-1, RIC module 140-2, and/or computer program code 179, and N/W I/F(s) 1110 may correspond respectively to transceiver 130, N/W I/F(s) 161, N/W I/F(s) 180, and/or N/W I/F(s) 174.
Alternatively, apparatus 1100 may not correspond to any of UE 110, RAN node 170, network element(s) 190, or network element(s) 189, as apparatus 1100 may be part of a self-organizing/optimizing network (SON) node, such as in a cloud. The apparatus 1100 may also be distributed throughout the network 100, including within and between apparatus 1100 and any network element (such as a network control element (NCE) 190 and/or network element(s) 189 and/or the RAN node 170 and/or the UE 110).
Interface 1112 enables data communication between the various items of apparatus 1100, as shown in
Apparatus 1100 may function as a 3GPP node (UE, base station e.g. eNB or gNB, network element) or as an O-RAN node (UE, disaggregated eNB or gNB, or network element).
References to a ‘computer’, ‘processor’, etc. should be understood to encompass not only computers having different architectures such as single/multi-processor architectures and sequential or parallel architectures but also specialized circuits such as field-programmable gate arrays (FPGAs), application specific circuits (ASICs), signal processing devices and other processing circuitry. References to computer program, instructions, code etc. should be understood to encompass software for a programmable processor or firmware such as, for example, the programmable content of a hardware device whether instructions for a processor, or configuration settings for a fixed-function device, gate array or programmable logic device etc.
The memory(ies) as described herein may be implemented using any suitable data storage technology, such as semiconductor based memory devices, flash memory, magnetic memory devices and systems, optical memory devices and systems, non-transitory memory, transitory memory, fixed memory and removable memory. The memory(ies) may comprise a database for storing data.
As used herein, the term ‘circuitry’ may refer to the following: (a) hardware circuit implementations, such as implementations in analog and/or digital circuitry, and (b) combinations of circuits and software (and/or firmware), such as (as applicable): (i) a combination of processor(s) or (ii) portions of processor(s)/software including digital signal processor(s), software, and memory(ies) that work together to cause an apparatus to perform various functions, and (c) circuits, such as a microprocessor(s) or a portion of a microprocessor(s), that require software or firmware for operation, even if the software or firmware is not physically present. As a further example, as used herein, the term ‘circuitry’ would also cover an implementation of merely a processor (or multiple processors) or a portion of a processor and its (or their) accompanying software and/or firmware. The term ‘circuitry’ would also cover, for example and if applicable to the particular element, a baseband integrated circuit or applications processor integrated circuit for a mobile phone or a similar integrated circuit in a server, a cellular network device, or another network device.
The following examples 1 to 160 are provided and described, which are based on the example embodiments described herein.
Example 1: An example method includes receiving an indication to create a notification publish space to monitor failure, from a central entity of an access network node, the notification publish space comprising an identifier of the central entity of the access network node being monitored for failure; creating the notification publish space, and sending an acknowledgement of the indication to create the notification publish space to the central entity of the access network node; receiving a subscription to the notification publish space from at least one logical entity of the access network node or of another access network node; receiving a failure notification of a failure of the at least one logical entity being monitored for failure; and notifying the subscribers of the notification publish space concerning the failure of the at least one logical entity.
Example 2: The method of example 1, wherein the failure notification of the failure comprises an identifier of the failed at least one logical entity.
Example 3: The method of any of examples 1 to 2, wherein the notifying the subscribers of the notification publish space concerning the failure of the at least one logical entity comprises transmitting an identifier of the failed at least one logical entity to the subscribers of the notification publish space.
Example 4: The method of any of examples 1 to 3, wherein the at least one logical entity subscribes to the notification publish space in response to having received an identifier of the central entity and associated publish space information.
Example 5: The method of any of examples 1 to 4, further comprising updating a publish space list with information concerning the failure of the at least one logical entity.
Example 6: The method of any of examples 1 to 5, wherein the notification publish space is created with a data storage function.
Example 7: The method of any of examples 1 to 6, further comprising detecting the failure of the at least one logical entity.
Example 8: The method of any of examples 1 to 7, wherein the failed at least one logical entity comprises one or more services of a near real time radio intelligent controller.
Example 9: The method of example 8, wherein the notifying the subscribers of the notification publish space concerning the failure of the at least one logical entity comprises providing information concerning the one or more services of the near real time radio intelligent controller.
Example 10: The method of any of examples 1 to 9, wherein the failed at least one logical entity comprises one or more services of the at least one logical entity.
Example 11: The method of example 10, wherein the notifying the subscribers of the notification publish space concerning the failure of the at least one logical entity comprises providing information concerning the failure of the one or more services of the at least one logical entity, or providing information concerning the at least one logical entity.
Example 12: The method of any of examples 10 to 11, wherein the at least one logical entity comprises a distributed unit, a central unit user plane entity, or a central unit control plane entity.
Example 13: The method of any of examples 1 to 12, further comprising filtering the at least one logical entity prior to notifying the notification publish space concerning the failure of the at least one logical entity, such that a first subset of the at least one logical entity receives the notification of the failure, and a second subset of the at least one logical entity does not receive the notification of the failure due to not being affected with the failure.
Example 14: The method of any of examples 1 to 13, wherein the central entity comprises either a central unit control plane entity or a near real time radio intelligent controller.
Example 15: The method of any of examples 1 to 14, wherein the at least one logical entity, including the failed at least one logical entity, comprises: a central unit control plane entity; a central unit user plane entity; a distributed unit; or a near real time radio intelligent controller.
Example 16: An example method includes transmitting an indication to create a notification publish space to a data storage function, the notification publish space comprising an identifier of a central entity of an access network node being monitored for failure; receiving an acknowledgement of the indication to create the notification publish space from the data storage function; and transmitting the identifier of the central entity and associated publish space information to at least one logical entity of the access network node or of another access network node being monitored for failure; wherein the identifier of the central entity is configured to be used with the at least one logical entity to subscribe to the notification publish space to receive information concerning a failure of the at least one logical entity of the access network node or of the another access network node.
Example 17: The method of example 16, wherein the central entity comprises either a central unit control plane entity or a near real time radio intelligent controller.
Example 18: The method of any of examples 16 to 17, further comprising: detecting the failure of the at least one logical entity of the access network node or of the another access network node; and notifying a data storage function of the failure, the notifying comprising including an identifier of the failed at least one logical entity.
Example 19: The method of example 18, wherein detecting the failure is performed with at least one of: at least one service response timer expiry; at least one transport network failure detection timer expiry; or an artificial intelligence or machine learning method indicating a probability of failure at a given time or time period.
Example 20: The method of any of examples 18 to 19, further comprising filtering the at least one logical entity prior to notifying the data storage function of the failure of the at least one logical entity, such that a first subset of the at least one logical entity receives a failure notification, and a second subset of the at least one logical entity does not receive the failure notification.
Example 21: The method of any of examples 16 to 20, further comprising subscribing to the notification publish space.
Example 22: The method of any of examples 16 to 21, further comprising detecting falsely identified failures.
Example 23: The method of example 22, wherein detecting falsely identified failures comprises at least one of: integrating reports from multiple of the at least one logical entity; or an artificial intelligence or machine learning model.
Example 24: An example method includes receiving an identifier from a central entity of an access network node, the identifier used to identify a notification publish space of a radio access network data storage function; subscribing to the notification publish space of the radio access network data storage function using the identifier of the central entity being monitored for failure, the notification publish space used to provide or receive information concerning a failure of at least one logical entity of the access network node or of another access network node being monitored for failure; and receiving a notification of failure of the at least one logical entity with the notification publish space of the radio access network data storage function, the notification of failure comprising an identifier of the failed at least one logical entity.
Example 25: The method of example 24, further comprising: detecting the failure of the at least one logical entity; and notifying a data storage function of the failure, the notifying comprising including an identifier of the failed at least one logical entity.
Example 26: The method of example 25, wherein detecting the failure is performed with at least one of: at least one service response timer expiry; at least one transport network failure detection timer expiry; or an artificial intelligence or machine learning method indicating a probability of failure at a given time or time period.
Example 27: The method of any of examples 25 to 26, further comprising filtering the at least one logical entity prior to notifying the data storage function of the failure of the at least one logical entity, such that a first subset of the at least one logical entity receives a failure notification, and a second subset of the at least one logical entity does not receive the failure notification due to not being affected with the failure.
Example 28: The method of any of examples 24 to 27, wherein the central entity comprises either a central unit control plane entity or a near real time radio intelligent controller.
Example 29: The method of any of examples 24 to 28, wherein the at least one logical entity comprises: a central unit control plane entity; a central unit user plane entity; a distributed unit; or a near real time radio intelligent controller.
Example 30: An example method includes detecting a failure of at least one logical entity of an access network node being monitored for failure; and transmitting a notification to a radio access network data storage function of the failure of the at least one logical entity, the notification comprising an identifier of the failed at least one logical entity; wherein the notification is configured to be used with the radio access network data storage function to notify subscribers of a notification publish space concerning the failure of the at least one logical entity; wherein the notification publish space is accessible to the subscribers of the notification publish space to be notified of the failure.
Example 31: The method of example 30, wherein detecting the failure comprises utilizing previously collected failure statistics and other information stored within a radio access network data storage function.
Example 32: The method of any of examples 30 to 31, wherein detecting the failure is performed with at least one of: at least one service response timer expiry; at least one transport network failure detection timer expiry; or an artificial intelligence or machine learning method indicating a probability of failure at a given time or time period.
Example 33: The method of any of examples 30 to 32, further comprising detecting falsely identified failures.
Example 34: The method of example 33, wherein detecting falsely identified failures comprises at least one of: integrating reports from multiple of the at least one logical entity; or an artificial intelligence or machine learning model.
Example 35: An example method includes creating an associated node list, the associated node list configured to be used for a notification of failure of at least one logical entity, wherein the notification of failure is performed using at least one point-to-point interface in a radio access network; wherein the associated node list is created and updated based on interface establishment between a plurality of logical entities of the at least one logical entity and/or a node configuration update procedure; and performing at least: receiving a failure notification of the at least one logical entity from a detecting logical entity that detected the failure, the failure notification including an identifier of the failed at least one logical entity, and transmitting the notification of failure of at least one logical entity using the associated node list and the identifier; detecting the failure of the at least one logical entity, and transmitting the notification of failure of at least one logical entity using the associated node list and the identifier; or failing of a central unit control plane entity, wherein: a failure notification of the failing central unit control plane entity is transmitted to a standby entity from a near real time radio intelligent controller having an inactive interface established with the standby entity, or from the at least one logical entity having an inactive interface established with the standby entity, where the standby entity transmits the notification of failure using the associated node list and the identifier to a non-failing at least one logical entity, after the at least one logical entity has detected the failure; or the notification of failure is transmitted from the near real time radio intelligent controller to the non-failing at least one logical entity with use of the associated node list, after the near real time radio intelligent controller has detected the failure or after the at least one logical entity has detected
the failure and has notified the near real time radio intelligent controller; wherein the at least one logical entity, the plurality of logical entities, the central unit control plane entity, the standby entity, and the non-failing at least one logical entity are entities of at least one access network node.
Example 36: The method of example 35, further comprising: receiving, with the central unit control plane entity, an indication of an addition or change related to the interface establishment; and updating, with the central unit control plane entity, the associated node list with the addition or change related to the interface establishment.
Example 37: The method of any of examples 35 to 36, wherein the associated node list is created with the central unit control plane entity.
Example 38: The method of any of examples 35 to 37, further comprising synchronizing the associated node list with the standby entity, wherein the standby entity comprises a standby central unit control plane entity.
Example 39: The method of any of examples 35 to 38, further comprising transmitting the associated node list to the near real time radio intelligent controller for storage.
Example 40: The method of any of examples 35 to 39, wherein the associated node list is transmitted to the near real time radio intelligent controller using an interface node configuration update extended with an information element including the associated node list.
Example 41: The method of any of examples 35 to 40, wherein the associated node list is transmitted to the near real time radio intelligent controller using an associated node list notify procedure.
Example 42: The method of any of examples 35 to 41, wherein the receiving of the failure notification of the at least one logical entity from the detecting logical entity that detected the failure occurs in response to a failure of the near real time radio intelligent controller.
Example 43: The method of any of examples 35 to 42, wherein the detecting logical entity comprises another entity.
Example 44: The method of any of examples 35 to 43, further comprising receiving the failure notification of the at least one logical entity in response to detection of the failure with another entity.
Example 45: The method of example 44, further comprising receiving the failure notification of the at least one logical entity from the another entity.
Example 46: The method of any of examples 35 to 45, further comprising: receiving the failure notification of the at least one logical entity in response to detection of the failure with a distributed unit; and receiving the failure notification of the at least one logical entity from the distributed unit.
Example 47: The method of any of examples 35 to 46, wherein the failing of the central unit control plane entity is detected with another entity.
Example 48: The method of any of examples 35 to 47, wherein the failing of the central unit control plane entity is detected with the near real time radio intelligent controller.
Example 49: The method of example 48, wherein the failing of the central unit control plane entity is detected with the near real time radio intelligent controller via an E2 interface.
Example 50: The method of any of examples 35 to 49, wherein the failing of the central unit control plane entity is detected with a distributed unit.
Example 51: The method of example 50, wherein the failing of the central unit control plane entity is detected with the distributed unit via an F1 interface.
Example 52: The method of any of examples 35 to 51, wherein the failing of the central unit control plane entity is detected with a central unit user plane entity.
Example 53: The method of example 52, wherein the failing of the central unit control plane entity is detected with the central unit user plane entity via an E1 interface.
Example 54: The method of any of examples 35 to 53, wherein the failing of the central unit control plane entity is detected with another central unit control plane entity.
Example 55: The method of example 54, wherein the failing of the central unit control plane entity is detected with the another central unit control plane entity via an Xn interface.
Example 56: The method of any of examples 35 to 55, wherein the failing of the central unit control plane entity is detected with an access and mobility management function.
Example 57: The method of example 56, wherein the failing of the central unit control plane entity is detected with the access and mobility management function via an NG-C interface.
Example 58: The method of any of examples 35 to 57, wherein the failing of the central unit control plane entity is detected with a service management and orchestration node.
Example 59: The method of example 58, wherein the failing of the central unit control plane entity is detected with the service management and orchestration node via an O1 interface.
Example 60: The method of any of examples 35 to 59, wherein in response to the failing of the central unit control plane entity, the near real time radio intelligent controller notifies at least one node within the associated node list that has established an interface with the near real time radio intelligent controller.
Example 61: The method of any of examples 35 to 60, wherein failure detection is performed with at least one of: at least one service response timer expiry; at least one transport network failure detection timer expiry; or an artificial intelligence or machine learning method indicating a probability of failure at a given time or time period.
Example 62: The method of example 61, further comprising: detecting falsely identified failures; wherein detecting falsely identified failures comprises at least one of: integrating reports from multiple of the at least one logical entity; or an artificial intelligence or machine learning model.
Example 63: The method of any of examples 35 to 62, wherein: the failed at least one logical entity comprises a service of the near real time radio intelligent controller; and the notification of failure comprises providing information concerning the service.
Example 64: The method of any of examples 35 to 63, wherein the associated node list is filtered prior to transmission of the notification of failure, such that a first subset of the at least one logical entity receives the notification of failure, and a second subset of the at least one logical entity does not receive the notification of the failure due to not being affected with the failure.
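The timer-based failure detection, false-failure filtering, and node-list filtering of examples 61 to 64 may be illustrated by the following non-limiting sketch. All class, method, and identifier names (e.g. `FailureDetector`, `"cu-cp-1"`) are hypothetical and are not drawn from any specification:

```python
# Non-limiting sketch of examples 61-64: a monitoring entity declares a
# failure only after a service response timer expires and reports from
# multiple logical entities agree, reducing falsely identified failures.
import time

class FailureDetector:
    def __init__(self, response_timeout=2.0, quorum=2):
        self.response_timeout = response_timeout  # service response timer (seconds)
        self.quorum = quorum                      # reports required to confirm a failure
        self.last_response = {}                   # entity_id -> time of last response
        self.reports = {}                         # suspect_id -> set of reporting entities

    def record_response(self, entity_id):
        self.last_response[entity_id] = time.monotonic()

    def timer_expired(self, entity_id, now=None):
        # At least one service response timer expiry (example 61).
        now = now if now is not None else time.monotonic()
        last = self.last_response.get(entity_id)
        return last is None or (now - last) > self.response_timeout

    def report_failure(self, suspect_id, reporter_id):
        self.reports.setdefault(suspect_id, set()).add(reporter_id)

    def confirmed(self, suspect_id):
        # Integrate reports from multiple logical entities (example 62).
        return len(self.reports.get(suspect_id, set())) >= self.quorum

def filter_node_list(associated_nodes, failed_entity, affected):
    # Example 64: only the subset of nodes affected by the failure
    # receives the notification of failure.
    return [n for n in associated_nodes if n in affected and n != failed_entity]
```

An AI/ML model indicating a probability of failure, as recited in example 61, could replace or supplement the quorum check in `confirmed`.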
Example 65: An example method includes establishing an interface with at least one logical entity; and detecting a failure of the at least one logical entity and transmitting a failure notification of the at least one logical entity, or receiving a notification of failure of the at least one logical entity; wherein the notification of failure is received using an associated node list, the associated node list having been created and updated based on interface establishment between a plurality of logical entities of the at least one logical entity and/or a node configuration update procedure; wherein the at least one logical entity and the plurality of logical entities are entities of at least one access network node.
Example 66: The method of example 65, wherein the failure notification is transmitted to a central unit control plane entity.
Example 67: The method of any of examples 65 to 66, wherein the failure notification is transmitted to a standby central unit control plane entity in response to the standby central unit control plane entity existing, and in response to a failure of a central unit control plane entity.
Example 68: The method of any of examples 65 to 67, wherein the failure notification is transmitted to a near real time radio intelligent controller in response to a standby central unit control plane entity not existing, and in response to a failure of a central unit control plane entity.
Example 69: The method of any of examples 65 to 68, wherein the notification of failure is received from a central unit control plane entity.
Example 70: The method of any of examples 65 to 69, wherein the notification of failure is received from a near real time radio intelligent controller.
Example 71: The method of any of examples 65 to 70, wherein the notification of failure is received from a standby central unit control plane entity.
Example 72: The method of example 71, wherein the standby central unit control plane entity is coupled with an inactive interface connection to a near real time radio intelligent controller, where the active central unit control plane entity has a connection with a near real time radio intelligent controller.
Example 73: The method of any of examples 71 to 72, wherein the standby central unit control plane entity is coupled with an inactive interface connection to the at least one logical entity, where the at least one logical entity has a connection with an active central unit control plane entity.
Example 74: The method of example 73, wherein the at least one logical entity comprises a central unit user plane entity.
Example 75: The method of example 74, wherein the inactive interface connection comprises an E1 interface.
Example 76: The method of any of examples 73 to 75, wherein the at least one logical entity comprises a distributed unit.
Example 77: The method of example 76, wherein the inactive interface connection comprises an F1 interface.
Example 78: An example method includes receiving an associated node list from a central unit control plane entity, the associated node list having been created based on interface establishment between a plurality of logical entities of at least one logical entity and/or a node configuration update procedure; storing the associated node list, wherein the associated node list is configured to be used for a notification of failure of the at least one logical entity; detecting the failure of the at least one logical entity; and performing either: transmitting a failure notification to a standby central unit control plane entity, wherein the standby central unit control plane entity transmits the notification of failure using the associated node list, and transmitting the failure notification to the central unit control plane entity in response to the failure of the at least one logical entity being attributed to a distributed unit or a central unit user plane entity; or transmitting the notification of failure to a set of the at least one logical entity using the associated node list; wherein the associated node list is stored with a near real time radio intelligent controller; wherein the at least one logical entity, the plurality of logical entities, the central unit control plane entity, and the standby central unit control plane entity are entities of at least one access network node.
Example 79: The method of example 78, wherein the standby central unit control plane entity is coupled to the near real time radio intelligent controller with an inactive interface connection.
Example 80: The method of any of examples 78 to 79, further comprising receiving an inactive interface setup request from the standby central unit control plane entity.
Example 81: The method of any of examples 78 to 80, further comprising transmitting a response to an inactive interface setup request from the near real time radio intelligent controller.
Example 82: The method of any of examples 78 to 81, wherein failure detection is performed with at least one of: at least one service response timer expiry; at least one transport network failure detection timer expiry; or an artificial intelligence or machine learning method indicating a probability of failure at a given time or time period.
Example 83: The method of example 82, further comprising: detecting falsely identified failures; wherein detecting falsely identified failures comprises at least one of: integrating reports from multiple of the at least one logical entity; or an artificial intelligence or machine learning model.
Example 84: An example method includes synchronizing an associated node list between a central unit control plane entity and a standby central unit control plane entity, the associated node list configured to be used for transmission of a notification of failure; storing the associated node list; wherein the associated node list is created based on interface establishment between a plurality of logical entities of at least one logical entity; receiving a failure notification from a near real time radio intelligent controller or the at least one logical entity; and transmitting the notification of failure to the at least one logical entity using the associated node list; wherein the at least one logical entity, the plurality of logical entities, the central unit control plane entity, and the standby central unit control plane entity are entities of at least one access network node.
Example 85: The method of example 84, further comprising establishing at least one inactive interface with the at least one logical entity having an established interface with the central unit control plane entity.
Example 86: The method of example 85, further comprising receiving a setup response message in response to having completed the establishing of the at least one inactive interface with the at least one logical entity.
Example 87: The method of any of examples 84 to 86, further comprising transmitting an inactive interface setup request from the standby central unit control plane entity to the near real time radio intelligent controller.
Example 88: The method of any of examples 84 to 87, further comprising receiving a response to an inactive interface setup request from the near real time radio intelligent controller.
Example 89: The method of any of examples 84 to 88, wherein the failure notification is received from the near real time radio intelligent controller.
Example 90: The method of any of examples 84 to 89, wherein the failure notification is received from the at least one logical entity.
Example 91: An example method includes detecting a failure of a first network element with a second network element; notifying the failure of the first network element with the second network element to a central entity; notifying the failure of the first network element with the central entity to nodes within an associated node list; wherein the associated node list is created and updated based on interface establishment between a plurality of logical entities and/or a node configuration update procedure; wherein the first network element, the second network element, the central entity, and the plurality of logical entities are entities of at least one access network node.
Example 92: The method of example 91, wherein the first network element comprises a near real time radio intelligent controller, a central unit control plane entity, a central unit user plane entity, or a distributed unit.
Example 93: The method of any of examples 91 to 92, wherein the second network element that detects the failure of the first network element comprises a near real time radio intelligent controller, a central unit control plane entity, a central unit user plane entity, a distributed unit, another central unit control plane entity, an access and mobility management function, or a service management and orchestration node.
Example 94: The method of any of examples 91 to 93, wherein the associated node list comprises a near real time radio intelligent controller, a central unit control plane entity, a central unit user plane entity, a distributed unit, another central unit control plane entity, an access and mobility management function, and/or a service management and orchestration node.
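The detect-report-notify flow of examples 91 to 94 may be illustrated by the following non-limiting sketch, in which a central entity maintains the associated node list based on interface establishment and the node configuration update procedure, and fans out a reported failure. All names (e.g. `CentralEntity`, `"du-1"`) are hypothetical:

```python
# Non-limiting sketch of examples 91-94: an associated node list maintained
# by a central entity (e.g. a near real time radio intelligent controller)
# and used to fan out a failure notification reported by a second network
# element that detected the failure of a first network element.
class CentralEntity:
    def __init__(self):
        self.associated_nodes = []  # created/updated on interface establishment

    def on_interface_established(self, node_id):
        if node_id not in self.associated_nodes:
            self.associated_nodes.append(node_id)

    def on_node_configuration_update(self, node_id, removed=False):
        # The node configuration update procedure may add or remove entries.
        if removed:
            self.associated_nodes = [n for n in self.associated_nodes if n != node_id]
        elif node_id not in self.associated_nodes:
            self.associated_nodes.append(node_id)

    def on_failure_report(self, failed_id, send):
        # Notify every node within the associated node list except the
        # failed element itself; `send` abstracts the transport interface.
        for node in self.associated_nodes:
            if node != failed_id:
                send(node, {"event": "failure", "failed": failed_id})
```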
Example 95: An example method includes creating a notification publish space to monitor failure, the notification publish space comprising an identifier of a central entity of an access network node being monitored for failure; wherein at least one logical entity of the access network node or of another access network node being monitored for failure subscribes to the notification publish space; detecting a failure of the central entity or of the at least one logical entity; transmitting a failure notification of the failure of the central entity or the at least one logical entity; and notifying the subscribers of the notification publish space concerning the failure of the central entity or the at least one logical entity.
Example 96: The method of example 95, wherein the failure notification of the failure comprises an identifier of the failed central entity or the identifier of the failed at least one logical entity.
Example 97: The method of any of examples 95 to 96, wherein the notifying the subscribers of the notification publish space concerning the failure of the central entity or the at least one logical entity comprises transmitting an identifier of the failed central entity or the failed at least one logical entity to the subscribers of the notification publish space.
Example 98: The method of any of examples 95 to 97, wherein the at least one logical entity subscribes to the notification publish space in response to having received an identifier of the central entity and associated publish space information.
Example 99: The method of any of examples 95 to 98, further comprising updating a publish space list with information concerning the failure of the central entity or the at least one logical entity.
Example 100: The method of any of examples 95 to 99, wherein the detecting of the failure of the central entity or of the at least one logical entity is performed with any entity of the at least one logical entity.
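The publish/subscribe mechanism of examples 95 to 100 may be illustrated by the following non-limiting sketch of a data storage function holding a notification publish space keyed by the identifier of the monitored central entity. All names (e.g. `DataStorageFunction`, `"cu-cp-1"`) are hypothetical:

```python
# Non-limiting sketch of examples 95-100: a notification publish space is
# created at a (RAN) data storage function using the identifier of the
# central entity being monitored; logical entities subscribe to the space
# and are notified of a failure, the notification carrying the identifier
# of the failed entity (examples 96-97).
class DataStorageFunction:
    def __init__(self):
        self.spaces = {}  # central_entity_id -> list of subscriber callbacks

    def create_publish_space(self, central_entity_id):
        # Create the space and acknowledge back to the central entity.
        self.spaces.setdefault(central_entity_id, [])
        return "ack"

    def subscribe(self, central_entity_id, callback):
        # A logical entity subscribes after receiving the identifier of the
        # central entity and the associated publish space information.
        self.spaces[central_entity_id].append(callback)

    def notify_failure(self, central_entity_id, failed_id):
        # Fan the failure notification out to all subscribers of the space.
        for cb in self.spaces.get(central_entity_id, []):
            cb({"failed": failed_id})
```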
Example 101: An example apparatus includes at least one processor; and at least one memory including computer program code; wherein the at least one memory and the computer program code are configured to, with the at least one processor, cause the apparatus at least to: receive an indication to create a notification publish space to monitor failure, from a central entity of an access network node, the notification publish space comprising an identifier of the central entity of the access network node being monitored for failure; create the notification publish space, and send an acknowledgement of the indication to create the notification publish space to the central entity of the access network node; receive a subscription to the notification publish space from at least one logical entity of the access network node or of another access network node; receive a failure notification of a failure of the at least one logical entity being monitored for failure; and notify the subscribers of the notification publish space concerning the failure of the at least one logical entity.
Example 102: An example apparatus includes at least one processor; and at least one memory including computer program code; wherein the at least one memory and the computer program code are configured to, with the at least one processor, cause the apparatus at least to: transmit an indication to create a notification publish space to a data storage function, the notification publish space comprising an identifier of a central entity of an access network node being monitored for failure; receive an acknowledgement of the indication to create the notification publish space from the data storage function; and transmit the identifier of the central entity and associated publish space information to at least one logical entity of the access network node or of another access network node being monitored for failure; wherein the identifier of the central entity is configured to be used with the at least one logical entity to subscribe to the notification publish space to receive information concerning a failure of the at least one logical entity of the access network node or of the another access network node.
Example 103: An example apparatus includes at least one processor; and at least one memory including computer program code; wherein the at least one memory and the computer program code are configured to, with the at least one processor, cause the apparatus at least to: receive an identifier from a central entity of an access network node, the identifier used to identify a notification publish space of a radio access network data storage function; subscribe to the notification publish space of the radio access network data storage function using the identifier of the central entity being monitored for failure, the notification publish space used to provide or receive information concerning a failure of at least one logical entity of the access network node or of another access network node being monitored for failure; and receive a notification of failure of the at least one logical entity with the notification publish space of the radio access network data storage function, the notification of failure comprising an identifier of the failed at least one logical entity.
Example 104: An example apparatus includes at least one processor; and at least one memory including computer program code; wherein the at least one memory and the computer program code are configured to, with the at least one processor, cause the apparatus at least to: detect a failure of at least one logical entity of an access network node being monitored for failure; and transmit a notification to a radio access network data storage function of the failure of the at least one logical entity, the notification comprising an identifier of the failed at least one logical entity; wherein the notification is configured to be used with the radio access network data storage function to notify subscribers of a notification publish space concerning the failure of the at least one logical entity; wherein the notification publish space is accessible to the subscribers of the notification publish space to be notified of the failure.
Example 105: An example apparatus includes at least one processor; and at least one memory including computer program code; wherein the at least one memory and the computer program code are configured to, with the at least one processor, cause the apparatus at least to: create an associated node list, the associated node list configured to be used for a notification of failure of at least one logical entity, wherein the notification of failure is performed using at least one point-to-point interface in a radio access network; wherein the associated node list is created and updated based on interface establishment between a plurality of logical entities of the at least one logical entity and/or a node configuration update procedure; and perform at least: receive a failure notification of the at least one logical entity from a detecting logical entity that detected the failure, the failure notification including an identifier of the failed at least one logical entity, and transmit the notification of failure of at least one logical entity using the associated node list and the identifier; detect the failure of the at least one logical entity, and transmit the notification of failure of at least one logical entity using the associated node list and the identifier; or failing of a central unit control plane entity, wherein: a failure notification of the failing central unit control plane entity is transmitted to a standby entity from a near real time radio intelligent controller having an inactive interface established with the standby entity, or from the at least one logical entity having an inactive interface established with the standby entity, where the standby entity transmits the notification of failure using the associated node list and the identifier to a non-failing at least one logical entity, after the at least one logical entity has detected the failure; or the notification of failure is transmitted from the near real time radio intelligent controller to the non-failing at least one logical entity with use of the associated node list, after the near real time radio intelligent controller has detected the failure or after the at least one logical entity has detected the failure and has notified the near real time radio intelligent controller; wherein the at least one logical entity, the plurality of logical entities, the central unit control plane entity, the standby entity, and the non-failing at least one logical entity are entities of at least one access network node.
Example 106: An example apparatus includes at least one processor; and at least one memory including computer program code; wherein the at least one memory and the computer program code are configured to, with the at least one processor, cause the apparatus at least to: establish an interface with at least one logical entity; and detect a failure of the at least one logical entity and transmit a failure notification of the at least one logical entity, or receive a notification of failure of the at least one logical entity; wherein the notification of failure is received using an associated node list, the associated node list having been created and updated based on interface establishment between a plurality of logical entities of the at least one logical entity and/or a node configuration update procedure; wherein the at least one logical entity and the plurality of logical entities are entities of at least one access network node.
Example 107: An example apparatus includes at least one processor; and at least one memory including computer program code; wherein the at least one memory and the computer program code are configured to, with the at least one processor, cause the apparatus at least to: receive an associated node list from a central unit control plane entity, the associated node list having been created based on interface establishment between a plurality of logical entities of at least one logical entity and/or a node configuration update procedure; store the associated node list, wherein the associated node list is configured to be used for a notification of failure of the at least one logical entity; detect the failure of the at least one logical entity; and perform either: transmit a failure notification to a standby central unit control plane entity, wherein the standby central unit control plane entity transmits the notification of failure using the associated node list, and transmit the failure notification to the central unit control plane entity in response to the failure of the at least one logical entity being attributed to a distributed unit or a central unit user plane entity; or transmit the notification of failure to a set of the at least one logical entity using the associated node list; wherein the associated node list is stored with a near real time radio intelligent controller; wherein the at least one logical entity, the plurality of logical entities, the central unit control plane entity, and the standby central unit control plane entity are entities of at least one access network node.
Example 108: An example apparatus includes at least one processor; and at least one memory including computer program code; wherein the at least one memory and the computer program code are configured to, with the at least one processor, cause the apparatus at least to: synchronize an associated node list between a central unit control plane entity and a standby central unit control plane entity, the associated node list configured to be used for transmission of a notification of failure; store the associated node list; wherein the associated node list is created based on interface establishment between a plurality of logical entities of at least one logical entity; receive a failure notification from a near real time radio intelligent controller or the at least one logical entity; and transmit the notification of failure to the at least one logical entity using the associated node list; wherein the at least one logical entity, the plurality of logical entities, the central unit control plane entity, and the standby central unit control plane entity are entities of at least one access network node.
Example 109: An example apparatus includes at least one processor; and at least one memory including computer program code; wherein the at least one memory and the computer program code are configured to, with the at least one processor, cause the apparatus at least to: detect a failure of a first network element with a second network element; notify the failure of the first network element with the second network element to a central entity; notify the failure of the first network element with the central entity to nodes within an associated node list; wherein the associated node list is created and updated based on interface establishment between a plurality of logical entities and/or a node configuration update procedure; wherein the first network element, the second network element, the central entity, and the plurality of logical entities are entities of at least one access network node.
Example 110: An example apparatus includes at least one processor; and at least one memory including computer program code; wherein the at least one memory and the computer program code are configured to, with the at least one processor, cause the apparatus at least to: create a notification publish space to monitor failure, the notification publish space comprising an identifier of a central entity of an access network node being monitored for failure; wherein at least one logical entity of the access network node or of another access network node being monitored for failure subscribes to the notification publish space; detect a failure of the central entity or of the at least one logical entity; transmit a failure notification of the failure of the central entity or the at least one logical entity; and notify the subscribers of the notification publish space concerning the failure of the central entity or the at least one logical entity.
Example 111: An example apparatus includes means for receiving an indication to create a notification publish space to monitor failure, from a central entity of an access network node, the notification publish space comprising an identifier of the central entity of the access network node being monitored for failure; means for creating the notification publish space, and sending an acknowledgement of the indication to create the notification publish space to the central entity of the access network node; means for receiving a subscription to the notification publish space from at least one logical entity of the access network node or of another access network node; means for receiving a failure notification of a failure of the at least one logical entity being monitored for failure; and means for notifying the subscribers of the notification publish space concerning the failure of the at least one logical entity.
Example 112: An example apparatus includes means for transmitting an indication to create a notification publish space to a data storage function, the notification publish space comprising an identifier of a central entity of an access network node being monitored for failure; means for receiving an acknowledgement of the indication to create the notification publish space from the data storage function; and means for transmitting the identifier of the central entity and associated publish space information to at least one logical entity of the access network node or of another access network node being monitored for failure; wherein the identifier of the central entity is configured to be used with the at least one logical entity to subscribe to the notification publish space to receive information concerning a failure of the at least one logical entity of the access network node or of the another access network node.
Example 113: An example apparatus includes means for receiving an identifier from a central entity of an access network node, the identifier used to identify a notification publish space of a radio access network data storage function; means for subscribing to the notification publish space of the radio access network data storage function using the identifier of the central entity being monitored for failure, the notification publish space used to provide or receive information concerning a failure of at least one logical entity of the access network node or of another access network node being monitored for failure; and means for receiving a notification of failure of the at least one logical entity with the notification publish space of the radio access network data storage function, the notification of failure comprising an identifier of the failed at least one logical entity.
Example 114: An example apparatus includes means for detecting a failure of at least one logical entity of an access network node being monitored for failure; and means for transmitting a notification to a radio access network data storage function of the failure of the at least one logical entity, the notification comprising an identifier of the failed at least one logical entity; wherein the notification is configured to be used with the radio access network data storage function to notify subscribers of a notification publish space concerning the failure of the at least one logical entity; wherein the notification publish space is accessible to the subscribers of the notification publish space to be notified of the failure.
Example 115: An example apparatus includes means for creating an associated node list, the associated node list configured to be used for a notification of failure of at least one logical entity, wherein the notification of failure is performed using at least one point-to-point interface in a radio access network; wherein the associated node list is created and updated based on interface establishment between a plurality of logical entities of the at least one logical entity and/or a node configuration update procedure; and means for performing at least: receiving a failure notification of the at least one logical entity from a detecting logical entity that detected the failure, the failure notification including an identifier of the failed at least one logical entity, and transmitting the notification of failure of at least one logical entity using the associated node list and the identifier; detecting the failure of the at least one logical entity, and transmitting the notification of failure of at least one logical entity using the associated node list and the identifier; or failing of a central unit control plane entity, wherein: a failure notification of the failing central unit control plane entity is transmitted to a standby entity from a near real time radio intelligent controller having an inactive interface established with the standby entity, or from the at least one logical entity having an inactive interface established with the standby entity, where the standby entity transmits the notification of failure using the associated node list and the identifier to a non-failing at least one logical entity, after the at least one logical entity has detected the failure; or the notification of failure is transmitted from the near real time radio intelligent controller to the non-failing at least one logical entity with use of the associated node list, after the near real time radio intelligent controller has detected the failure or after the at least one logical entity has detected the failure and has notified the near real time radio intelligent controller; wherein the at least one logical entity, the plurality of logical entities, the central unit control plane entity, the standby entity, and the non-failing at least one logical entity are entities of at least one access network node.
Example 116: An example apparatus includes means for establishing an interface with at least one logical entity; and means for detecting a failure of the at least one logical entity and transmitting a failure notification of the at least one logical entity, or receiving a notification of failure of the at least one logical entity; wherein the notification of failure is received using an associated node list, the associated node list having been created and updated based on interface establishment between a plurality of logical entities of the at least one logical entity and/or a node configuration update procedure; wherein the at least one logical entity and the plurality of logical entities are entities of at least one access network node.
Example 117: An example apparatus includes means for receiving an associated node list from a central unit control plane entity, the associated node list having been created based on interface establishment between a plurality of logical entities of at least one logical entity and/or a node configuration update procedure; means for storing the associated node list, wherein the associated node list is configured to be used for a notification of failure of the at least one logical entity; means for detecting the failure of the at least one logical entity; and means for performing either: transmitting a failure notification to a standby central unit control plane entity, wherein the standby central unit control plane entity transmits the notification of failure using the associated node list, and transmitting the failure notification to the central unit control plane entity in response to the failure of the at least one logical entity being attributed to a distributed unit or a central unit user plane entity; or transmitting the notification of failure to a set of the at least one logical entity using the associated node list; wherein the associated node list is stored with a near real time radio intelligent controller; wherein the at least one logical entity, the plurality of logical entities, the central unit control plane entity, and the standby central unit control plane entity are entities of at least one access network node.
Example 118: An example apparatus includes means for synchronizing an associated node list between a central unit control plane entity and a standby central unit control plane entity, the associated node list configured to be used for transmission of a notification of failure; means for storing the associated node list; wherein the associated node list is created based on interface establishment between a plurality of logical entities of at least one logical entity; means for receiving a failure notification from a near real time radio intelligent controller or the at least one logical entity; and means for transmitting the notification of failure to the at least one logical entity using the associated node list; wherein the at least one logical entity, the plurality of logical entities, the central unit control plane entity, and the standby central unit control plane entity are entities of at least one access network node.
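The synchronization and takeover behavior recited in Example 118 can be sketched as follows. This is a minimal in-memory sketch, not an implementation of any specified interface: the `CuCpEntity` class, the `sync_to` method, and the callback-based model of point-to-point notification are illustrative assumptions.

```python
# Hypothetical sketch of Example 118: an active central unit control
# plane (CU-CP) entity synchronizes its associated node list to a
# standby CU-CP so that, on a failure report received from the near
# real time radio intelligent controller or a logical entity, the
# standby can fan out the notification without rebuilding the list.
# All names and the callback model are illustrative assumptions.

class CuCpEntity:
    def __init__(self, name):
        self.name = name
        # node identifier -> callable used to notify that node
        self.associated_nodes = {}

    def sync_to(self, standby):
        # Keep the standby's copy of the associated node list
        # consistent with the active entity's view.
        standby.associated_nodes = dict(self.associated_nodes)

    def on_failure_notification(self, failed_id):
        # Notify every non-failing node on the synchronized list,
        # carrying the failed entity's identifier.
        for node_id, notify in self.associated_nodes.items():
            if node_id != failed_id:
                notify(failed_id)
```

In this sketch the standby entity is simply a second `CuCpEntity`; after `sync_to`, a failure report delivered to the standby reaches the same set of nodes the active entity would have notified.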
Example 119: An example apparatus includes means for detecting a failure of a first network element with a second network element; means for notifying the failure of the first network element with the second network element to a central entity; means for notifying the failure of the first network element with the central entity to nodes within an associated node list; wherein the associated node list is created and updated based on interface establishment between a plurality of logical entities and/or a node configuration update procedure; wherein the first network element, the second network element, the central entity, and the plurality of logical entities are entities of at least one access network node.
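As a rough illustration of Examples 116 and 119, the following sketch models a central entity that builds and updates the associated node list from interface-establishment and node-configuration-update events, then fans out a failure reported by a detecting element. The method names and the dictionary-of-callbacks model are assumptions made for illustration; the examples define behavior, not a concrete API.

```python
# Illustrative model of the associated-node-list mechanism: the list
# is created and updated based on interface establishment and/or a
# node configuration update procedure, then used to notify the
# remaining nodes over point-to-point interfaces when a failure is
# reported. Names are hypothetical.

class CentralEntity:
    def __init__(self):
        self.associated_nodes = {}  # node_id -> notify callback

    def on_interface_established(self, node_id, notify):
        # Interface establishment adds the node to the list.
        self.associated_nodes[node_id] = notify

    def on_node_configuration_update(self, node_id, notify=None):
        # A configuration update may refresh or remove a node.
        if notify is None:
            self.associated_nodes.pop(node_id, None)
        else:
            self.associated_nodes[node_id] = notify

    def on_failure_report(self, failed_id):
        # A second network element has detected the failure and
        # reported it; forward the notification, carrying the failed
        # entity's identifier, to every other node on the list.
        for node_id, notify in self.associated_nodes.items():
            if node_id != failed_id:
                notify(failed_id)
```

The failed node is excluded from the fan-out, so the notification reaches only the non-failing entities on the list.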
Example 120: An example apparatus includes means for creating a notification publish space to monitor failure, the notification publish space comprising an identifier of a central entity of an access network node being monitored for failure; wherein at least one logical entity of the access network node or of another access network node being monitored for failure subscribes to the notification publish space; means for detecting a failure of the central entity or of the at least one logical entity; means for transmitting a failure notification of the failure of the central entity or the at least one logical entity; and means for notifying the subscribers of the notification publish space concerning the failure of the central entity or the at least one logical entity.
Example 121: An example non-transitory program storage device readable by a machine, tangibly embodying a program of instructions executable with the machine for performing operations is provided/described, the operations comprising: receiving an indication to create a notification publish space to monitor failure, from a central entity of an access network node, the notification publish space comprising an identifier of the central entity of the access network node being monitored for failure; creating the notification publish space, and sending an acknowledgement of the indication to create the notification publish space to the central entity of the access network node; receiving a subscription to the notification publish space from at least one logical entity of the access network node or of another access network node; receiving a failure notification of a failure of the at least one logical entity being monitored for failure; and notifying the subscribers of the notification publish space concerning the failure of the at least one logical entity.
Example 122: An example non-transitory program storage device readable by a machine, tangibly embodying a program of instructions executable with the machine for performing operations is provided/described, the operations comprising: transmitting an indication to create a notification publish space to a data storage function, the notification publish space comprising an identifier of a central entity of an access network node being monitored for failure; receiving an acknowledgement of the indication to create the notification publish space from the data storage function; and transmitting the identifier of the central entity and associated publish space information to at least one logical entity of the access network node or of another access network node being monitored for failure; wherein the identifier of the central entity is configured to be used with the at least one logical entity to subscribe to the notification publish space to receive information concerning a failure of the at least one logical entity of the access network node or of the another access network node.
Example 123: An example non-transitory program storage device readable by a machine, tangibly embodying a program of instructions executable with the machine for performing operations is provided/described, the operations comprising: receiving an identifier from a central entity of an access network node, the identifier used to identify a notification publish space of a radio access network data storage function; subscribing to the notification publish space of the radio access network data storage function using the identifier of the central entity being monitored for failure, the notification publish space used to provide or receive information concerning a failure of at least one logical entity of the access network node or of another access network node being monitored for failure; and receiving a notification of failure of the at least one logical entity with the notification publish space of the radio access network data storage function, the notification of failure comprising an identifier of the failed at least one logical entity.
Example 124: An example non-transitory program storage device readable by a machine, tangibly embodying a program of instructions executable with the machine for performing operations is provided/described, the operations comprising: detecting a failure of at least one logical entity of an access network node being monitored for failure; and transmitting a notification to a radio access network data storage function of the failure of the at least one logical entity, the notification comprising an identifier of the failed at least one logical entity; wherein the notification is configured to be used with the radio access network data storage function to notify subscribers of a notification publish space concerning the failure of the at least one logical entity; wherein the notification publish space is accessible to the subscribers of the notification publish space to be notified of the failure.
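The publish/subscribe flow recited across the central entity, the logical entities, and the radio access network data storage function in Examples 120 through 124 can be sketched end to end as follows. The `RanDataStorageFunction` class and its method names are illustrative assumptions; the examples specify behavior, not an API, and the acknowledgement is modeled simply as a return value.

```python
# Hypothetical end-to-end model of the notification publish space:
# the central entity indicates to the data storage function that a
# space keyed by its own identifier should be created, logical
# entities subscribe using that identifier, and a detecting entity
# publishes a failure notification that is fanned out to all
# subscribers. Names are illustrative.

class RanDataStorageFunction:
    def __init__(self):
        # central entity identifier -> list of subscriber callbacks
        self.publish_spaces = {}

    def create_publish_space(self, central_entity_id):
        # Indication received from the central entity; the
        # acknowledgement is modeled as the return value.
        self.publish_spaces.setdefault(central_entity_id, [])
        return "ack"

    def subscribe(self, central_entity_id, callback):
        # A logical entity subscribes using the identifier it
        # received from the central entity.
        self.publish_spaces[central_entity_id].append(callback)

    def publish_failure(self, central_entity_id, failed_entity_id):
        # Failure notification from a detecting entity: notify every
        # subscriber with the failed entity's identifier.
        for notify in self.publish_spaces[central_entity_id]:
            notify(failed_entity_id)
```

Note that, unlike the associated-node-list fan-out, every subscriber of the publish space is notified; filtering out the failed entity, if desired, is left to the subscribers.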
Example 125: An example non-transitory program storage device readable by a machine, tangibly embodying a program of instructions executable with the machine for performing operations is provided/described, the operations comprising: creating an associated node list, the associated node list configured to be used for a notification of failure of at least one logical entity, wherein the notification of failure is performed using at least one point-to-point interface in a radio access network; wherein the associated node list is created and updated based on interface establishment between a plurality of logical entities of the at least one logical entity and/or a node configuration update procedure; and performing at least: receiving a failure notification of the at least one logical entity from a detecting logical entity that detected the failure, the failure notification including an identifier of the failed at least one logical entity, and transmitting the notification of failure of the at least one logical entity using the associated node list and the identifier; detecting the failure of the at least one logical entity, and transmitting the notification of failure of the at least one logical entity using the associated node list and the identifier; or failing of a central unit control plane entity, wherein: a failure notification of the failing central unit control plane entity is transmitted to a standby entity from a near real time radio intelligent controller having an inactive interface established with the standby entity, or from the at least one logical entity having an inactive interface established with the standby entity, where the standby entity transmits the notification of failure using the associated node list and the identifier to a non-failing at least one logical entity, after the at least one logical entity has detected the failure; or the notification of failure is transmitted from the near real time radio intelligent controller to the non-failing at least one logical entity with use of the associated node list, after the near real time radio intelligent controller has detected the failure or after the at least one logical entity has detected the failure and has notified the near real time radio intelligent controller; wherein the at least one logical entity, the plurality of logical entities, the central unit control plane entity, the standby entity, and the non-failing at least one logical entity are entities of at least one access network node.
Example 126: An example non-transitory program storage device readable by a machine, tangibly embodying a program of instructions executable with the machine for performing operations is provided/described, the operations comprising: establishing an interface with at least one logical entity; and detecting a failure of the at least one logical entity and transmitting a failure notification of the at least one logical entity, or receiving a notification of failure of the at least one logical entity; wherein the notification of failure is received using an associated node list, the associated node list having been created and updated based on interface establishment between a plurality of logical entities of the at least one logical entity and/or a node configuration update procedure; wherein the at least one logical entity and the plurality of logical entities are entities of at least one access network node.
Example 127: An example non-transitory program storage device readable by a machine, tangibly embodying a program of instructions executable with the machine for performing operations is provided/described, the operations comprising: receiving an associated node list from a central unit control plane entity, the associated node list having been created based on interface establishment between a plurality of logical entities of at least one logical entity and/or a node configuration update procedure; storing the associated node list, wherein the associated node list is configured to be used for a notification of failure of the at least one logical entity; detecting the failure of the at least one logical entity; and performing either: transmitting a failure notification to a standby central unit control plane entity, wherein the standby central unit control plane entity transmits the notification of failure using the associated node list, and transmitting the failure notification to the central unit control plane entity in response to the failure of the at least one logical entity being attributed to a distributed unit or a central unit user plane entity; or transmitting the notification of failure to a set of the at least one logical entity using the associated node list; wherein the associated node list is stored with a near real time radio intelligent controller; wherein the at least one logical entity, the plurality of logical entities, the central unit control plane entity, and the standby central unit control plane entity are entities of at least one access network node.
Example 128: An example non-transitory program storage device readable by a machine, tangibly embodying a program of instructions executable with the machine for performing operations is provided/described, the operations comprising: synchronizing an associated node list between a central unit control plane entity and a standby central unit control plane entity, the associated node list configured to be used for transmission of a notification of failure; storing the associated node list; wherein the associated node list is created based on interface establishment between a plurality of logical entities of at least one logical entity; receiving a failure notification from a near real time radio intelligent controller or the at least one logical entity; and transmitting the notification of failure to the at least one logical entity using the associated node list; wherein the at least one logical entity, the plurality of logical entities, the central unit control plane entity, and the standby central unit control plane entity are entities of at least one access network node.
Example 129: An example non-transitory program storage device readable by a machine, tangibly embodying a program of instructions executable with the machine for performing operations is provided/described, the operations comprising: detecting a failure of a first network element with a second network element; notifying the failure of the first network element with the second network element to a central entity; notifying the failure of the first network element with the central entity to nodes within an associated node list; wherein the associated node list is created and updated based on interface establishment between a plurality of logical entities and/or a node configuration update procedure; wherein the first network element, the second network element, the central entity, and the plurality of logical entities are entities of at least one access network node.
Example 130: An example non-transitory program storage device readable by a machine, tangibly embodying a program of instructions executable with the machine for performing operations is provided/described, the operations comprising: creating a notification publish space to monitor failure, the notification publish space comprising an identifier of a central entity of an access network node being monitored for failure; wherein at least one logical entity of the access network node or of another access network node being monitored for failure subscribes to the notification publish space; detecting a failure of the central entity or of the at least one logical entity; transmitting a failure notification of the failure of the central entity or the at least one logical entity; and notifying the subscribers of the notification publish space concerning the failure of the central entity or the at least one logical entity.
Example 131: An apparatus comprising circuitry configured to perform the method of any of examples 1 to 15.
Example 132: An apparatus comprising circuitry configured to perform the method of any of examples 16 to 23.
Example 133: An apparatus comprising circuitry configured to perform the method of any of examples 24 to 29.
Example 134: An apparatus comprising circuitry configured to perform the method of any of examples 30 to 34.
Example 135: An apparatus comprising circuitry configured to perform the method of any of examples 35 to 64.
Example 136: An apparatus comprising circuitry configured to perform the method of any of examples 65 to 77.
Example 137: An apparatus comprising circuitry configured to perform the method of any of examples 78 to 83.
Example 138: An apparatus comprising circuitry configured to perform the method of any of examples 84 to 90.
Example 139: An apparatus comprising circuitry configured to perform the method of any of examples 91 to 94.
Example 140: An apparatus comprising circuitry configured to perform the method of any of examples 95 to 100.
Example 141: An apparatus comprising means for performing the method of any of examples 1 to 15.
Example 142: An apparatus comprising means for performing the method of any of examples 16 to 23.
Example 143: An apparatus comprising means for performing the method of any of examples 24 to 29.
Example 144: An apparatus comprising means for performing the method of any of examples 30 to 34.
Example 145: An apparatus comprising means for performing the method of any of examples 35 to 64.
Example 146: An apparatus comprising means for performing the method of any of examples 65 to 77.
Example 147: An apparatus comprising means for performing the method of any of examples 78 to 83.
Example 148: An apparatus comprising means for performing the method of any of examples 84 to 90.
Example 149: An apparatus comprising means for performing the method of any of examples 91 to 94.
Example 150: An apparatus comprising means for performing the method of any of examples 95 to 100.
Example 151: An apparatus comprising at least one processor; and at least one memory including computer program code; wherein the at least one memory and the computer program code are configured to, with the at least one processor, cause the apparatus at least to perform the method of any of examples 1 to 15.
Example 152: An apparatus comprising at least one processor; and at least one memory including computer program code; wherein the at least one memory and the computer program code are configured to, with the at least one processor, cause the apparatus at least to perform the method of any of examples 16 to 23.
Example 153: An apparatus comprising at least one processor; and at least one memory including computer program code; wherein the at least one memory and the computer program code are configured to, with the at least one processor, cause the apparatus at least to perform the method of any of examples 24 to 29.
Example 154: An apparatus comprising at least one processor; and at least one memory including computer program code; wherein the at least one memory and the computer program code are configured to, with the at least one processor, cause the apparatus at least to perform the method of any of examples 30 to 34.
Example 155: An apparatus comprising at least one processor; and at least one memory including computer program code; wherein the at least one memory and the computer program code are configured to, with the at least one processor, cause the apparatus at least to perform the method of any of examples 35 to 64.
Example 156: An apparatus comprising at least one processor; and at least one memory including computer program code; wherein the at least one memory and the computer program code are configured to, with the at least one processor, cause the apparatus at least to perform the method of any of examples 65 to 77.
Example 157: An apparatus comprising at least one processor; and at least one memory including computer program code; wherein the at least one memory and the computer program code are configured to, with the at least one processor, cause the apparatus at least to perform the method of any of examples 78 to 83.
Example 158: An apparatus comprising at least one processor; and at least one memory including computer program code; wherein the at least one memory and the computer program code are configured to, with the at least one processor, cause the apparatus at least to perform the method of any of examples 84 to 90.
Example 159: An apparatus comprising at least one processor; and at least one memory including computer program code; wherein the at least one memory and the computer program code are configured to, with the at least one processor, cause the apparatus at least to perform the method of any of examples 91 to 94.
Example 160: An apparatus comprising at least one processor; and at least one memory including computer program code; wherein the at least one memory and the computer program code are configured to, with the at least one processor, cause the apparatus at least to perform the method of any of examples 95 to 100.
It should be understood that the foregoing description is only illustrative. Various alternatives and modifications may be devised by those skilled in the art. For example, features recited in the various dependent claims could be combined with each other in any suitable combination(s). In addition, features from different embodiments described above could be selectively combined into a new embodiment.
Accordingly, this description is intended to embrace all such alternatives, modifications and variances which fall within the scope of the appended claims.
When a reference number as used herein is of the form y-x, this means that the referred-to item may be an instantiation of (or type of) reference number y. For example, E2 node 434-2 and E2 node 434-3 in
In the figures, in the case of use for an apparatus, lines represent couplings and arrows represent directional couplings or the direction of data flow; in the case of use for a method or signaling diagram, lines represent couplings and arrows represent transitions or the direction of data flow.
The following acronyms and abbreviations that may be found in the specification and/or the drawing figures are defined as follows (different acronyms may be appended using a dash/hyphen, e.g., “-”, or with parentheses, e.g., “( )”):
Number | Date | Country | Kind |
---|---|---|---|
202111038917 | Aug 2021 | IN | national |
Filing Document | Filing Date | Country | Kind |
---|---|---|---|
PCT/EP2022/073423 | 8/23/2022 | WO |