The disclosed methods and apparatus relate generally to wireless communication networks and, in particular, to dynamically switching between split-option architectures of wireless networks based on real-time and non-real-time measurements and inputs, wherein the split-option architectures are switched to optimize user equipment (UE) experiences and network performance.
The wireless industry has experienced tremendous growth in recent years. Wireless technology is rapidly improving, and faster and more numerous broadband communication networks have been installed around the globe. These networks have now become key components of a worldwide communication system that connects people and businesses at speeds and on a scale unimaginable just a couple of decades ago. The rapid growth of wireless communication is a result of increasing demand for more bandwidth and services. This rapid growth is in many ways supported by standards. For example, 4G LTE has been widely deployed over the past years, and the next generation system, 5G NR (New Radio) is now being deployed. In these wireless systems, multiple mobile devices are served voice services, data services, and many other services over wireless connections so they may remain mobile while still connected.
It is commonplace today for communications to occur over a wireless network in which user equipment (UE) connects to the network via a wireless transceiver, such as an eNodeB, gNodeB, access point or base station, hereafter referred to generically as a BS/AP (base station/access point). In this disclosure, the terms eNodeB and gNodeB are shortened to “eNB” and “gNB,” respectively, and are used generically to refer to the following: a single-sector eNB/gNB; a dual-sector eNB/gNB, with each sector acting independently; and a node that supports both eNB and gNB functions. The UE may be a wireless cellular telephone, tablet, computer, Internet-of-Things (IoT) device, or other such wireless equipment. The BS/AP may be an eNodeB (“eNB”) as defined in 3GPP specifications for long term evolution (LTE) systems (sometimes referred to as 4th Generation (4G) systems) or a gNodeB (“gNB”) as defined in 3GPP specifications for new radio (NR) systems (sometimes referred to as 5G systems). Furthermore, the BS/AP may be a single-sector node or a dual-sector node in which each of the two sectors acts independently. In 4G and 5G systems, there are times when a relatively large number of UEs may attempt to access the network through the same “cell.”
In many cases, there is a mix of UEs, some requiring high throughput with data arriving in bursts and others requiring minimal throughput but having frequent data transmit and receive requirements. The term “BS/AP” is used broadly herein to include base stations and access points, including at least an evolved NodeB (eNB) of an LTE network or a gNodeB (gNB) of a 5G network, a cellular base station (BS), a Citizens Broadband Radio Service Device (CBSD) (which may be an LTE or 5G device), a Wi-Fi access node, a Local Area Network (LAN) access point, and a Wide Area Network (WAN) access point, and should also be understood to include other network receiving hubs that provide access to a network for a plurality of wireless transceivers within range of the BS/AP. Typically, the BS/APs are used as transceiver hubs, whereas the UEs are used for point-to-point communication and are not used as hubs. Therefore, the BS/APs transmit at a relatively higher power than the UEs.
As shown in
As described in more detail below with reference to
RAN deployments can be implemented and deployed in different ways using different architectures to meet system demands and to satisfy user demands and experiences. The 5G RAN has a number of architecture options, such as how to split RAN functions, where to place those functions, and what transport is used to interconnect them. The BS/AP 103 can be deployed as a monolithic unit deployed at a cell site, as in cellular networks, or split between the CU, DU, RU and RRUs. The CU-DU split is typically a higher layer split (HLS), which is more tolerant to delay. The DU-RU interface is a lower-layer split (LLS), which is more latency-sensitive and demanding on bandwidth. CUs, DUs, RUs, and RRUs may be deployed at locations such as cell sites (including towers, rooftops and associated cabinets and shelters), transport aggregation sites and “edge sites” (for example, central offices or local exchange sites).
The type of RAN architecture to use and the placement of the CU, DU, RU and RRU nodes within the RAN network depends upon the needs of the RAN operator and its users. Trade-offs are not clear cut, and different architectures have advantages and disadvantages in terms of latency, jitter, and bandwidth between the RAN and the UEs it services. Usage patterns, device capabilities, operating costs, RF strategies, and existing RF network footprints and capabilities influence network architecture decisions. RAN functional split-options (splitting the functions of the CU and DU) provide alternative RAN network architectures and alternative RAN network deployments.
In some embodiments, the gNB comprises a CU and at least one DU connected to the CU. A CU connected to multiple DUs can support multiple gNBs. The functional split architecture lets a 5G network utilize different distributions of protocol stacks between CUs and DUs depending on mid-haul availability and network design. In some embodiments, the CU is a logical node that includes the gNB functions, such as transfer of user data, mobility control, RAN sharing (MORAN), positioning, and session management, except for functions that are exclusively allocated to the DU. In some embodiments, the CU controls the operation of several DUs over a mid-haul interface. As described in more detail below, the CU can, in some embodiments, be co-located on the same site as the DU, or located at a distance away from the DU.
In typical, or “normal,” deployments, the base station or AP functionality is either concentrated in a specific place or split in a defined way throughout the RAN (Radio Access Network). Various split-options are well-defined for the 5G RAN architectures. Currently there are eight (8) different and distinct functional split-options specified in the standards. These functional split-options include split-options 1, 2, 3, 4, 5, 6, 7.1, 7.2, 7.2x (wherein the “x” stands for “a” or “b”) and 8. As is described in greater detail below, the present split-option switching methods and apparatus primarily focus on split options 2, 6, and 7.2x, but these are exemplary, and the presently disclosed split-option switching methods and apparatus are not limited to just the functional split-options described and shown in the figures. The split of functionality and physical location is primarily among three components or nodes: the RU (Radio Unit), the DU, and the CU. Which functions of the RAN are performed by each of these three nodes is defined by the different split-options set forth in the functional split-option specifications.
As noted above, RAN network logical architectures, such as the RAN 100 of
As shown in
Typical RAN network deployments choose which functional split-option to implement and deploy their networks accordingly. The architecture that is deployed is therefore static, and disadvantageously does not adapt to UE and user needs and experiences. Therefore, there is a need for a dynamic split-option architecture wherein split-option deployments are dynamically switched from one split-option to another to meet UE needs and user experiences. Typical RAN architectures do not have this dynamic capability. The present split-option switching methods and apparatus provide such flexibility, including the ability to move instances of the CU and DU to better facilitate the end users' experiences.
The disclosed method and apparatus, in accordance with one or more various embodiments, is described with reference to the following figures. The drawings are provided for purposes of illustration only and merely depict examples of some embodiments of the disclosed method and apparatus. These drawings are provided to facilitate the reader's understanding of the disclosed method and apparatus. They should not be considered to limit the breadth, scope, or applicability of the claimed invention. It should be noted that for clarity and ease of illustration these drawings are not necessarily made to scale.
The figures are not intended to be exhaustive or to limit the claimed invention to the precise form disclosed. It should be understood that the disclosed method and apparatus can be practiced with modification and alteration, and that the invention should be limited only by the claims and the equivalents thereof.
Referring now to
A second architectural variant 400 of an AP architecture is shown in
The decision to use either the first architectural variant 300 of
Architecture variants 300 and 400 of
The different sub-variants of the different architectural variants 300 and 400 implement the various Split Option architectures. For example, the first sub-variants implement Option 2. The third sub-variants implement either Option 6 or Option 7.2, depending upon where the split falls in the L1 and L2 communication protocol layering. If there is a split between the DU 212′ and the lower RU 210″ at the interface of L2 and L1, then this architecture implements Option 6. If the split between the DU 212′ and the lower RU 210″ is inside L1 (referred to as High L1 and Low L1), then the architecture implements Split Option 7.2.
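By way of illustration only, the mapping just described between the location of the protocol split and the resulting split option can be expressed as a simple lookup. The following Python sketch is not part of any standard or of the disclosed implementation; the function name and the sub-variant/split-point labels are hypothetical.

```python
from typing import Optional

def split_option_for(sub_variant: str, du_ru_split: Optional[str] = None) -> str:
    """Illustrative mapping from the architectural sub-variants described above
    to the split option they implement (labels are hypothetical)."""
    if sub_variant == "first":
        return "Option 2"
    if sub_variant == "third":
        # The third sub-variants implement Option 6 or Option 7.2 depending on
        # where the DU/RU split falls in the protocol layering.
        if du_ru_split == "L2/L1 boundary":
            return "Option 6"
        if du_ru_split == "inside L1 (High L1 / Low L1)":
            return "Option 7.2"
    raise ValueError("sub-variant/split point not covered by this sketch")

print(split_option_for("third", "inside L1 (High L1 / Low L1)"))  # -> "Option 7.2"
```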
As described in the Background section above, prior art RAN network deployments typically use static versions of the functional split-options shown in
In contrast, the present split-option switching methods and apparatus described herein are dynamic and, at a very high level, change depending on both the network's needs and the users' needs. The architectural sub-variant of
Depending on a given situation, the functionality, and the Quality of Experience to be delivered, as shown in the sub-variants of the architectural variant 300 shown in
There is potentially a third architectural variant which is neither shown in the figures nor described in greater detail herein. This variant is similar to that shown in
In one example, a given network deployment 100, 200, will support one of the two architecture variants shown in
Table 1, set forth below, shows the different architectures and Split Options that may be used in some embodiments of the RAN of
TABLE 1 sets forth the triggers that set off a split-option handover from a first sub-variant to a second sub-variant when the trigger occurs. Triggers can cause the UE to transition to an Option 2, Option 6, or Option 7.2 split-option if necessary to make the system more reliable, reduce Hand-In (HI) signaling, reduce Hand-In rates, etc. If the network is experiencing an unusual number of Handovers (HOs) at any point in time, the network is made more reliable by switching from one split option to another. Table 1 sets forth what triggers a change from one split-option sub-variant architecture to another. For example, and referring now to Table 1, split Option 2 is triggered using split-option switching if a reduction of HO signaling is necessitated and if the Hand-In rate is high. Another trigger is the enablement of Dual Connectivity, in which a single CU is hosted centrally and connected to multiple DUs. The central CU plus a central DU and the bottom RU (see, for example, RU 210″ of
In some embodiments, this can be conceived as a network handover (HO), wherein a UE moves from a RU/DU combination of nodes (one sub-variant) to a RU/DU/CU combination of nodes (another sub-variant). The context is handed over to the RU/DU/CU combination. In these embodiments, the context that was maintained in the Core Network 114 (where the CU entity for the RU/DU combination resided) moves to the RU/DU/CU combination. The context thereafter resides there, in the RU/DU/CU combination, the PFC load is reduced, and the UE's needs are met.
Moving instances of the CU and DU to better facilitate the end users' experiences is both novel and nonobvious in light of the prior art, and provides tremendous advantages over the prior art static deployments. As noted above, the prior art network deployments do not have this capability as they are static and fixed after deployment. In accordance with the disclosed split-option switching methods and apparatus, the instances of the CU and DU nodes may be moved within the RAN to optimize the quality of experience of the UEs.
Typically, in order to improve network performance and maximize the quality of experience at the UEs, an important goal is to put the functionality (RU/DU/CU) closest to the device (UE) that it is going to be interacting with a majority of the time. The user (UE) is closest to the Radio Unit (RU). The context of the DU and CU is brought closer to the Edge Node 120 if the flows that a particular UE demands require it; as a result, everything is moved over. Ultimately, the network is dynamically adapted to the needs and experiences of the user (UE). Performance concerns include latency and load balancing (for example, whether there are too many users) across the Edge Node 120 and the other nodes. It is possible to perform load balancing between two different architectures; such load balancing may be performed not because of a difference between the two architectures, but simply to achieve a more appropriate balance of load between them.
Load balancing—in this context, the term “load balancing” refers to balancing the number of UEs that are handled by any one particular architecture, and more specifically, by any one particular sub-variant of the variant architectures shown in
The main characteristic and functionality addressed by the present split option switching methods and apparatus is to dynamically switch a 5G access network to support operation across different possible network architecture splits. The following problems are additionally addressed by the present split option switching methods and apparatus:
The main problem that the present methods and apparatus solve is hosting UEs in the optimal gNB split option, which includes associating each UE with a specific split option. The two main criteria for deciding which split option architecture to use are: (a) network jitter and latency, which can affect a specific split option mode of operation of a gNB; and (b) resource constraints that prompt the necessity for sharing of user profiles across different options provided by the access network. In some embodiments, the criteria may also include “performance” criteria, meaning throughput and latency. For example, assume an object is in the line-of-sight between a selected UE and the selected UE's associated remote RU (the remote RU currently being used by the selected UE), and further assume that the UE is streaming video. The interference caused by the object in the line-of-sight between the selected UE and the associated remote RU can be reduced or completely accounted for by moving the functionality performed by the currently used remote RU to another remote RU that has a clearer line-of-sight to the selected UE, thereby improving the throughput to the selected UE. Performance can be measured as throughput, user throughput, and quality of experience (latency). In some embodiments, latency, jitter, and throughput are the main criteria used to measure the quality experienced by the users and their respective UEs.
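By way of illustration only, the following Python sketch shows how latency, jitter, and throughput measurements of the kind described above might be combined to choose a target split option. The threshold values, parameter names, and selection rule are hypothetical assumptions rather than the actual decision logic; the only grounding is that the lower-layer splits are the most demanding on transport latency while Option 2 is the most delay-tolerant.

```python
# Hypothetical thresholds; a real deployment would derive these from the
# mid-haul/fronthaul transport characteristics and the service requirements.
OPTION_72_MAX_LATENCY_MS = 0.25   # lower-layer splits are the most latency-sensitive
OPTION_6_MAX_LATENCY_MS = 2.0

def choose_split_option(transport_latency_ms: float, transport_jitter_ms: float,
                        ue_throughput_mbps: float, required_mbps: float) -> str:
    """Illustrative selection: prefer the deepest split the transport can carry,
    falling back toward the delay-tolerant Option 2 split otherwise."""
    if transport_latency_ms <= OPTION_72_MAX_LATENCY_MS and transport_jitter_ms < 0.1:
        return "Option 7.2"
    if transport_latency_ms <= OPTION_6_MAX_LATENCY_MS and ue_throughput_mbps >= required_mbps:
        return "Option 6"
    return "Option 2"

print(choose_split_option(1.5, 0.3, 40.0, 25.0))  # -> "Option 6"
```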
Description with Technical Advantages and Benefits Provided by the Present Split Option Switching Methods and Apparatus
A given CU integrates with both the Option 2 and the Option 6 split options. The Option 2 split is at a “higher” part of the link layer, and the Option 6 split is at a lower part of the link layer. The RUs may comprise any suitable radio units; for example, the RUs may comprise CAT-A (outdoor) or CAT-B (indoor) CBSDs.
Heterogeneous deployments that include both indoor and outdoor APs are accommodated by the present split option switching methods and apparatus. In this context, the term “heterogeneous deployments” refers to deployments wherein the radio footprint or coverage of the deployed network has an umbrella-type shape. These include deployments wherein the outdoor CAT-A and indoor CAT-B antennas have considerable overlapping radio coverage, with heavy overlap of the outdoor and indoor radio footprint coverage at the edges of the deployments.
In some embodiments, registration with the SAS is performed based on the required channel allocation for the option 2 and option 6 split options—associated with the RU node (RRU or integrated RU+DU). Therefore, in these embodiments, registration with the SAS is the same for both split option 2 and split option 6.
This disclosure describes an exemplary approach of switching a UE context between split option 2 (integrated DU and RU) and split option 6 (DU and Remote-RU (RRU)). Both split option 2 and split option 6 can comprise either CAT-A or CAT-B CBSDs.
The present Split Option Switching methods and apparatus also account for resource constraints that prompt the necessity for sharing of user profiles across different options provided by the access network. These resources cannot already be occupied or used by other UEs; the network must find space for UEs that want to occupy these network resources.
In some embodiments, a Network Adaptor Tool determines when to perform split option switching between a first split option and a second split option. The Network Adaptor Tool provides the information necessary to trigger when to consider performing the split option switching. The fundamental paradigm of shifting either a gNB operation or a user profile is necessitated by an adaptor tool that tracks the required resources. The Network Adaptor Tool works in a client-and-server architecture wherein peers reside in every gNB of a cluster and in the Edge Node 120 associated with the cluster, which comprises the network deployment at a customer site. In some embodiments, this could be either an indoor or an outdoor customer site.
In some embodiments, the Network Adaptor Tool inputs the following five measurements to determine when to make a change in the split-option switching currently being used by a deployed network. These five measurements and network attributes used in some embodiments for consideration by the Network Adaptor Tool are set forth in detail in the following paragraphs:
Once deployed, the architecture of the network remains the same; it does not change. When a given UE enters the network, choices are made dynamically as to which split option to use for the UE, and these choices can change. Whether to put the UE in split option 2, split option 6, or split option 7.2 depends on the factors considered by, and the decisions made by, the Network Adaptor Tool. This will depend on, among other things, the campus deployment. The network architecture is fixed once the network is deployed; UEs are switched between split options based upon triggers, but this does not mean that the deployment architecture changes in any way. The network asks the UE to point to split option 2, split option 6, or split option 7.2, depending upon the connectivity and the cell that is needed for that user and that UE. Any given point within the network at which a UE may be located is covered by cells that may implement split option 2, split option 6, and split option 7.2; all three have a footprint in the same location that covers service for a selected UE. Based on the service, the delay requirement, and the performance criteria observed in real time in the network, the UE is asked to connect to a cell that is operating in Option 2, Option 6, or Option 7.2. The UE is pointed to, or asked to switch to, the different cells that implement the different options.
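A minimal sketch of the cell-redirection idea described above, assuming hypothetical class and function names: overlapping cells at the UE's location each operate a different split option, and the network points the UE at the cell operating the desired option.

```python
from typing import List, Optional

class Cell:
    def __init__(self, cell_id: str, split_option: str) -> None:
        self.cell_id = cell_id
        self.split_option = split_option  # "Option 2", "Option 6" or "Option 7.2"

def redirect_ue(serving_option: str, target_option: str,
                overlapping_cells: List[Cell]) -> Optional[Cell]:
    """Ask the UE to (re)connect to a co-located cell that operates the target
    split option; return None if no overlapping cell offers that option."""
    if serving_option == target_option:
        return None  # nothing to do: the UE is already on the desired option
    for cell in overlapping_cells:
        if cell.split_option == target_option:
            return cell  # the network would signal the UE to move to this cell
    return None
```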
The accompanying diagrams (
It should be noted that no prior art deployments use a Network Adaptor Tool to make split-option-switching decisions. It should also be noted that other measurements/criteria could (and may) be used to make the determination of when to perform split-option-switching. The measurements/criteria described herein are exemplary only. No matter what measurements and criteria are used, the decision of whether or not to perform split-option switching is dynamic, based on the output/decisions of the Network Adaptor Tool described above.
Split-Option Switching vs. Normal Handovers
Split-option switching is a term used to refer to moving the UE context across (DU+RU)/(DU+RRU) combinations integrated into a common CU. It is a handover triggered by the QoS and the enterprise wireline network conditions; essentially, it is a network-assisted Handover (HO). In contrast, in a “Normal Handover,” the UE keeps informing the base stations about changes in RF conditions, which causes the UE to transition from one base station to another. The HO is triggered by RF conditions experienced by the UE; this is the classical mobile-assisted HO. The benefit of implementing the Split-Option Switching HO feature is adaptation to enterprise wireline variabilities by absorbing the fluctuations in the radio layers.
Without implementation of the presently disclosed Split Option Switching methods and apparatus in a wireless network, the trigger of a Handover is based on the typical reading of RF signals from the UE alone. With Split-Option switching, however, not only are the normal UE RF readings taken into account, but also the network conditions, the service needs, and the different split options that are available to the UEs in a particular RF footprint. Split Option switching and Normal Handover are not mutually exclusive; they can both happen and can co-exist. When any of the criteria for split option switching are met, the split option switching Handover takes place to switch either to a different base station or possibly to a different sub-variant architecture within the same base station.
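The co-existence of the two handover types described above can be summarized in the following illustrative sketch; the boolean inputs are assumed stand-ins for the actual measured conditions.

```python
def handover_decision(rf_trigger: bool, qos_degraded: bool,
                      wireline_congested: bool, better_option_available: bool) -> str:
    """Illustrative co-existence of the two mechanisms: a classical
    mobile-assisted HO fires on RF conditions alone, while split-option
    switching (a network-assisted HO) also weighs network-side state."""
    if (qos_degraded or wireline_congested) and better_option_available:
        return "split-option switching handover"
    if rf_trigger:
        return "normal (mobile-assisted) handover"
    return "no handover"
```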
The present split-option switching methods and apparatus provide further details on near-real-time possibilities that could be addressed during the ongoing lifetime of a specific set of split options, considering the central AP example (centrally integrated AP version) described above. The description mainly considers the co-existence of option 6 and option 2; option 7.2 can also be supported, provided the necessary network infrastructure is in place to meet the high-speed requirements of split option 7.2. These near-real-time possibilities are listed below:
Split option switching can assist in traffic management in an indoor deployment and help increase the cell radius. Multi split-option APs can be intelligently placed in the wireless network. A UE moving away from the cell center towards the edge can potentially be picked up by the same CU-DU's remote RU, thus assuring continuum of traffic flow to the greatest extent possible.
Split option switching can assist in Handover while moving from indoor to outdoor and vice versa. The remote RRU of the indoor edge AP can maintain and support an intermediate switch step from which handover is initiated to the outdoor AP. That is, the switch occurs both from the integrated RU to the remote RU of the indoor AP, and from the remote RU of the indoor AP to the outdoor AP as the UE moves away, so the remote RU acts as a type of staging AP.
Intra-CU HO between a low-power CBSD and a high-power CBSD should consider reverse-link BLER and SR erasures, which could necessitate the split-option switching (network-assisted handovers). This network architecture also supports mobility and idle mode load balancing.
Solution 1—CA/DC: CA stands for “Carrier Aggregation” and DC stands for “Dual Connectivity.” A specific use case of CA (Carrier Aggregation) is described, in which carrier aggregation is adopted as an intermediate stage of a specific action triggered by the split option switching. If there is an imbalance, then in order to ensure a seamless transition of a network-assisted handover (HO), time is needed to ensure that the core elements are set up for the HO. Similar to a mobile-assisted handover, a network-assisted handover also requires the backhaul to be set up to hand over to a different base station. This requires certain context creation in different services associated with the PSE. Because this requires time to set up, one solution is to revert to Carrier Aggregation during that small period of time, because the network knows where the UE is travelling to. So, if Carrier Aggregation is performed for that particular target during that small interval of time, the beam is realized. Also, by using carrier aggregation, it is ensured that no packet loss occurs. This remains autonomous to the AP and need not be driven by external management entities like SON/RIC (“RIC” is an acronym that stands for “RAN Intelligent Controller”).
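A sketch of the interim Carrier Aggregation step described above is given below. The classes, fields, and sequence shown are hypothetical placeholders; the point is only the order of operations: add the known target as a secondary carrier, allow the backhaul/PSE context to be created, and only then complete the network-assisted handover so that no packets are lost.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class HandoverContext:
    """Hypothetical placeholder for the core/PSE context created for the target."""
    ready: bool = False

@dataclass
class UeConnection:
    serving_cell: str
    secondary_cells: List[str] = field(default_factory=list)

def switch_with_interim_ca(ue: UeConnection, target_cell: str,
                           ctx: HandoverContext) -> None:
    """Use CA toward the known target while the HO context is being prepared."""
    ue.secondary_cells.append(target_cell)   # interim CA toward the known target
    # ... core-network / PSE context creation happens here and takes time ...
    if ctx.ready:                            # once the target context is in place
        ue.serving_cell = target_cell        # complete the network-assisted HO
        ue.secondary_cells.remove(target_cell)
        # the old serving cell can now be released (or retained as a secondary)
```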
Whether to perform CA or DC is a decision that the AP can make on its own; that decision is not driven by external entities, and nothing needs to inform the AP of what decision to make.
Solution 2—Mobility Load Balancing (“MLB”): The solution also introduces the concept of Mobility Load Balancing as a hybrid service with the centralized piece on the Edge. The centralized aspect of the hybrid service can be viewed as an xApp in the RIC, and AI aspects can be introduced. The RIC, or “RAN Intelligent Controller,” is essentially a conceptual entity developed by the open RAN standards to monitor, or act as a “parent” to, multiple base stations from the point of view of real-time and non-real-time network management. It is a cloud-based network architecture, so the near-real-time applications that manage the network are referred to as “xApps” and the non-real-time applications that manage the network are referred to as “rApps.”
This load balancing paradigm does adopt a concept of centralized service that is implemented as a part of an existing load balancing feature, but steers away from the rest of the aspects.
The software design is “agnostic” to the underlying air-interface technology because it relies on parameters that are common to both; that is, the MLB described herein is “agnostic” to both 4G and 5G. The hierarchical architecture of the MLB described here can potentially be viewed as one of the benefits that helps in mobility load balancing.
Private network deployment: The network will have either “under” or “over” provisioning of radio nodes. Focusing on indoor deployments, this scheme can be used to develop multiple solutions for managing the link budget that may not be immediately visible during planning and commissioning. Assuming either an over-provisioned or an under-provisioned scenario in the deployment, RF planning is used to implement the network deployment. The present split option switching methods and apparatus can support RF planning. In one aspect, the support of RF planning is implemented by intelligently replacing the multi-integrated solutions in the network to help manage the link budget.
Link budget is related to power in the network: it is the radio footprint that a given base station covers, which is related to the transmit capability of that base station. If other base stations overlap with the given base station, they can potentially interfere with it and therefore reduce its “link budget.” This is affected by the number of base stations in a given deployment and their transmit power capability.
Neutral Host Deployment:
In an indoor enterprise deployment that hosts a MOCN architecture with an MNO, this scheme will potentially help in seamless handover from Enterprise to MNO networks at the edges. This is due to the two aspects described above for when the UE transitions from the indoor network to the outdoor network. Potentially, in some embodiments, the edge AP in the enterprise could host a remote high-powered RRU and switch all the split option 2 UEs on the edge AP (integrated RU) to split option 6, and hence support better handover success as the cell radius of the edge AP would increase. The MOCN is the MNO. This covers situations where the MNO coverage does not abut the Enterprise Network coverage, and addresses both walk-in and walk-out of an enterprise neutral host for a UE.
The term “Content” means applications of the Enterprise. The router 510 routes the output of the PSE (Edge Node) to the various software applications. There are multiple Content servers that can potentially be connected with the PSE 506, and the same is true of the Switch 504. A Router, such as the Router 510, can be used instead of a Switch 504. The “Cloud Server Orchestrator” (CSO), at deployment, is nominally a cloud deployment comprising a many-cloud instance for a given customer. A cloud instance may be accessible in that network in a different cloud than the Content. “Content” comprises one or more software applications for that particular enterprise. An Enterprise deployment is essentially set up as a cloud-managed instance for every customer. Every customer gets a login, and each customer gets to set up his/her own network. The compute resource can be located anywhere.
The flowchart 600 of
The entire paradigm of the flowchart 600 shown in
Block 602—a first box 602 of the flowchart 600 of
This ranking function is performed in block 604, in which a higher rank is given to the RU associated with the selected AP as compared with the RRUs of neighboring APs. Block 604 sets up idle-state reselection parameters to give higher priority to the AP's own RU than to neighboring RUs. This block 604 allows UEs to attach via both integrated and remote RUs (RRUs).
RF measurements are then used by the UEs to determine which APs to camp on. Once they are camped on a selected AP, and at the Block 606 of the flowchart 600, the UE tracks DL and UL RF conditions, throughput and packet latency incurred.
Then, in a timely fashion (periodically), the following are checked in accordance with the flowchart 600: (a) whether UL issues are observed; (b) whether a Delay Critical bearer is active; and (c) the load, with Load Balancing performed if necessary.
The top three blocks 602, 604, and 606 are a constantly running service on the BS/AP to track these aspects. Then, periodically, the following items shown in
At Block 610 (Path 1), uplink (UL) issues are periodically checked or “observed.” The software knows whether or not the UE is attached to an RRU. If the UE is attached to an RRU, it is a simple matter to track the RU with which the UE is associated for any UL issues. This can include low latency, or a low-latency bearer.
If block 610 indicates that UL issues exist, the flow moves from the block 610 to the block 616 to check to see if a handover (HO) is in progress. If an HO is in progress at the block 616, then we do nothing (see block 618). At a block 620, the software (as set forth in the flowchart 600) checks to see if the UE is attached to its own low power RU. If it is, a split option switching function is performed at a block 622 to split switch from the low power RU to a high power RU. If the UE is attached to its own RU, and is having UL issues, then Split Option Switching is performed at the block 622.
If the UE is not attached to its own RU, then the flowchart keeps checking to see if UL issues are observed. This is because a Handover will eventually occur.
The next situation that is periodically checked (identified as “Path 2” in the flowchart 600) is whether a Delay Critical bearer is active, at the block 612. Delay critical traffic is data traffic. The flowchart 600 then checks, at a block 614, whether BLER is increasing; if it is not, then nothing is done (block 618). If BLER is increasing, then the flowchart 600 checks whether an HO is in progress at the block 616, and proceeds as set forth above.
As noted above, a third aspect that is periodically checked is Load Balancing.
So there are these three paths in the flowchart 600 of
These are the three Paths that the disclosed method periodically checks in order to determine whether or not to perform split-option switching.
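The three paths described above can be summarized in the following Python sketch of the periodic check. The state fields and return strings are hypothetical placeholders for the quantities tracked in blocks 602 through 622; they are illustrative only.

```python
from dataclasses import dataclass

@dataclass
class UeState:
    """Hypothetical per-UE state distilled from the always-running service
    (blocks 602-606) that tracks RF conditions, throughput and latency."""
    ul_issues_observed: bool = False            # Path 1 (block 610)
    delay_critical_bearer_active: bool = False  # Path 2 (block 612)
    bler_increasing: bool = False               # block 614
    handover_in_progress: bool = False          # block 616
    attached_to_own_low_power_ru: bool = False  # block 620
    cell_overloaded: bool = False               # Path 3 (load balancing)

def periodic_split_option_check(ue: UeState) -> str:
    """Illustrative encoding of the three periodic paths of flowchart 600."""
    if ue.ul_issues_observed:                                   # Path 1
        return _maybe_switch(ue)
    if ue.delay_critical_bearer_active and ue.bler_increasing:  # Path 2
        return _maybe_switch(ue)
    if ue.cell_overloaded:                                      # Path 3
        return "run load-balancing routine"
    return "do nothing"                                         # block 618

def _maybe_switch(ue: UeState) -> str:
    if ue.handover_in_progress:
        return "do nothing"                 # block 618: do not interrupt an ongoing HO
    if ue.attached_to_own_low_power_ru:
        return "split-option switch: low-power RU to high-power RU"  # block 622
    return "keep monitoring"                # a normal HO will eventually occur
```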
HO Procedure from Low Power RU to High Power RU Involving Carrier Aggregation
Load Balancing is required when there is a sufficiently large imbalance of UEs camped on a selected AP. If everything were within bounds (x, y) for a selected AP, Load Balancing would be unnecessary. The need for Load Balancing has been further emphasized due to (1) the time-varying nature of UE mobility; and (2) the potential incidental unevenness of traffic in a network. The following paragraphs describe load balancing techniques and methods that can be used to implement the load balancing functions of the present Split Option Switching Methods and Apparatus. The description of these embodiments is exemplary only and does not limit the scope of the present methods and apparatus.
The term “Load Balancing” as used in the present disclosure, and as applied and described herein with reference to the present Split Option Switching methods and apparatus, differs from the classical sense and use of that term, which is described as an example in a related Load Balancing application (cited and incorporated by reference hereinabove, namely, US utility application Ser. No. 17/549,603 filed Dec. 13, 2021, entitled “Load Balancing for Enterprise Deployments” (ATTY. DOCKET NO. CEL-050-PAP)). As described in the related '603 application, for example, UEs report a plurality of RF measurements related to the base station (AP) they are currently camped on and, in some cases, related to neighboring APs. Assessments are made regarding the load of UEs borne by selected APs, and a decision is made (i.e., triggered) as to whether to hand over the UE to a neighboring AP.
In contrast, in the present split option switching methods and apparatus, the load on a selected AP is simply used as another criterion that triggers a split option switch, in order to better balance the loads of UEs handled by the different split option sub-variants in the variant architectures. Load conditions are used as one transition criterion. In the '603 application above, and more traditionally, Load Balancing refers to shedding (or transitioning) UEs to neighboring cells, typically with some overlap in coverage. In contrast, in the presently disclosed split option switching methods and apparatus, UEs may remain within the same cells and within the same APs (but be switched to different split option sub-variants within those APs). Here, the Load Balancing, when it occurs, keeps the UE within the same cell; it may switch the UE to different split options within a single AP or to other APs within the same cell.
Enterprise Load Balancing
Active (or “Connected state”) and Idle state load balancing not only help in alleviating the impact of unpredictable scenarios in which sustained traffic is present across multiple users (by ensuring QoE on a per-user basis to the extent possible), but also help in maintaining accessibility. The need for load balancing algorithms has been further emphasized due to the time-varying nature of UE mobility and the potential for incidental unevenness of data traffic in a deployed network. The presence of load balancing does not preclude the necessity for admission and bearer control to be set up across all access points.
The rate requirements across all users accessing a specific set of applications should be less than or equal to the cumulative set of resources set aside for rate-sensitive traffic across all the APs in that enterprise network. As noted above, the accessibility KPI (Key Performance Indicator) should remain unaffected either after or during the transition of load (translating to understanding the maximum rise over thermal that the AP can withstand due to this network-initiated feature). The rate of load balancing should also be considered and changed as per predictive analytics across the network.
There is a limit to the number of users in a cell that can be given a certain rate of transmission. There is admission control and there is traffic control, but there is also a limit to the number of UEs allowed at a certain rate within a cell. There is an upper bound on the load that an AP can handle and that a given cell can handle, and therefore an upper bound on load balancing from the point of view of traffic management.
Load balancing is over and on top of data admission and traffic control. The rate requirements across all users accessing a specific set of applications should be less than or equal to the cumulative set of resources set aside for rate-sensitive traffic across all the APs in that enterprise.
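Expressed as an inequality, with notation introduced here solely for illustration, this admission constraint is

$$\sum_{u \in \mathcal{U}} R_u \;\le\; \sum_{a \in \mathcal{A}} C_a,$$

where \(\mathcal{U}\) is the set of users accessing the specific set of applications, \(R_u\) is the rate required by user \(u\), \(\mathcal{A}\) is the set of APs in the enterprise network, and \(C_a\) is the portion of resources set aside for rate-sensitive traffic at AP \(a\).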
As noted above, the accessibility KPI (Key Performance Indicator) is key, and load balancing algorithms should ensure that it remains unaffected either after or during the transition of load (translating to understanding the maximum rise over thermal (ROT) that the AP can withstand due to this network-initiated feature).
The rate of load balancing should also be considered and changed as per predictive analytics across the network. The rate at which load balancing is performed is limited, and gatekeepers control this rate. There are basically two limits that are maintained and that cannot be exceeded when performing load balancing: (1) the amount of additional traffic flows and load that can be added; and (2) the rate at which load balancing can be performed. These limits are identified by certain Gatekeepers. If even one Gatekeeper is satisfied, then a neighbor base station bars itself from accepting any additional load-balanced users (UEs). The AP declares itself as forbidden and is subsequently placed on a “blacklist.” The AP makes it known that it is not ready to take any additional load-balanced UEs. The AP informs the MLB (Mobility Load Balancing) service, and the MLB service distributes this information to whatever entity requires it.
When the AP scans its Gatekeepers and determines that the Gatekeeper conditions are no longer met, it is again available to take on additional load (UEs). The AP removes itself from the forbidden list by informing the MLB; the PSE, which controls the distribution of the blacklist, then removes this particular AP from the forbidden list. The PSE updates the blacklist as the APs inform it of their ability to handle more load.
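A minimal sketch of this forbidden-list bookkeeping follows; the MlbService class, its method names, and the data structures are hypothetical and shown only to illustrate the add/remove flow described above.

```python
class MlbService:
    """Hypothetical centralized MLB service at the PSE that maintains and
    distributes the forbidden list ('blacklist') of APs that will not accept
    additional load-balanced UEs."""
    def __init__(self) -> None:
        self.forbidden: set = set()

    def mark_forbidden(self, ap_id: str) -> None:
        self.forbidden.add(ap_id)        # distributed to the other APs

    def clear_forbidden(self, ap_id: str) -> None:
        self.forbidden.discard(ap_id)    # the AP can accept load-balanced UEs again

def ap_gatekeeper_scan(ap_id: str, tripped_gatekeepers: list, mlb: MlbService) -> None:
    """If even one gatekeeper is satisfied (tripped), the AP marks itself
    forbidden; once none are tripped, it removes itself from the list."""
    if tripped_gatekeepers:
        mlb.mark_forbidden(ap_id)
    else:
        mlb.clear_forbidden(ap_id)
```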
Load Balancing Algorithm—Actors
There is a need to maintain a centralized distribution approach for administration and operational management of load balancing. A centralized “active” entity sits in the edge infrastructure that not only updates its database with updates from the APs associated with the edge infrastructure, but also ensures that it distributes the information back to the APs. So, the potential actors for Load Balancing are the MLB (Mobility Load Balancing) service in the PSE, or the MLB module in the SON (Self-Organizing Network) service, and the APs associated with the PSE. In some embodiments, the periodic information distribution would include neighbor list updates and “MLB blacklist information.” The centralized service periodically updates each AP with its neighbors, and the neighbor list update is provided in decreasing order of TX power.
Load Balancing Algorithm—Introduction to an Attraction Coefficient
The network planning approach dictates the link budget that can be afforded between a specific cell and a specific UE that is at a certain specific distance from the cell. Even if one considers a uniform distribution of UEs in a network, it is not guaranteed that the distribution would be maintained across all the APs in the network as the locations and credentials (TX power, GPS coordinates) of the APs are identified. Hence, a UE would not necessarily attach to the AP that is closest to it; rather, it will attach to the AP that the UE perceives is the strongest. This creates a necessity to create an Attraction Coefficient between a UE and an AP. This coefficient is dictated not only by what the UE perceives as strongest but also by the affordability of the AP (at a specific instant) to consider the UE. Because load balancing is a routine that is initiated by the network, the attraction coefficient must ensure that the chances of idle-state cell reselection or active (or “connected”) state network-assisted handover are increased. Hence, it is better if the attraction coefficient is based on incoming UE mobility profiles.
In a network-initiated handover paradigm, the biggest unknown is whether the HO will succeed. One approach would be to track the mobility profiles of all UEs in the system to determine a likelihood of Handover success that can be associated with every neighbor. The mobility profile will indicate that most of the UEs seem to have come to a selected AP from a most recent hop. This last hop presents a better likelihood of Handover success, as profiles are periodically tracked and this neighbor has been consistently ranked high.
There are five kinds of HO issues that must be handled, and which are more prominent in network-initiated HOs: “early,” “late,” “ping-pong,” “continuous,” and perhaps “incorrect” HOs. The present methods and apparatus track the UEs' mobility profiles to prevent these errors from occurring.
A classical way of taking care of all five in a closed loop across all APs and the central service is by creating a cost function that continuously optimizes CIO, hysteresis, and time-to-trigger. But this implicitly means that this continuous monitoring and change must also be upper bounded by the maximum Doppler that can be processed. In one embodiment of the presently disclosed split option switching methods and apparatus, the solution is independent of Doppler. The tracking metrics used by the present methods and apparatus are not dictated by changes in the UE's mobility; they are completely oblivious to the rate of motion (the mobility) of the UEs. This is the reason for creating and using an attraction coefficient.
Attraction Coefficient Creation
In some embodiments, the AP can potentially track UE mobility profile information on a per UE basis. This is facilitated by the presence of the UE History Information IE in the transparent source to target container during normal handover procedures. This is supported in both 4G and 5G wireless networks. This IE provides the mobility profile of every UE in the network and hence gives an indirect understanding of the neighbor cells.
In some embodiments, the following is envisioned when implementing the creation of the Attraction Coefficient in software. This information is stored in the UE context as and when any UE hands in from another DU or CU. The UE context is the context of the UE in the network: where the UE is located and all of the information stored for the UE, which is kept track of when the UE is handed over from one CU or DU to another. Essentially, it contains the current state of flows, the IP addresses, and the type of radio data allocated for the UE. This information is maintained both in the UE and in the network. It includes an ordered list of the handover transitions that the UE has traversed. Periodically, the top-most cell in the list (i.e., the last cell from which the UE handed in) is taken for every UE.
There can be inter-radio access technology transfer that might have taken place before the UE entered the network. Therefore, 4G networks only consider EUTRAN transfers, while 5G networks only consider the NR cells. The neighbor cells are ranked based on the number of times they are seen. Information is stored in the UE context as and when the UE hands in from another DU or CU. It contains an ordered list of Handover transitions that the UE has transitioned through.
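A minimal sketch of the neighbor-ranking step just described, using only the logic stated in the text (take the top-most, i.e., most recent, hand-in cell from each UE's history and count how often each neighbor appears); the function name and data layout are assumptions.

```python
from collections import Counter
from typing import Dict, List

def rank_neighbors_by_hand_in(ue_history_lists: List[List[str]]) -> Dict[str, int]:
    """For every UE, take the top-most (most recent) cell in its UE History
    Information, i.e. the cell it last handed in from, and count how often
    each neighbor cell appears across all UEs; higher counts rank higher."""
    last_hops = [history[0] for history in ue_history_lists if history]
    return dict(Counter(last_hops).most_common())

# Example: three UEs whose most recent hand-in cells were A, A and B
print(rank_neighbors_by_hand_in([["cellA", "cellC"], ["cellA"], ["cellB", "cellA"]]))
# -> {'cellA': 2, 'cellB': 1}
```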
Accordingly, in some embodiments, the Attraction Coefficient is determined by taking the following steps:
Mobility Load Balancing (MLB) Forbidden Listing
The following description introduces three software gatekeepers in every AP that collectively decide whether MLB will be allowed by the AP. If the gatekeepers decide that MLB will not be allowed by the AP, the AP updates the centralized service accordingly, which, in turn, updates the other APs about the decision. This process is ongoing.
Recalculation of the bearer control upper bound is also envisioned, wherein the identified bearer control is rendered useless due to ongoing air-interface situations that require more resource usage than predicted to maintain GBR (Guaranteed Bit Rate) compliance. GBR is defined herein as the minimum bit rate that is guaranteed for a flow to be established, as opposed to the Maximum Bit Rate (MBR), which is the maximum value. Energy is sensed on the uplink on the base station side.
The requirement of maintaining GBR means that GBR acts as another gatekeeper on the shifting of UEs to selected APs. If, for any reason, the existing traffic on a selected AP is such that the selected AP cannot allow more traffic to enter, additional traffic is not allowed to enter the selected AP, irrespective of the fact that the selected AP still has provisions to accept it based on the controller.
In some embodiments, the gatekeepers comprise the following (see the illustrative sketch after this list):
Max attached users vis-à-vis max allowed users;
GBR control upper bound and PDB compliance bound; and
ROT (Rise Over Thermal) situation.
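As an illustrative sketch only, the three gatekeepers can be expressed as simple predicates over an AP's load state; the field names and the idea of returning the list of tripped gatekeepers are assumptions, and no claim is made about the actual thresholds used.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class ApLoadState:
    """Hypothetical snapshot of the quantities behind the three gatekeepers."""
    attached_users: int
    max_allowed_users: int
    gbr_demand_mbps: float        # aggregate GBR demand if more UEs are accepted
    gbr_upper_bound_mbps: float   # recalculated bearer-control upper bound
    pdb_compliant: bool           # packet delay budget compliance
    rot_db: float                 # measured rise over thermal
    rot_threshold_db: float       # planned ROT / SNR threshold

def tripped_gatekeepers(s: ApLoadState) -> List[str]:
    """Return which of the three gatekeepers would block further MLB on this AP."""
    tripped = []
    if s.attached_users >= s.max_allowed_users:
        tripped.append("max attached vs. max allowed users")
    if s.gbr_demand_mbps > s.gbr_upper_bound_mbps or not s.pdb_compliant:
        tripped.append("GBR upper bound / PDB compliance")
    if s.rot_db >= s.rot_threshold_db:
        tripped.append("rise over thermal (ROT)")
    return tripped
```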
Forbidden-Listing based on ROT
In a network, it is important to understand the impact of load balancing on the ROT of the cell that is the recipient of the offloaded load. Rise over thermal is the ratio of received power to the noise floor. Monitoring it helps ensure stability in the cell and helps conform the cell to its planned coverage. It is measured on the digital side.
The noise floor can be affected by multiple factors, including the neighbors. It is therefore important to set up a threshold that defines the situation where the LNA can go into saturation. Despite the threshold, a high-level water mark needs to be considered for potential unknown fading and interference creators. An SNR threshold referred to as a “ROT” threshold is envisioned. The value of this threshold will be dictated by the following factors: the allowable WB-RSSI range for digital baseband operation, and the allowable SFDR of the LNA. This information can be obtained from DVT reports and PA data sheets. It is also important to know the dynamic range of the LNA: the minimum below which it cannot sense anything, and the maximum above which the LNA outputs a standard, steady, nonsensical output.
In a plain vanilla base station (AP), when there is no incoming or outgoing data, there is ambient noise present in the device referred to as “noise floor”. Any energy that is received in the flow is added to the noise floor and is transferred by the LNA into the subsequent chain onto the digital side. The energy is such that it should not push the LNA into saturation. If it does push the LNA into saturation, the re-creation of energy on the other side will be stunted.
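As a small illustrative calculation (the default water-mark value and the function names are assumptions), the ROT check behind this gatekeeper can be sketched as follows.

```python
def rise_over_thermal_db(received_power_dbm: float, noise_floor_dbm: float) -> float:
    """ROT as described above: the ratio of received power to the noise floor,
    expressed in dB."""
    return received_power_dbm - noise_floor_dbm

def lna_headroom_ok(received_power_dbm: float, noise_floor_dbm: float,
                    rot_threshold_db: float, watermark_db: float = 3.0) -> bool:
    """Keep a high-level water mark below the ROT threshold so that unknown
    fading or interference does not push the LNA into saturation."""
    rot = rise_over_thermal_db(received_power_dbm, noise_floor_dbm)
    return rot <= (rot_threshold_db - watermark_db)

print(lna_headroom_ok(received_power_dbm=-85.0, noise_floor_dbm=-100.0,
                      rot_threshold_db=20.0))  # ROT = 15 dB, within 20 - 3 -> True
```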
In accordance with some embodiments of the present split option switching methods and apparatus, there are two Load Balancing routines utilized to detect when Load Balancing is desired and to perform load balancing—there is an Idle State Action Load Balancing routine and an Active State Load Balancing routine. Idle states occur when a UE does not have an active connection to a neighbor AP. Active state occurs when a UE does have an active connection to a neighbor AP.
Load Balancing—Idle State Action Routine
In some embodiments, the Load Balancing Idle State Action Routine is executed according to the following steps and in accordance with the flowchart 1000 shown in
Load Balancing—Connected State Action Routine
At the block 1102 of the flowchart 1100 of
In some embodiments, the Load Balancing Connected State Action Routine is executed according to the following steps:
Methods and apparatus to dynamically perform split-option switching of architectures of wireless networks based on real-time and non-real-time measurements and inputs, wherein the split-option architectures are switched to optimize user equipment (UE) experiences and network performance, have been disclosed.
Although the disclosed method and apparatus is described above in terms of various examples of embodiments and implementations, it should be understood that the particular features, aspects and functionality described in one or more of the individual embodiments are not limited in their applicability to the particular embodiment with which they are described. Thus, the breadth and scope of the claimed invention should not be limited by any of the examples provided in describing the above disclosed embodiments.
Terms and phrases used in this document, and variations thereof, unless otherwise expressly stated, should be construed as open ended as opposed to limiting. As examples of the foregoing: the term “including” should be read as meaning “including, without limitation” or the like; the term “example” is used to provide examples of instances of the item in discussion, not an exhaustive or limiting list thereof; the terms “a” or “an” should be read as meaning “at least one,” “one or more” or the like; and adjectives such as “conventional,” “traditional,” “normal,” “standard,” “known” and terms of similar meaning should not be construed as limiting the item described to a given time period or to an item available as of a given time, but instead should be read to encompass conventional, traditional, normal, or standard technologies that may be available or known now or at any time in the future. Likewise, where this document refers to technologies that would be apparent or known to one of ordinary skill in the art, such technologies encompass those apparent or known to the skilled artisan now or at any time in the future.
A group of items linked with the conjunction “and” should not be read as requiring that each and every one of those items be present in the grouping, but rather should be read as “and/or” unless expressly stated otherwise. Similarly, a group of items linked with the conjunction “or” should not be read as requiring mutual exclusivity among that group, but rather should also be read as “and/or” unless expressly stated otherwise. Furthermore, although items, elements or components of the disclosed method and apparatus may be described or claimed in the singular, the plural is contemplated to be within the scope thereof unless limitation to the singular is explicitly stated.
The presence of broadening words and phrases such as “one or more,” “at least,” “but not limited to” or other like phrases in some instances shall not be read to mean that the narrower case is intended or required in instances where such broadening phrases may be absent. The use of the term “module” does not imply that the components or functionality described or claimed as part of the module are all configured in a common package. Indeed, any or all of the various components of a module, whether control logic or other components, can be combined in a single package or separately maintained and can further be distributed in multiple groupings or packages or across multiple locations.
Additionally, the various embodiments set forth herein are described with the aid of block diagrams, flow charts and other illustrations. As will become apparent to one of ordinary skill in the art after reading this document, the illustrated embodiments and their various alternatives can be implemented without confinement to the illustrated examples. For example, block diagrams and their accompanying description should not be construed as mandating a particular architecture or configuration.
This application is a Continuation-in-Part (CIP) application of parent non-provisional application Ser. No. 17/842,686 filed Jun. 16, 2022, entitled “Split Option Switching Methods and Apparatus” (ATTY. DOCKET NO. CEL-057-PAP), and claims priority to the above-cited parent application Ser. No. 17/842,686. This CIP application also claims priority to earlier-filed provisional application No. 63/328,199 filed Apr. 6, 2022, entitled “Split Option Switching Methods and Apparatus” (ATTY. DOCKET NO. CEL-057-PROV), and to earlier-filed provisional application No. 63/337,001 filed Apr. 29, 2022, entitled “Split Option Switching Methods and Apparatus” (ATTY. DOCKET NO. CEL-057-PROV-2). This CIP application is also related to US utility application Ser. No. 17/549,603 (non-provisional application) filed Dec. 13, 2021, entitled “Load Balancing for Enterprise Deployments” (ATTY. DOCKET NO. CEL-050-PAP). The contents of the above-cited earlier-filed provisional applications (App. No. 63/328,199 filed Apr. 6, 2022 and App. No. 63/337,001 filed Apr. 29, 2022), the parent non-provisional application Ser. No. 17/842,686 filed Jun. 16, 2022, and the non-provisional application Ser. No. 17/549,603 filed Dec. 13, 2021, are all hereby incorporated by reference herein as if set forth in full.
Related provisional applications: 63/328,199, filed Apr. 2022 (US); 63/337,001, filed Apr. 2022 (US).
Related continuation data: parent application Ser. No. 17/842,686, filed Jun. 2022 (US); child application Ser. No. 17/852,041 (US).