The teachings in accordance with the exemplary embodiments of this invention relate generally to machine learning based unnecessary handover avoidance and, more specifically, relate to a machine learning based technique to dynamically predict and tune a Ping-Pong Offset (PPOffset) after a handover in order to avoid a ping-pong handover back from a current serving cell/beam to a previous serving cell/beam.
This section is intended to provide a background or context to the invention that is recited in the claims. The description herein may include concepts that could be pursued, but are not necessarily ones that have been previously conceived or pursued. Therefore, unless otherwise indicated herein, what is described in this section is not prior art to the description and claims in this application and is not admitted to be prior art by inclusion in this section.
Certain abbreviations that may be found in the description and/or in the Figures are herewith defined as follows:
Handover (HO) related Key Performance Indicators (KPIs) are used for mobility robustness optimization (MRO) in cellular mobile communications. MRO algorithms are well-known methods for optimizing mobility parameters to improve mobility performance, e.g., minimize mobility-related failures and unnecessary handovers. The common approach in MRO algorithms is to optimize the Cell Individual Offset (CIO) and Time-to-Trigger (TTT), i.e., the key parameters in controlling the HO procedure initiation. The network can control the handover procedure between any cell pair in the network by defining different CIO and TTT values.
Example embodiments of the invention work to improve upon at least these features for determination of handover requirements.
In an example aspect of the invention, there is an apparatus, such as a user equipment side apparatus, comprising: at least one processor, and at least one non-transitory memory storing instructions that, when executed by the at least one processor, cause the apparatus at least to: receive from a network node of the communication network a ping-pong offset prediction request message; determine a ping-pong offset prediction, wherein the ping-pong offset prediction is taking into account a change in at least one of a speed, trajectory, or received signal levels from a previous serving cell; and based on the determining, send towards the network node the ping-pong offset prediction, wherein based on the ping-pong offset prediction a handover back to the previous serving cell is one of executed or not executed.
In another example aspect of the invention, there is a method comprising: receiving from a network node of the communication network a ping-pong offset prediction request message; determining a ping-pong offset prediction wherein the ping-pong offset prediction is taking into account a change in at least one of a speed, trajectory, or received signal levels from a previous serving cell; and based on the determining, sending towards the network node the ping-pong offset prediction, wherein based on the ping-pong offset prediction a handover back to the previous serving cell is one of executed or not executed.
A further example embodiment is an apparatus and a method comprising the apparatus and the method of the previous paragraphs, wherein the ping-pong offset prediction is taking into account at least the received signal levels from the previous serving cell as input to a machine learning pre-trained model to perform prediction for an optimal ping-pong offset, wherein the ping-pong offset prediction determination is repeated periodically, wherein the ping-pong offset prediction determination at the user equipment is repeated periodically upon notification from the communication network, wherein the optimal ping-pong offset is to be used as part of handover measurement reporting for triggering at least one of an A3 event based or layer1/layer2 mobility handover or a conditional handover, wherein the ping-pong offset prediction is sent to the network node as part of layer 1 measurement reporting to the communication network, wherein the ping-pong offset prediction is identifying a value for one of preventing or delaying the apparatus from executing a handover to the previous serving cell, wherein the ping-pong offset prediction as part of a handover decision to trigger executing or not executing the handover back toward the previous serving cell, wherein the handover decision is triggered upon evaluating whether a previous serving cell power exceeds a current serving cell power plus a ping-pong offset value or not, and/or wherein the receiving is based on determining that a handover of the apparatus from a serving cell back to the previous serving cell is to be executed.
A non-transitory computer-readable medium storing program code, the program code executed by at least one processor to perform at least the method as described in the paragraphs above.
In another example aspect of the invention, there is an apparatus comprising: means for receiving from a network node of the communication network a ping-pong offset prediction request message; means for determining a ping-pong offset prediction, wherein the ping-pong offset prediction is taking into account a change in at least one of a speed, trajectory, or received signal levels from a previous serving cell; and means, based on the determining, for sending towards the network node a ping-pong offset prediction, wherein based on the ping-pong offset prediction a handover back to the previous serving cell is one of executed or not executed.
In the example aspect of the invention according to the paragraph above, wherein at least the means for determining, receiving, sending, and executing or not executing comprises a non-transitory computer readable medium encoded with a computer program executable by at least one processor.
In an example aspect of the invention, there is an apparatus, such as a network side apparatus, comprising: at least one processor, and at least one non-transitory memory storing instructions that, when executed by the at least one processor, cause the apparatus at least to: send towards the user equipment a ping-pong offset prediction request message to enable ping-pong offset prediction at the user equipment; based on the sending, receive from the user equipment a ping-pong offset prediction, wherein the ping-pong offset prediction is taking into account a change in at least one of a speed, trajectory, or received signal levels from a previous serving cell; and based on the ping-pong offset prediction, determine to one of execute or not execute a handover back to the previous serving cell.
In another example aspect of the invention, there is a method comprising: sending towards the user equipment a ping-pong offset prediction request message to enable ping-pong offset prediction at the user equipment; based on the sending, receiving from the user equipment a ping-pong offset prediction, wherein the ping-pong offset prediction is taking into account a change in at least one of a speed, trajectory, or received signal levels from a previous serving cell; and based on the received ping-pong offset prediction, determining to one of execute or not execute a handover back to the previous serving cell.
A further example embodiment is an apparatus and a method comprising the apparatus and the method of the previous paragraphs, wherein the ping-pong offset prediction request message is to enable a machine learning based ping-pong offset prediction at the user equipment, wherein the ping-pong offset prediction is received by the network node as part of layer 1 measurement reporting, wherein the at least one non-transitory memory storing instructions that when executed by the at least one processor cause the apparatus at least to: use the ping-pong offset prediction to evaluate the handover; and prepare a medium access control control element (MAC CE) command for executing the handover, wherein the apparatus uses the ping-pong offset prediction as part of a handover decision to trigger executing or not executing the handover back toward the previous serving cell, wherein the ping-pong offset prediction is utilized as part of a handover decision rule to evaluate whether a previous serving cell power exceeds a current serving cell power plus a ping-pong offset value or not, wherein the user equipment uses the ping-pong offset prediction as part of a handover decision to trigger executing or not executing the handover back toward the serving cell, and/or wherein the sending is based on determining a handover of the user equipment from a serving cell back to the previous serving cell is to be executed.
A non-transitory computer-readable medium storing program code, the program code executed by at least one processor to perform at least the method as described in the paragraphs above.
In another example aspect of the invention, there is an apparatus comprising: means for sending towards the user equipment a ping-pong offset prediction request message to enable ping-pong offset prediction at the user equipment; means, based on the sending, for receiving from the user equipment a ping-pong offset prediction, wherein the ping-pong offset prediction is taking into account a change in at least one of a speed, trajectory, or received signal levels from a previous serving cell; and means, based on the ping-pong offset prediction, for determining to one of execute or not execute a handover back to the previous serving cell.
In the example aspect of the invention according to the paragraph above, wherein at least the means for determining, receiving, sending, and executing or not executing comprises a non-transitory computer readable medium encoded with a computer program executable by at least one processor.
The above and other aspects, features, and benefits of various embodiments of the present disclosure will become more fully apparent from the following detailed description with reference to the accompanying drawings, in which like reference signs are used to designate like or equivalent elements. The drawings are illustrated for facilitating better understanding of the embodiments of the disclosure and are not necessarily drawn to scale, in which:
In this invention, there is proposed at least methods and apparatus to perform machine learning based unnecessary handover avoidance.
Different CIO and TTT configurations are needed for mobile terminals with different speeds. The faster the terminals are, the sooner the handover procedure must be started. This goal is achieved by either increasing the CIO (i.e., the offset between the measured signal power of the serving cell and the target cell) or decreasing the TTT (i.e., the interval during which the trigger requirement is fulfilled). In contrast, in the cell boundaries dominated by slow users, the handover procedures are started relatively later by choosing lower values for the CIO or a higher TTT.
It is worth noting that changing the CIOs rather than the TTTs is the preferred approach in practice. Whereas the speed of the mobile terminals plays an obvious role, it is not the only criterion. Slow mobile terminals may also be at risk (requiring earlier handovers) when moving through areas with significant propagation changes (e.g., a very steep shadowing slope). Fast mobile terminals may not be at risk when moving through areas with little propagation change (e.g., flat shadowing slopes).
Hence, even if velocity could be instantaneously estimated with enough accuracy (which is extremely challenging or even impossible), velocity-based methods would not always react correctly. Nevertheless, we will use the intuitive example of speed in the following for better illustration, i.e., “slow” refers to uncritical terminals which are not under failure risk but may still suffer ping-pongs, and “fast” refers to a critical terminal which is at failure risk.
A configured event triggers the UE to send a measurement report. Based on this report, the source node can prepare one or more target cells in the same target node, or multiple target nodes for the (conditional) handover (CHO Request+CHO Request Acknowledge) and then sends an RRC (Radio Resource Control) Reconfiguration. The mobility-related failures can be classified into four categories.
Too Early (TE) handover failures: This type of failure happens when the UE hands over to the target cell before the link quality of the target cell is good enough. In one example, when the A3 entry condition has been met, the TTT timer expires, and the UE performs the handover procedure. However, shortly after the handover, it experiences Radio Link Failure (RLF). In these cases, it is apparent that the handover procedure should have started relatively later. Hence, the MRO reduces the related CIO value. Another example of a too early initiated handover is the expiry of the timer T304, also called “Handover Failure”. This happens when the target cell is not good enough, such that even the Random Access Channel (RACH) procedure is not successful.
Too Late (TL) handover failures: In this type of failure, either the UE did not even send out a measurement report (e.g., since the TTT timer did not expire before the RLF), or the measurement report or the handover command got lost due to degrading channel conditions, and thus the UE has not started the handover procedure. The solution for eliminating these failures is to start the handover relatively sooner; hence, the MRO increases the related CIO.
Ping-pong (PP) handover failures refer to cases in which the UE hands over to the target cell but shortly after has to hand over back to the source cell. This case is usually considered another form of TE handover.
Wrong Cell (WC) handover failures: A radio link failure occurs in the target cell shortly after a handover has been completed, and the UE attempts to re-establish its radio link in a cell which is neither the source cell nor the target cell. Alternatively, the timer T304 expires during the handover procedure (i.e., “Handover Failure”), and the UE attempts to re-establish its radio link in a cell which is neither the source cell nor the target cell.
A3 Event based Handover
Current handover mechanisms are reactive, as shown in
Execution of the HO is delayed due to the TTT (e.g., 200-300 ms), the offset (e.g., 1-3 dB), and signaling delays. On the other hand, a shorter TTT and a smaller offset may lead to too early triggering and/or triggering the HO to a suboptimal target. Mobility Robustness Optimization tries to adjust the HO parameters based on too early/too late handovers, but the problem may be more complicated.
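The too early/too late adjustments described above can be sketched as a simple rule-based CIO update for one cell pair. This is an illustrative sketch only; the counter names, step size, and clamping limits are assumptions, not taken from any standard:

```python
def adjust_cio(cio_db, too_early, too_late, ping_pong,
               step_db=0.5, cio_min_db=-6.0, cio_max_db=6.0):
    """Rule-based MRO update of a cell-pair CIO (illustrative).

    Too early failures (ping-pongs are treated as a form of too early
    handover) mean the HO was triggered too soon, so the CIO is reduced;
    too late failures mean it was triggered too late, so the CIO is
    increased. The result is clamped to a configured range.
    """
    if too_early + ping_pong > too_late:
        cio_db -= step_db   # delay future handovers on this cell pair
    elif too_late > too_early + ping_pong:
        cio_db += step_db   # trigger future handovers sooner
    return max(cio_min_db, min(cio_max_db, cio_db))
```

For example, a cell pair dominated by too late failures would see its CIO drift upward by `step_db` per optimization round until the failure counts balance.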
In
Current AI/ML Framework in 3GPP
Machine learning provides extremely useful and valuable tools to handle the increasing complexity and improve the performance of wireless access networks (5G and beyond). Several studies and proofs of concept have already proven the efficiency of machine learning in different use cases such as mobility optimization, scheduling, beamforming in massive MIMO networks, indoor positioning, and configuration of uplink and downlink channels.
To enable the introduction of Machine Learning into the RAN, the standard is defining the functional framework including the different interfaces, entities and functions to provide all the necessary means for integrating AI/ML methods. A functional framework for RAN intelligence study includes the AI functionality and the inputs and outputs needed by an ML algorithm. Specifically, the study aims to identify the data needed by an AI function in the input and the data that is produced in the output, as well as the standardization impacts at a node in the existing architecture or in the network interfaces to transfer this input/output data through them. Such discussions will continue during related specification for standards.
As shown in
Examples of input data may include measurements from UEs or different network entities, feedback from the Actor, and output from an AI/ML model.
Model Training is a function that performs the AI/ML model training, validation, and testing, which may generate model performance metrics as part of the model testing procedure. The Model Training function is also responsible for data preparation (e.g., data pre-processing and cleaning, formatting, and transformation) based on Training Data delivered by a Data Collection function, if required.
Model Inference is a function that provides AI/ML model inference output (e.g., predictions or decisions). The Model Inference function may provide Model Performance Feedback to the Model Training function when applicable. The Model Inference function is also responsible for data preparation (e.g., data pre-processing and cleaning, formatting, and transformation) based on Inference Data delivered by a Data Collection function, if required.
Actor is a function that receives the output from the Model Inference function and triggers or performs corresponding actions. The Actor may trigger actions directed to other entities or to itself.
Feedback: Information that may be needed to derive training data, inference data or to monitor the performance of the AI/ML Model and its impact to the network through updating of KPIs and performance counters.
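The dataflow between the functions described above (Data Collection feeding Model Training and Model Inference, whose output drives an Actor that returns feedback) can be sketched as follows. This is only a toy illustration of the framework; all interfaces and the trivial mean "model" are assumptions:

```python
class DataCollection:
    """Collects measurements (e.g., from UEs) and Actor feedback."""
    def __init__(self):
        self.samples = []

    def collect(self, sample):
        self.samples.append(sample)

    def training_data(self):
        return list(self.samples)

    def inference_data(self):
        return self.samples[-1]


class ModelTraining:
    """Trains a model from Training Data; here a trivial mean model."""
    def train(self, training_data):
        mean = sum(training_data) / len(training_data)
        return lambda x: mean            # the "trained" model is a callable


class ModelInference:
    """Produces inference output (predictions/decisions) from a model."""
    def __init__(self, model):
        self.model = model

    def infer(self, inference_data):
        return self.model(inference_data)


class Actor:
    """Receives inference output and triggers a corresponding action."""
    def act(self, inference_output):
        return "handover" if inference_output > 0 else "stay"


data = DataCollection()
for measurement in (1.0, 2.0, 3.0):      # e.g., UE measurements
    data.collect(measurement)
model = ModelTraining().train(data.training_data())
action = Actor().act(ModelInference(model).infer(data.inference_data()))
```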
NOTE: The functional framework discussed above is only in the context of the network scope i.e., RAN3 interfaces. Although some aspects of this functional framework serve as a reference for the currently ongoing RAN1 study item (i.e., AI/ML in the air interface), the framework for the gNB and UE collaboration requires fresh discussions.
Different use cases have been considered so far, e.g., load balancing, energy saving, mobility optimization.
Prediction has been widely discussed as a method to improve performance. For example, prediction of UE trajectory or future location can be useful to adjust HO thresholds e.g., cell individual offset or to select the RNA in RRC-INACTIVE. Prediction of UE location could further help network resource allocation for various use cases including energy saving, load balancing and mobility management. As another example, HO decisions can be improved by using prediction information on the UE performance at the target cell. Energy saving decisions taken locally at a cell could be improved by utilizing prediction information on incoming UE traffic, as well as prediction information about traffic that may be offloaded from a candidate energy saving cell to a neighbor cell ensuring coverage.
User mobility optimization is one of the keys to success toward better radio communication systems, such as 5G and 6G. Mobility optimization can be achieved by optimizing the HO to the correct target cell/beam at the correct time, avoiding RLF, or avoiding unnecessary HOs which may result in too many ping-pongs between the serving cell and the target cell and vice-versa. Several mobility optimization techniques, including AI/ML based ones, show a high success rate in optimizing HOs (target and timing) and reducing RLF; however, these techniques achieve their targets at the cost of too many ping-pongs, as in ML based L3 handover predictions or L1/L2 mobility (LLM) based techniques. Ping-pongs are costly in terms of outage, such as at L3 level mobility. Even though each L1/L2 based ping-pong has little outage cost, too many occurrences still accumulate to a significant amount of system outage and waste system resources, which must be avoided.
Not all ping-pongs are unnecessary, and some must be done to avoid RLF, for example due to a coverage hole. This makes avoiding unnecessary ping-pongs an even harder problem to solve.
In example embodiments of the invention there is proposed at least an ML based technique that allows the network to dynamically (optionally: periodically) predict and tune an extra penalty, a Ping-Pong Offset (PPOffset), over the old serving cell/beam after a handover in order to avoid ping-pong between the current serving cell/beam and the old one.
Before describing the example embodiments of the invention in detail, reference is made to
The NN 12 (NR/5G Node B or possibly an evolved NB or any other similar type of NW node) is a base station such as a master or secondary node base station (e.g., for NR or LTE long term evolution) that communicates with devices such as NN 13 and UE 10 of
The NN 13 can comprise a mobility function device such as an AMF or SMF; further, the NN 13 may comprise a NR/5G Node B or possibly an evolved NB, a base station such as a master or secondary node base station (e.g., for NR or LTE long term evolution) that communicates with devices such as the NN 12 and/or UE 10 and/or the wireless network 1. The NN 13 includes one or more processors DP 13A, one or more memories MEM 13B, one or more network interfaces, and one or more transceivers TRANS 13D interconnected through one or more buses 13E. In accordance with the example embodiments, these network interfaces of the NN 13 can include X2 and/or Xn interfaces for use to perform the example embodiments of the invention. Each of the one or more transceivers TRANS 13D includes a receiver and a transmitter connected to one or more antennas. The one or more memories MEM 13B include computer program code PROG 13C. For instance, the one or more memories MEM 13B and the computer program code PROG 13C are configured to cause, with the one or more processors DP 13A, the NN 13 to perform one or more of the operations as described herein. The NN 13 may communicate with another mobility function device and/or eNB such as the NN 12 and the UE 10 or any other device using, e.g., wireless link 11, wireless link 14, or another link. These links may be wired or wireless or both and may implement, e.g., an X2 or Xn interface. Further, as stated above, the wireless link 11 or wireless link 14 may be through other network devices such as, but not limited to, an NCE/MME/SGW device such as the NCE/MME/SGW 14 of
The one or more buses 10E, 12E, and/or 13E of the devices of
It is noted that although
Also, it is noted that the description herein indicates that “cells” perform functions, but it should be clear that it is the gNB that forms the cell, and/or a user equipment and/or mobility management function device, that will perform the functions. In addition, the cell makes up part of a gNB, and there can be multiple cells per gNB.
The wireless network 1 may include a network control element (NCE/MME/SGW) 14 that may include NCE (Network Control Element), MME (Mobility Management Entity)/SGW (Serving Gateway) functionality, and which provides connectivity with a further network, such as a telephone network and/or a data communications network (e.g., the Internet). The NN 12 and the NN 13 are coupled via a link 13 and/or link 14 to the NCE/MME/SGW 14. In addition, it is noted that the operations in accordance with example embodiments of the invention, as performed by the NN 13, may also be performed at the NCE/MME/SGW 14.
The NCE/MME/SGW 14 includes one or more processors DP 14A, one or more memories MEM 14B, and one or more network interfaces (N/W I/F(s)), interconnected through one or more buses coupled with the link 13 and/or 14. In accordance with the example embodiments, these network interfaces can include X2 and/or Xn interfaces for use to perform the example embodiments of the invention. The one or more memories MEM 14B include computer program code PROG 14C. The one or more memories MEM 14B and the computer program code PROG 14C are configured to, with the one or more processors DP 14A, cause the NCE/MME/SGW 14 to perform one or more operations which may be needed to support the operations in accordance with the example embodiments of the invention.
The wireless network 1 may implement network virtualization, which is the process of combining hardware and software network resources and network functionality into a single, software-based administrative entity, a virtual network. Network virtualization involves platform virtualization, often combined with resource virtualization. Network virtualization is categorized as either external, combining many networks, or parts of networks, into a virtual unit, or internal, providing network-like functionality to software containers on a single system. Note that the virtualized entities that result from the network virtualization are still implemented, at some level, using hardware such as processors DP 10A, DP 12A, DP 13A, and/or DP 14A and memories MEM 10B, MEM 12B, MEM 13B, and/or MEM 14B, and also such virtualized entities create technical effects.
The computer readable memories MEM 10B, MEM 12B, MEM 13B, and MEM 14B may be of any type suitable to the local technical environment and may be implemented using any suitable data storage technology, such as semiconductor based memory devices, flash memory, magnetic memory devices and systems, optical memory devices and systems, fixed memory and removable memory. The computer readable memories MEM 10B, MEM 12B, MEM 13B, and MEM 14B may be means for performing storage functions. The processors DP 10A, DP 12A, DP 13A, and DP 14A may be of any type suitable to the local technical environment, and may include one or more of general purpose computers, special purpose computers, microprocessors, digital signal processors (DSPs) and processors based on a multi-core processor architecture, as non-limiting examples. The processors DP 10A, DP 12A, DP 13A, and DP 14A may be means for performing functions, such as controlling the UE 10, NN 12, NN 13, NCE/MME/SGW 14 and other functions as described herein.
As similarly stated above, in example embodiments of the invention there is proposed at least an ML based technique that allows the network to dynamically (optionally: periodically) predict and tune an extra penalty, a Ping-Pong Offset (PPOffset), over the old serving cell/beam after a handover in order to avoid ping-pong between the current serving cell/beam and the old one. The ML based technique takes the UE trajectory, speed, and the received signal levels (e.g., RSRP) from the old serving cell/beam into account as input and produces a PPOffset as output. This PPOffset will be used only for assessing the HO back toward the old serving cell in the following ways, as examples but not limited to:
The PPOffset prediction will be enabled at the UE side upon indication from the network side.
In another embodiment, the network may indicate a prediction periodicity to be followed by the UE.
The UE will perform the prediction and send a feedback to the network for verification.
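Taken together, the three behaviours above amount to a UE-side loop: prediction is enabled on network indication, repeated at the network-given periodicity, and each result is fed back to the network. A minimal sketch, in which all function and parameter names are illustrative assumptions:

```python
import time

def ue_ppoffset_loop(predict_ppoffset, send_report, get_inputs,
                     period_s, n_rounds):
    """Run n_rounds of periodic PPOffset prediction and reporting.

    predict_ppoffset: the UE's ML model (speed, trajectory, old-cell
                      RSRP in; PPOffset in dB out)
    send_report:      feedback of the prediction to the network
    get_inputs:       latest measurements available at the UE
    period_s:         prediction periodicity indicated by the network
    """
    reports = []
    for _ in range(n_rounds):
        speed, trajectory, old_cell_rsrp = get_inputs()
        ppoffset_db = predict_ppoffset(speed, trajectory, old_cell_rsrp)
        send_report(ppoffset_db)     # network verifies the prediction
        reports.append(ppoffset_db)
        time.sleep(period_s)
    return reports
```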
In example embodiments of the invention there is proposed ML based signalling and technique to minimize the unnecessary HO triggering between the new serving cell/beam and the old one (Ping-pongs).
Both figures show the case when a HO is triggered from gNB1 to gNB2 in
At least some inventive steps in accordance with example embodiments of the invention are shown in
A3 event based HO use case
In order to avoid the unnecessary HO back to old serving cell, in example embodiments of the invention there is proposed the following:
1. Embodiment-1: Indication from the network to UE side requesting the use of the ML based PPOffset prediction as special penalty to be used when evaluating the HO back to the old serving cell only:
L1/L2 Mobility (LLM) HO use case
6. Embodiment-5: Indication from the network to UE side requesting the use of the ML based PPOffset prediction as special penalty to be used when evaluating the HO back to the old serving cell only.
ML Model at the User Side (An example but not limited to)
For our use cases we can use a supervised machine learning model to perform a regression task, in which the ML model uses the radio cell measurements from the old and new serving cells and the UE's trajectory and speed information as input, and outputs a float number representing the optimal offset (PPOffset) used to avoid unnecessary HO back toward the old serving cell.
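A minimal sketch of such a regressor follows; a linear least-squares model stands in here for the neural network, and the feature layout, training rows, and label values are all synthetic illustrative assumptions:

```python
import numpy as np

# Each row: [old_cell_rsrp_dbm, new_cell_rsrp_dbm, speed_mps, heading_deg]
X = np.array([
    [-95.0,  -80.0,  1.0,  10.0],
    [-85.0,  -82.0,  8.0, 170.0],
    [-100.0, -75.0,  2.0,  20.0],
    [-90.0,  -88.0, 15.0, 160.0],
])
# Labels: "optimal" PPOffset in dB (synthetic ground truth)
y = np.array([1.0, 3.0, 0.5, 4.0])

# Fit a linear regressor (bias column appended); a trained neural
# network would play this role in the embodiment described above.
A = np.c_[X, np.ones(len(X))]
w, *_ = np.linalg.lstsq(A, y, rcond=None)

def predict_ppoffset(old_rsrp, new_rsrp, speed, heading):
    """Map the four input features to a single float PPOffset in dB."""
    return float(np.array([old_rsrp, new_rsrp, speed, heading, 1.0]) @ w)

ppoffset_db = predict_ppoffset(-92.0, -81.0, 5.0, 90.0)
```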
For ground truth labeling, we can take the following rules into account:
1. The PPOffset level shall go lower with the period of time the UE is spending in the new serving cell. The PPOffset will be set to zero if the period of time > a predefined time limit (e.g., 500 milliseconds in the A3 event based HO triggering use case) and vice versa.
2. PPOffset level shall go higher with UE trajectory/speed going away from old serving cell and vice-versa.
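The two labeling rules above can be combined into a simple ground-truth generator. This is only one possible functional form; every constant in it (time limit, base offset, speed gain, cap) is an illustrative assumption:

```python
def label_ppoffset(time_in_new_cell_ms, radial_speed_mps,
                   time_limit_ms=500.0, base_offset_db=3.0,
                   speed_gain_db=0.2, max_offset_db=6.0):
    """Ground-truth PPOffset label (dB) per the two rules above.

    Rule 1: the offset decays with time spent in the new serving cell
            and is zero once the predefined time limit is exceeded.
    Rule 2: the offset grows when the UE moves away from the old
            serving cell (radial_speed_mps > 0) and shrinks when it
            moves toward it (radial_speed_mps < 0).
    """
    if time_in_new_cell_ms > time_limit_ms:
        return 0.0                                        # rule 1 cutoff
    time_factor = 1.0 - time_in_new_cell_ms / time_limit_ms
    offset = base_offset_db * time_factor + speed_gain_db * radial_speed_mps
    return min(max_offset_db, max(0.0, offset))           # clamp to [0, max]
```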
The proposed regression model could be designed (but not limited to) using a neural network of:
In accordance with the example embodiments as described in the paragraph above, wherein the ping-pong offset prediction is taking into account at least the received signal levels from the previous serving cell as input to a machine learning pre-trained model to perform prediction for an optimal ping-pong offset.
In accordance with the example embodiments as described in the paragraphs above, wherein the ping-pong offset prediction determination is repeated periodically.
In accordance with the example embodiments as described in the paragraphs above, wherein the ping-pong offset prediction determination at the user equipment is repeated periodically upon notification from the communication network.
In accordance with the example embodiments as described in the paragraphs above, wherein the optimal ping-pong offset is to be used as part of handover measurement reporting for triggering at least one of an A3 event based or layer1/layer2 mobility handover or a conditional handover.
In accordance with the example embodiments as described in the paragraphs above, wherein the ping-pong offset prediction is sent to the network node as part of layer 1 measurement reporting to the communication network.
In accordance with the example embodiments as described in the paragraphs above, wherein the ping-pong offset prediction is identifying a value for one of preventing or delaying the apparatus from executing a handover to the previous serving cell.
In accordance with the example embodiments as described in the paragraphs above, wherein the ping-pong offset prediction is used as part of a handover decision to trigger executing or not executing the handover back toward the previous serving cell.
In accordance with the example embodiments as described in the paragraphs above, wherein the handover decision is triggered upon evaluating whether a previous serving cell power exceeds a current serving cell power plus a ping-pong offset value or not.
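The decision rule in the paragraph above reduces to a single comparison; a minimal sketch, with illustrative names:

```python
def handover_back_allowed(prev_cell_power_dbm, curr_cell_power_dbm,
                          ppoffset_db):
    """Execute the handover back to the previous serving cell only if
    its power exceeds the current serving cell power plus the
    predicted ping-pong offset."""
    return prev_cell_power_dbm > curr_cell_power_dbm + ppoffset_db
```

With a current cell at -85 dBm and a PPOffset of 3 dB, for example, the previous cell must exceed -82 dBm before the handover back is triggered; a larger predicted PPOffset thus delays or prevents the ping-pong.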
In accordance with the example embodiments as described in the paragraphs above, wherein the receiving is based on determining that a handover of the apparatus from a serving cell back to the previous serving cell is to be executed.
A non-transitory computer-readable medium (MEM 10B as in
In accordance with an example embodiment of the invention as described above there is an apparatus comprising: means for receiving (TRANS 10D, MEM 10B, PROG 10C, and DP 10A as in
In the example aspect of the invention according to the paragraph above, wherein at least the means for receiving, sending, taking into account, and executing or not executing comprises a non-transitory computer readable medium (MEM 10B as in
In accordance with the example embodiments as described in the paragraph above, wherein the ping-pong offset prediction request message is to enable a machine learning based ping-pong offset prediction at the user equipment.
In accordance with the example embodiments as described in the paragraphs above, wherein the ping-pong offset prediction is received by the network node as part of layer 1 measurement reporting.
In accordance with the example embodiments as described in the paragraphs above, wherein the ping-pong offset prediction is used to evaluate the handover; and a medium access control element command is prepared for executing the handover.
In accordance with the example embodiments as described in the paragraphs above, wherein the apparatus uses the ping-pong offset prediction as part of a handover decision to trigger executing or not executing the handover back toward the previous serving cell.
In accordance with the example embodiments as described in the paragraphs above, wherein the ping-pong offset value is utilized as part of a handover decision rule to evaluate whether or not a previous serving cell power exceeds a current serving cell power plus the ping-pong offset value.
In accordance with the example embodiments as described in the paragraphs above, wherein the user equipment uses the ping-pong offset prediction as part of a handover decision to trigger executing or not executing the handover back toward the serving cell.
In accordance with the example embodiments as described in the paragraphs above, wherein the network either uses the offset directly as part of the HO decision or uses it as part of the L1/L2 or CHO condition that is sent to the UE to decide the HO execution on its side.
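The two network-side alternatives above may be sketched, as a non-limiting illustration, in the following way. The function name and the shape of the returned configuration are hypothetical and shown only to contrast the two options:

```python
def apply_ppoffset(ppoffset_db, use_conditional_handover):
    """Sketch of the two network-side alternatives described above.

    The network either consumes the predicted ping-pong offset directly in
    its own handover decision, or embeds it into a conditional handover
    (CHO) / L1-L2 triggering condition that the UE evaluates on its side.
    """
    if use_conditional_handover:
        # Ship the offset to the UE inside the CHO execution condition,
        # so the UE decides the HO execution itself.
        return {"cho_condition": {"offset_db": ppoffset_db}}
    # Keep the decision at the network: the offset enters the local
    # handover decision rule directly.
    return {"network_decision_offset_db": ppoffset_db}
```

The design choice between the two branches is where the decision is evaluated: at the network node, or delegated to the UE via the configured condition.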
In accordance with the example embodiments as described in the paragraphs above, wherein the sending is based on determining that a handover of the user equipment from a serving cell back to the previous serving cell is to be executed.
A non-transitory computer-readable medium (MEM 12B and/or MEM 13B as in
In accordance with an example embodiment of the invention as described above there is an apparatus comprising: means for sending (TRANS 12D and/or TRANS 13D, MEM 12B and/or MEM 13B, PROG 12C and/or PROG. 13C, and DP 12A and/or DP 13A as in
In the example aspect of the invention according to the paragraph above, wherein at least the means for determining, receiving, sending, and executing or not executing comprises a non-transitory computer readable medium [MEM 12B and/or MEM 13B as in
Further in accordance with example embodiments of the invention there is performing operations which may be performed by a device such as, but not limited to, an NN 12 and/or NN 13 as in
In accordance with the example embodiments as described in the paragraphs above, wherein the ping-pong offset prediction takes into account at least the received signal levels from the previous serving cell as input to a pre-trained machine learning model that predicts an optimal ping-pong offset.
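As a non-limiting illustration of such an inference step, a minimal sketch is given below. The linear model, the feature layout (recent received signal levels plus UE speed), the weights/bias, and the 0-10 dB clipping range are all assumptions standing in for an arbitrary pre-trained model; no specific model architecture is implied:

```python
import numpy as np

def predict_ppoffset(rsrp_history_dbm, speed_mps, weights, bias):
    """Hypothetical inference with a pre-trained ping-pong offset model.

    The feature vector combines recent received signal levels from the
    previous serving cell with the UE speed; a pre-trained linear model
    (weights and bias assumed to come from offline training) maps the
    features to an offset in dB, clipped to an illustrative 0..10 dB range.
    """
    features = np.append(np.asarray(rsrp_history_dbm, dtype=float), speed_mps)
    offset_db = float(features @ weights + bias)
    return float(np.clip(offset_db, 0.0, 10.0))
```

In practice the model could equally be a neural network or any other regressor; the essential point is that signal-level history (and optionally speed/trajectory) is mapped to a single offset value used in the handover decision.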
In accordance with the example embodiments as described in the paragraphs above, wherein the ping-pong offset prediction is sent to the network node as part of layer 1 measurement reporting.
A non-transitory computer-readable medium (MEM 12B and/or MEM 13B as in
In accordance with an example embodiment of the invention as described above there is an apparatus comprising: means for receiving ( ) from a network node (NN 12 and/or NN 13 as in
Further, in accordance with example embodiments of the invention there is circuitry for performing operations in accordance with example embodiments of the invention as disclosed herein. This circuitry can include any type of circuitry, including content coding circuitry, content decoding circuitry, processing circuitry, image generation circuitry, data analysis circuitry, etc. Further, this circuitry can include discrete circuitry, application-specific integrated circuitry (ASIC), and/or field-programmable gate array (FPGA) circuitry, etc., as well as a processor specifically configured by software to perform the respective function, or dual-core processors with software and corresponding digital signal processors, etc. Additionally, there are provided the necessary inputs to and outputs from the circuitry, the function performed by the circuitry, and the interconnection (perhaps via the inputs and outputs) of the circuitry with other components, which may include other circuitry, in order to perform example embodiments of the invention as described herein.
In accordance with example embodiments of the invention as disclosed in this application, the “circuitry” provided can include at least one or more or all of the following:
In general, the various embodiments may be implemented in hardware or special purpose circuits, software, logic or any combination thereof. For example, some aspects may be implemented in hardware, while other aspects may be implemented in firmware or software which may be executed by a controller, microprocessor or other computing device, although the invention is not limited thereto. While various aspects of the invention may be illustrated and described as block diagrams, flow charts, or using some other pictorial representation, it is well understood that these blocks, apparatus, systems, techniques or methods described herein may be implemented in, as non-limiting examples, hardware, software, firmware, special purpose circuits or logic, general purpose hardware or controller or other computing devices, or some combination thereof.
Embodiments of the inventions may be practiced in various components such as integrated circuit modules. The design of integrated circuits is by and large a highly automated process. Complex and powerful software tools are available for converting a logic level design into a semiconductor circuit design ready to be etched and formed on a semiconductor substrate.
The word “exemplary” is used herein to mean “serving as an example, instance, or illustration.” Any embodiment described herein as “exemplary” is not necessarily to be construed as preferred or advantageous over other embodiments. All of the embodiments described in this Detailed Description are exemplary embodiments provided to enable persons skilled in the art to make or use the invention and not to limit the scope of the invention which is defined by the claims.
The foregoing description has provided by way of exemplary and non-limiting examples a full and informative description of the best method and apparatus presently contemplated by the inventors for carrying out the invention. Various modifications and adaptations may become apparent to those skilled in the relevant arts in view of the foregoing description, when read in conjunction with the accompanying drawings and the appended claims. However, all such and similar modifications of the teachings of this invention will still fall within the scope of this invention.
It should be noted that the terms “connected,” “coupled,” or any variant thereof, mean any connection or coupling, either direct or indirect, between two or more elements, and may encompass the presence of one or more intermediate elements between two elements that are “connected” or “coupled” together. The coupling or connection between the elements can be physical, logical, or a combination thereof. As employed herein two elements may be considered to be “connected” or “coupled” together by the use of one or more wires, cables and/or printed electrical connections, as well as by the use of electromagnetic energy, such as electromagnetic energy having wavelengths in the radio frequency region, the microwave region and the optical (both visible and invisible) region, as several non-limiting and non-exhaustive examples.
Furthermore, some of the features of the preferred embodiments of this invention could be used to advantage without the corresponding use of other features. As such, the foregoing description should be considered as merely illustrative of the principles of the invention, and not in limitation thereof.