Aspects of the present disclosure relate to wireless communications, and more particularly, to techniques for implementing machine learning and/or artificial intelligence aspects in a radio access network (RAN).
Wireless communications systems are widely deployed to provide various telecommunication services such as telephony, video, data, messaging, broadcasts, or other similar types of services. These wireless communications systems may employ multiple-access technologies capable of supporting communications with multiple users by sharing available wireless communications system resources with those users.
Although wireless communications systems have made great technological advancements over many years, challenges still exist. For example, complex and dynamic environments can still attenuate or block signals between wireless transmitters and wireless receivers. Accordingly, there is a continuous desire to improve the technical performance of wireless communications systems, including, for example: improving speed and data carrying capacity of communications, improving efficiency of the use of shared communications mediums, reducing power used by transmitters and receivers while performing communications, improving reliability of wireless communications, avoiding redundant transmissions and/or receptions and related processing, improving the coverage area of wireless communications, increasing the number and types of devices that can access wireless communications systems, increasing the ability for different types of devices to intercommunicate, increasing the number and type of wireless communications mediums available for use, and the like. Consequently, there exists a need for further improvements in wireless communications systems to overcome the aforementioned technical challenges and others.
One aspect provides a method of wireless communication by a first network entity. The method includes providing, to a second network entity, an indication of cross-node machine learning information used for a cross-node machine learning session between the first network entity and a user equipment (UE); obtaining machine learning information associated with the UE; and controlling the cross-node machine learning session based at least in part on the machine learning information.
Another aspect provides a method of wireless communication by a first network entity. The method includes obtaining, from a second network entity, an indication of cross-node machine learning information used for a cross-node machine learning session between the second network entity and a UE; providing, to the UE, a configuration for the cross-node machine learning session based at least in part on the cross-node machine learning information; obtaining machine learning information associated with the UE; providing, to the second network entity, the machine learning information; obtaining, from the second network entity, output data generated from the machine learning information; and communicating with the UE based at least in part on the output data.
Another aspect provides a method for wireless communications by an apparatus (e.g., a user equipment). The method includes providing, to a first network entity, capability information associated with a cross-node machine learning session between the apparatus and a second network entity; obtaining, from the first network entity, a configuration for the cross-node machine learning session; and communicating with the second network entity in accordance with the configuration for the cross-node machine learning session.
Other aspects provide: one or more apparatuses operable, configured, or otherwise adapted to perform any portion of any method described herein (e.g., such that performance may be by only one apparatus or in a distributed fashion across multiple apparatuses); one or more non-transitory, computer-readable media comprising instructions that, when executed by one or more processors of one or more apparatuses, cause the one or more apparatuses to perform any portion of any method described herein (e.g., such that instructions may be included in only one computer-readable medium or in a distributed fashion across multiple computer-readable media, such that instructions may be executed by only one processor or by multiple processors in a distributed fashion, such that each apparatus of the one or more apparatuses may include one processor or multiple processors, and/or such that performance may be by only one apparatus or in a distributed fashion across multiple apparatuses); one or more computer program products embodied on one or more computer-readable storage media comprising code for performing any portion of any method described herein (e.g., such that code may be stored in only one computer-readable medium or across computer-readable media in a distributed fashion); and/or one or more apparatuses comprising one or more means for performing any portion of any method described herein (e.g., such that performance would be by only one apparatus or by multiple apparatuses in a distributed fashion). By way of example, an apparatus may comprise a processing system, a device with a processing system, or processing systems cooperating over one or more networks.
The following description and the appended figures set forth certain features for purposes of illustration.
The appended figures depict certain features of the various aspects described herein and are not to be considered limiting of the scope of this disclosure.
Aspects of the present disclosure provide apparatuses, methods, processing systems, and computer-readable mediums for performing cross-node machine learning (ML) and/or artificial intelligence (AI) operations in a radio access network (RAN).
In certain cases, a wireless communications system (e.g., a wireless wide area network (WWAN) including, for example, 5G New Radio and/or future WWAN systems) may employ AI/ML to perform any of various wireless communication operations, such as channel state information estimation, beam management, device positioning, etc. As an example, a radio access network entity (e.g., a base station, including a disaggregated base station as further described herein) and a user equipment (UE) may apply paired or distributed AI/ML model(s) with which a joint inference may be performed between the network entity and the UE. A joint inference may use an AI/ML model that is shared among certain entities in a wireless communication system, such as a UE and a base station. In some cases, a network entity (e.g., a base station) may perform certain AI/ML computations based at least in part on AI/ML input obtained from the UE (e.g., decoding or decompression of AI/ML-based feedback or input from the UE). However, the AI/ML processing performed at the network entity, such as a base station and/or disaggregated entities thereof, may be computationally intensive.
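The paired-model joint inference described above can be pictured with a toy sketch: the UE runs one half of a two-sided model (an encoder that compresses channel state information into a small feedback payload), and the network entity runs the paired decoder that reconstructs it. The functions and the top-k compression scheme below are purely illustrative assumptions, not a 3GPP-defined codebook or an actual AI/ML model.

```python
# Hypothetical sketch of a paired (two-sided) AI/ML model for CSI feedback.
# The "model" here is a toy top-k compressor, standing in for a trained
# encoder/decoder pair shared between the UE and the network entity.

def ue_side_encoder(csi, keep=2):
    """UE half of the joint inference: keep only the strongest coefficients."""
    ranked = sorted(range(len(csi)), key=lambda i: abs(csi[i]), reverse=True)
    kept = sorted(ranked[:keep])
    # The (index, quantized value) pairs are the over-the-air feedback payload.
    return [(i, round(csi[i], 2)) for i in kept]

def network_side_decoder(feedback, length):
    """Network half of the joint inference: reconstruct the full CSI vector."""
    csi_hat = [0.0] * length
    for i, v in feedback:
        csi_hat[i] = v
    return csi_hat

csi = [0.05, 0.91, -0.03, 0.62]
payload = ue_side_encoder(csi)                    # computed at the UE
reconstructed = network_side_decoder(payload, len(csi))  # computed network-side
print(payload)        # [(1, 0.91), (3, 0.62)]
print(reconstructed)  # [0.0, 0.91, 0.0, 0.62]
```

In this sketch the computationally heavier decoding side sits at the network entity, which is the processing load that the cross-node techniques described herein allow to be offloaded.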
Technical problems for AI/ML-based wireless communications include, for example, the computational resources used at a network entity for AI/ML processing. The AI/ML processing may use computational resources (e.g., processing and/or storage) that could be used for other operations (e.g., scheduling and/or managing wireless communications), especially when a base station is tasked with managing the communication links for multiple UEs and/or multiple ML functions or models for one or more UEs. In some cases, the AI/ML processing may consume processing capabilities that the base station uses to perform certain network functions, such as scheduling and/or wireless communications management, within a particular performance specification (e.g., a specified latency), or vice versa. In certain cases, deploying additional computational resources to base stations for AI/ML processing may be a costly endeavor for radio access network (RAN) operators.
Aspects described herein overcome the aforementioned technical problem(s) by providing signaling to configure certain entities in a cloud-based RAN, such as a virtual RAN (V-RAN) or open RAN (O-RAN), for a cross-node AI/ML session between a UE and a RAN controller. Certain entities (e.g., a RAN controller, E2 node, or a service management and orchestration (SMO) entity) in the cloud-based RAN may indicate entity-specific support or capability for a cross-node AI/ML session between the UE and the RAN controller to one or more other entities in the cloud-based RAN. For example, a cloud platform serving a cross-node AI/ML application may provide, to a RAN controller, the AI/ML functions or models supported by the cloud platform for cross-node AI/ML operations. Certain entities (e.g., a RAN controller or E2 node) in the cloud-based RAN may configure a UE for the cross-node AI/ML session. For example, a RAN controller may output, to a base station (e.g., a disaggregated base station), an indication of a cross-node AI/ML configuration for the UE.
The techniques to configure certain entities in a cloud-based RAN for a cross-node AI/ML session as described herein may provide any of various beneficial effects and/or advantages. A cross-node AI/ML operation or session between a UE and a RAN controller in a cloud-based RAN for joint inference implementations may allow the RAN-side AI/ML processing to be performed efficiently and/or distributed across a cloud platform (which may facilitate reduced processing latencies, dynamic load balancing, resource sharing, etc.), such as a RAN intelligent controller (RIC). In some cases, the cross-node AI/ML session may allow RAN-side AI/ML processing to be performed at a specialized AI/ML computing device, such as a cloud server having one or more neural network processors, one or more graphics processing units, or any suitable AI/ML-specific processor. The specialized AI/ML computing device may have the capability to perform AI/ML computations more efficiently than a general purpose processor, such as a microprocessor, which may be employed at a base station or an entity associated with a disaggregated base station. Thus, the configuration of the cross-node AI/ML session described herein may facilitate improved wireless communication performance, including, for example, increased throughput, decreased latency, increased network capacity, improved spectral efficiency, etc., due to the efficient RAN-side AI/ML processing enabled by the cross-node AI/ML session and/or the offloading of RAN-side AI/ML processing from a base station and/or an entity associated with a disaggregated base station to a RAN controller.
The techniques and methods described herein may be used for various wireless communications networks. While aspects may be described herein using terminology commonly associated with 3G, 4G, 5G, 6G, and/or other generations of wireless technologies, aspects of the present disclosure may likewise be applicable to other communications systems and standards not explicitly mentioned herein.
Generally, wireless communications network 100 includes various network entities (alternatively, network elements or network nodes). A network entity is generally a communications device and/or a communications function performed by a communications device (e.g., a user equipment (UE), a base station (BS), a component of a BS, a server, etc.). Because such communications devices are part of wireless communications network 100 and facilitate wireless communications, they may be referred to as wireless communications devices. For example, various functions of a network as well as various devices associated with and interacting with a network may be considered network entities. Further, wireless communications network 100 includes terrestrial aspects, such as ground-based network entities (e.g., BSs 102), and non-terrestrial aspects (also referred to herein as non-terrestrial network entities), such as satellite 140 and aircraft 145, which may include network entities on-board (e.g., one or more BSs) capable of communicating with other network elements (e.g., terrestrial BSs) and UEs.
In the depicted example, wireless communications network 100 includes BSs 102, UEs 104, and one or more core networks, such as an Evolved Packet Core (EPC) 160 and 5G Core (5GC) network 190, which interoperate to provide communications services over various communications links, including wired and wireless links.
BSs 102 wirelessly communicate with (e.g., transmit signals to or receive signals from) UEs 104 via communications links 120. The communications links 120 between BSs 102 and UEs 104 may include uplink (UL) (also referred to as reverse link) transmissions from a UE 104 to a BS 102 and/or downlink (DL) (also referred to as forward link) transmissions from a BS 102 to a UE 104. The communications links 120 may use multiple-input and multiple-output (MIMO) antenna technology, including spatial multiplexing, beamforming, and/or transmit diversity in various aspects.
BSs 102 may generally include: a NodeB, enhanced NodeB (eNB), next generation enhanced NodeB (ng-eNB), next generation NodeB (gNB or gNodeB), access point, base transceiver station, radio base station, radio transceiver, transceiver function, transmission reception point, and/or others. Each of BSs 102 may provide communications coverage for a respective coverage area 110, which may sometimes be referred to as a cell, and which may overlap in some cases (e.g., small cell 102′ may have a coverage area 110′ that overlaps the coverage area 110 of a macro cell). A BS may, for example, provide communications coverage for a macro cell (covering a relatively large geographic area), a pico cell (covering a relatively smaller geographic area, such as a sports stadium), a femto cell (covering a relatively smaller geographic area (e.g., a home)), and/or other types of cells.
Generally, a cell may refer to a portion, partition, or segment of wireless communication coverage served by a network entity within a wireless communication network. A cell may have geographic characteristics, such as a geographic coverage area, as well as radio frequency characteristics, such as time and/or frequency resources dedicated to the cell. For example, a specific geographic coverage area may be covered by multiple cells employing different frequency resources (e.g., bandwidth parts) and/or different time resources. As another example, a specific geographic coverage area may be covered by a single cell. In some contexts (e.g., a carrier aggregation scenario and/or multi-connectivity scenario), the terms “cell” or “serving cell” may refer to or correspond to a specific carrier frequency (e.g., a component carrier) used for wireless communications, and a “cell group” may refer to or correspond to multiple carriers used for wireless communications. As examples, in a carrier aggregation scenario, a UE may communicate on multiple component carriers corresponding to multiple (serving) cells in the same cell group, and in a multi-connectivity (e.g., dual connectivity) scenario, a UE may communicate on multiple component carriers corresponding to multiple cell groups.
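The carrier aggregation and multi-connectivity terminology above can be pictured with a small, purely illustrative data model; the cell-group names and carrier frequencies below are assumptions for the sketch, not values from the disclosure.

```python
# Illustrative (non-normative) model of serving cells and cell groups.
# In carrier aggregation, a UE communicates on multiple component carriers
# (CCs) within one cell group; in dual connectivity, across two cell groups.

ue_config = {
    "MCG": ["CC1 (3500 MHz)", "CC2 (3700 MHz)"],  # master cell group
    "SCG": ["CC3 (28000 MHz)"],                   # secondary cell group
}

# Each component carrier corresponds to a serving cell for the UE.
serving_cells = [cc for group in ue_config.values() for cc in group]
print(len(serving_cells))   # 3 serving cells in total
print(len(ue_config) > 1)   # True -> a multi-connectivity (dual) scenario
```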
While BSs 102 are depicted in various aspects as unitary communications devices, BSs 102 may be implemented in various configurations. For example, one or more components of a base station may be disaggregated, including a central unit (CU), one or more distributed units (DUs), one or more radio units (RUs), a Near-Real Time (Near-RT) RAN Intelligent Controller (RIC), or a Non-Real Time (Non-RT) RIC, to name a few examples. In another example, various aspects of a base station may be virtualized. More generally, a base station (e.g., BS 102) may include components that are located at a single physical location or components located at various physical locations. In examples in which a base station includes components that are located at various physical locations, the various components may each perform functions such that, collectively, the various components achieve functionality that is similar to a base station that is located at a single physical location. In some aspects, a base station including components that are located at various physical locations may be referred to as a disaggregated radio access network architecture, such as an Open RAN (O-RAN) or Virtualized RAN (VRAN) architecture.
Different BSs 102 within wireless communications network 100 may also be configured to support different radio access technologies, such as 3G, 4G, and/or 5G. For example, BSs 102 configured for 4G LTE (collectively referred to as Evolved Universal Mobile Telecommunications System (UMTS) Terrestrial Radio Access Network (E-UTRAN)) may interface with the EPC 160 through first backhaul links 132 (e.g., an S1 interface). BSs 102 configured for 5G (e.g., 5G NR or Next Generation RAN (NG-RAN)) may interface with 5GC 190 through second backhaul links 184. BSs 102 may communicate directly or indirectly (e.g., through the EPC 160 or 5GC 190) with each other over third backhaul links 134 (e.g., X2 interface), which may be wired or wireless.
Wireless communications network 100 may subdivide the electromagnetic spectrum into various classes, bands, channels, or other features. In some aspects, the subdivision is provided based on wavelength and frequency, where frequency may also be referred to as a carrier, a subcarrier, a frequency channel, a tone, or a subband. For example, 3GPP currently defines Frequency Range 1 (FR1) as including 410 MHz-7125 MHz, which is often referred to (interchangeably) as “Sub-6 GHz”. Similarly, 3GPP currently defines Frequency Range 2 (FR2) as including 24,250 MHz-71,000 MHz, which is sometimes referred to (interchangeably) as a “millimeter wave” (“mmW” or “mmWave”). In some cases, FR2 may be further defined in terms of sub-ranges, such as a first sub-range FR2-1 including 24,250 MHz-52,600 MHz and a second sub-range FR2-2 including 52,600 MHz-71,000 MHz. A base station configured to communicate using mmWave/near-mmWave radio frequency bands (e.g., a mmWave base station such as BS 180) may utilize beamforming (e.g., 182) with a UE (e.g., 104) to compensate for path loss and improve range.
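The frequency-range boundaries above can be captured in a small helper; this function is an illustrative sketch (not part of the disclosure), and assigning the 52,600 MHz boundary itself to FR2-1 is a simplifying assumption.

```python
# Illustrative helper mapping a carrier frequency (in MHz) to the 3GPP
# frequency ranges described above: FR1 ("Sub-6 GHz") and the two
# millimeter-wave sub-ranges of FR2.

def frequency_range(freq_mhz):
    """Classify a carrier frequency in MHz as FR1, FR2-1, or FR2-2."""
    if 410 <= freq_mhz <= 7125:
        return "FR1"        # often referred to as "Sub-6 GHz"
    if 24250 <= freq_mhz <= 52600:
        return "FR2-1"      # lower mmWave sub-range
    if 52600 < freq_mhz <= 71000:
        return "FR2-2"      # upper mmWave sub-range
    return "unspecified"    # outside the currently defined ranges

print(frequency_range(3500))    # FR1
print(frequency_range(28000))   # FR2-1
print(frequency_range(60000))   # FR2-2
```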
The communications links 120 between BSs 102 and, for example, UEs 104, may be through one or more carriers, which may have different bandwidths (e.g., 5, 10, 15, 20, 100, 400, and/or other MHz), and which may be aggregated in various aspects. Carriers may or may not be adjacent to each other. Allocation of carriers may be asymmetric with respect to DL and UL (e.g., more or fewer carriers may be allocated for DL than for UL).
Communications using higher frequency bands may have higher path loss and a shorter range compared to lower frequency communications. Accordingly, certain base stations (e.g., 180 in
Wireless communications network 100 further includes a Wi-Fi AP 150 in communication with Wi-Fi stations (STAs) 152 via communications links 154 in, for example, a 2.4 GHz and/or 5 GHz unlicensed frequency spectrum.
Certain UEs 104 may communicate with each other using device-to-device (D2D) communications link 158. D2D communications link 158 may use one or more sidelink channels, such as a physical sidelink broadcast channel (PSBCH), a physical sidelink discovery channel (PSDCH), a physical sidelink shared channel (PSSCH), a physical sidelink control channel (PSCCH), and/or a physical sidelink feedback channel (PSFCH).
EPC 160 may include various functional components, including: a Mobility Management Entity (MME) 162, other MMEs 164, a Serving Gateway 166, a Multimedia Broadcast Multicast Service (MBMS) Gateway 168, a Broadcast Multicast Service Center (BM-SC) 170, and/or a Packet Data Network (PDN) Gateway 172, such as in the depicted example. MME 162 may be in communication with a Home Subscriber Server (HSS) 174. MME 162 is the control node that processes the signaling between the UEs 104 and the EPC 160. Generally, MME 162 provides bearer and connection management.
Generally, user plane Internet protocol (IP) packets are transferred through Serving Gateway 166, which itself is connected to PDN Gateway 172. PDN Gateway 172 provides UE IP address allocation as well as other functions. PDN Gateway 172 and the BM-SC 170 are connected to IP Services 176, which may include, for example, the Internet, an intranet, an IP Multimedia Subsystem (IMS), a Packet Switched (PS) streaming service, and/or other IP services.
BM-SC 170 may provide functions for MBMS user service provisioning and delivery. BM-SC 170 may serve as an entry point for content provider MBMS transmission, may be used to authorize and initiate MBMS Bearer Services within a public land mobile network (PLMN), and/or may be used to schedule MBMS transmissions. MBMS Gateway 168 may be used to distribute MBMS traffic to the BSs 102 belonging to a Multicast Broadcast Single Frequency Network (MBSFN) area broadcasting a particular service, and/or may be responsible for session management (start/stop) and for collecting eMBMS related charging information.
5GC 190 may include various functional components, including: an Access and Mobility Management Function (AMF) 192, other AMFs 193, a Session Management Function (SMF) 194, and a User Plane Function (UPF) 195. AMF 192 may be in communication with Unified Data Management (UDM) 196.
AMF 192 is a control node that processes signaling between UEs 104 and 5GC 190. AMF 192 provides, for example, quality of service (QoS) flow and session management.
Internet protocol (IP) packets are transferred through UPF 195, which is connected to the IP Services 197, and which provides UE IP address allocation as well as other functions for 5GC 190. IP Services 197 may include, for example, the Internet, an intranet, an IMS, a PS streaming service, and/or other IP services.
In various aspects, a network entity or network node can be implemented as an aggregated base station, a disaggregated base station, a component of a base station, an integrated access and backhaul (IAB) node, a relay node, or a sidelink node, to name a few examples.
Each of the units, e.g., the CUs 210, the DUs 230, the RUs 240, as well as the Near-RT RICs 225, the Non-RT RICs 215 and the SMO Framework 205, may include one or more interfaces or be coupled to one or more interfaces configured to receive or transmit signals, data, or information (collectively, signals) via a wired or wireless transmission medium. Each of the units, or an associated processor or controller providing instructions to the communications interfaces of the units, can be configured to communicate with one or more of the other units via the transmission medium. For example, the units can include a wired interface configured to receive or transmit signals over a wired transmission medium to one or more of the other units. Additionally or alternatively, the units can include a wireless interface, which may include a receiver, a transmitter or transceiver (such as a radio frequency (RF) transceiver), configured to receive or transmit signals, or both, over a wireless transmission medium to one or more of the other units.
In some aspects, the CU 210 may host one or more higher layer control functions. Such control functions can include radio resource control (RRC), packet data convergence protocol (PDCP), service data adaptation protocol (SDAP), or the like. Each control function can be implemented with an interface configured to communicate signals with other control functions hosted by the CU 210. The CU 210 may be configured to handle user plane functionality (e.g., Central Unit-User Plane (CU-UP)), control plane functionality (e.g., Central Unit-Control Plane (CU-CP)), or a combination thereof. In some implementations, the CU 210 can be logically split into one or more CU-UP units and one or more CU-CP units. The CU-UP unit can communicate bidirectionally with the CU-CP unit via an interface, such as the E1 interface when implemented in an O-RAN configuration. The CU 210 can be implemented to communicate with the DU 230, as necessary, for network control and signaling.
The DU 230 may correspond to a logical unit that includes one or more base station functions to control the operation of one or more RUs 240. In some aspects, the DU 230 may host one or more of a radio link control (RLC) layer, a medium access control (MAC) layer, and one or more high physical (PHY) layers (such as modules for forward error correction (FEC) encoding and decoding, scrambling, modulation and demodulation, or the like) depending, at least in part, on a functional split, such as those defined by the 3rd Generation Partnership Project (3GPP). In some aspects, the DU 230 may further host one or more low PHY layers. Each layer (or module) can be implemented with an interface configured to communicate signals with other layers (and modules) hosted by the DU 230, or with the control functions hosted by the CU 210.
Lower-layer functionality can be implemented by one or more RUs 240. In some deployments, an RU 240, controlled by a DU 230, may correspond to a logical node that hosts RF processing functions, or low-PHY layer functions (such as performing fast Fourier transform (FFT), inverse FFT (iFFT), digital beamforming, physical random access channel (PRACH) extraction and filtering, or the like), or both, based at least in part on the functional split, such as a lower layer functional split. In such an architecture, the RU(s) 240 can be implemented to handle over the air (OTA) communications with one or more UEs 104. In some implementations, real-time and non-real-time aspects of control and user plane communications with the RU(s) 240 can be controlled by the corresponding DU 230. In some scenarios, this configuration can enable the DU(s) 230 and the CU 210 to be implemented in a cloud-based RAN architecture, such as a vRAN architecture.
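The CU/DU/RU split described in the preceding paragraphs can be summarized as a simple mapping of protocol-stack functions to disaggregated units. The table below is a simplified assumption for illustration; the actual assignment depends on the functional split option selected for a given deployment.

```python
# Simplified (assumed) mapping of protocol functions to the disaggregated
# units described above. Real deployments vary by functional split option.

FUNCTIONAL_SPLIT = {
    "CU": ["RRC", "SDAP", "PDCP"],     # higher-layer control/user-plane functions
    "DU": ["RLC", "MAC", "high-PHY"],  # scheduling and upper physical layer
    "RU": ["low-PHY", "RF"],           # FFT/iFFT, beamforming, radio functions
}

def unit_hosting(function):
    """Return which disaggregated unit hosts a given protocol function."""
    for unit, functions in FUNCTIONAL_SPLIT.items():
        if function in functions:
            return unit
    raise ValueError(f"unknown function: {function}")

print(unit_hosting("PDCP"))  # CU
print(unit_hosting("MAC"))   # DU
print(unit_hosting("RF"))    # RU
```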
The SMO Framework 205 may be configured to support RAN deployment and provisioning of non-virtualized and virtualized network elements. For non-virtualized network elements, the SMO Framework 205 may be configured to support the deployment of dedicated physical resources for RAN coverage requirements which may be managed via an operations and maintenance interface (such as an O1 interface). For virtualized network elements, the SMO Framework 205 may be configured to interact with a cloud computing platform (such as an open cloud (O-Cloud) 290) to perform network element life cycle management (such as to instantiate virtualized network elements) via a cloud computing platform interface (such as an O2 interface). Such virtualized network elements can include, but are not limited to, CUs 210, DUs 230, RUs 240 and Near-RT RICs 225. In some implementations, the SMO Framework 205 can communicate with a hardware aspect of a 4G RAN, such as an open eNB (O-eNB) 211, via an O1 interface. Additionally, in some implementations, the SMO Framework 205 can communicate directly with one or more DUs 230 and/or one or more RUs 240 via an O1 interface. The SMO Framework 205 also may include a Non-RT RIC 215 configured to support functionality of the SMO Framework 205.
The Non-RT RIC 215 may be configured to include a logical function that enables non-real-time (e.g., greater than 1 s) control and optimization of RAN elements and resources, Artificial Intelligence/Machine Learning (AI/ML) workflows including model training and updates, or policy-based guidance of applications/features in the Near-RT RIC 225. The Non-RT RIC 215 may be coupled to or communicate with (such as via an A1 interface) the Near-RT RIC 225. The Near-RT RIC 225 may be configured to include a logical function that enables near-real-time control (e.g., on the order of 10 ms-1 s) and optimization of RAN elements and resources via data collection and actions over an interface (such as via an E2 interface) connecting one or more CUs 210, one or more DUs 230, or both, as well as an O-eNB, with the Near-RT RIC 225.
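The control-loop timescales just described can be sketched as a toy dispatcher that routes a control action by its required response time. This routing logic is an illustrative assumption for exposition, not O-RAN-specified behavior.

```python
# Toy dispatcher (illustrative assumption) based on the control-loop
# timescales above: near-real-time actions (~10 ms to 1 s) to the
# Near-RT RIC, slower actions (> 1 s) to the Non-RT RIC, and anything
# faster than the near-RT loop to the E2 node itself.

def select_ric(deadline_ms):
    """Pick the control loop able to meet a deadline given in milliseconds."""
    if deadline_ms < 10:
        return "E2 node"        # real-time, below the RIC control loops
    if deadline_ms <= 1000:
        return "Near-RT RIC"    # near-real-time loop (~10 ms to 1 s)
    return "Non-RT RIC"         # non-real-time loop (> 1 s)

print(select_ric(100))    # Near-RT RIC
print(select_ric(60000))  # Non-RT RIC
```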
In some implementations, to generate AI/ML functions or models to be deployed in the Near-RT RIC 225, the Non-RT RIC 215 may receive parameters or external enrichment information from external servers. Such information may be utilized by the Near-RT RIC 225 and may be received at the SMO Framework 205 or the Non-RT RIC 215 from non-network data sources or from network functions. In some examples, the Non-RT RIC 215 or the Near-RT RIC 225 may be configured to tune RAN behavior or performance. For example, the Non-RT RIC 215 may monitor long-term trends and patterns for performance and employ AI/ML functions or models to perform corrective actions through the SMO Framework 205 (such as reconfiguration via O1) or via creation of RAN management policies (such as A1 policies).
Generally, BS 102 includes various processors (e.g., 318, 320, 330, 338, and 340), antennas 334a-t (collectively 334), transceivers 332a-t (collectively 332), which include modulators and demodulators, and other aspects, which enable wireless transmission of data (e.g., data source 312) and wireless reception of data (e.g., data sink 314). For example, BS 102 may send data to and receive data from UE 104. BS 102 includes controller/processor 340, which may be configured to implement various functions described herein related to wireless communications.
Generally, UE 104 includes various processors (e.g., 358, 364, 366, 370, and 380), antennas 352a-r (collectively 352), transceivers 354a-r (collectively 354), which include modulators and demodulators, and other aspects, which enable wireless transmission of data (e.g., retrieved from data source 362) and wireless reception of data (e.g., provided to data sink 360). UE 104 includes controller/processor 380, which may be configured to implement various functions described herein related to wireless communications.
With regard to an example downlink transmission, BS 102 includes a transmit processor 320 that may receive data from a data source 312 and control information from a controller/processor 340. The control information may be for the physical broadcast channel (PBCH), physical control format indicator channel (PCFICH), physical hybrid automatic repeat request (HARQ) indicator channel (PHICH), physical downlink control channel (PDCCH), group common PDCCH (GC PDCCH), and/or others. The data may be for the physical downlink shared channel (PDSCH), in some examples.
Transmit processor 320 may process (e.g., encode and symbol map) the data and control information to obtain data symbols and control symbols, respectively. Transmit processor 320 may also generate reference symbols, such as for the primary synchronization signal (PSS), secondary synchronization signal (SSS), PBCH demodulation reference signal (DMRS), and channel state information reference signal (CSI-RS).
Transmit (TX) multiple-input multiple-output (MIMO) processor 330 may perform spatial processing (e.g., precoding) on the data symbols, the control symbols, and/or the reference symbols, if applicable, and may provide output symbol streams to the modulators (MODs) in transceivers 332a-332t. Each modulator in transceivers 332a-332t may process a respective output symbol stream to obtain an output sample stream. Each modulator may further process (e.g., convert to analog, amplify, filter, and upconvert) the output sample stream to obtain a downlink signal. Downlink signals from the modulators in transceivers 332a-332t may be transmitted via the antennas 334a-334t, respectively.
In order to receive the downlink transmission, UE 104 includes antennas 352a-352r that may receive the downlink signals from the BS 102 and may provide received signals to the demodulators (DEMODs) in transceivers 354a-354r, respectively. Each demodulator in transceivers 354a-354r may condition (e.g., filter, amplify, downconvert, and digitize) a respective received signal to obtain input samples. Each demodulator may further process the input samples to obtain received symbols.
RX MIMO detector 356 may obtain received symbols from all the demodulators in transceivers 354a-354r, perform MIMO detection on the received symbols if applicable, and provide detected symbols. Receive processor 358 may process (e.g., demodulate, deinterleave, and decode) the detected symbols, provide decoded data for the UE 104 to a data sink 360, and provide decoded control information to a controller/processor 380.
With regard to an example uplink transmission, UE 104 further includes a transmit processor 364 that may receive and process data (e.g., for the PUSCH) from a data source 362 and control information (e.g., for the physical uplink control channel (PUCCH)) from the controller/processor 380. Transmit processor 364 may also generate reference symbols for a reference signal (e.g., for the sounding reference signal (SRS)). The symbols from the transmit processor 364 may be precoded by a TX MIMO processor 366 if applicable, further processed by the modulators in transceivers 354a-354r (e.g., for SC-FDM), and transmitted to BS 102.
At BS 102, the uplink signals from UE 104 may be received by antennas 334a-t, processed by the demodulators in transceivers 332a-332t, detected by a RX MIMO detector 336 if applicable, and further processed by a receive processor 338 to obtain decoded data and control information sent by UE 104. Receive processor 338 may provide the decoded data to a data sink 314 and the decoded control information to the controller/processor 340.
Memories 342 and 382 may store data and program codes for BS 102 and UE 104, respectively.
Scheduler 344 may schedule UEs for data transmission on the downlink and/or uplink.
In various aspects, BS 102 may be described as transmitting and receiving various types of data associated with the methods described herein. In these contexts, “transmitting” may refer to various mechanisms of outputting data, such as outputting data from data source 312, scheduler 344, memory 342, transmit processor 320, controller/processor 340, TX MIMO processor 330, transceivers 332a-332t, antennas 334a-334t, and/or other aspects described herein. Similarly, “receiving” may refer to various mechanisms of obtaining data, such as obtaining data from antennas 334a-334t, transceivers 332a-332t, RX MIMO detector 336, controller/processor 340, receive processor 338, scheduler 344, memory 342, and/or other aspects described herein.
In various aspects, UE 104 may likewise be described as transmitting and receiving various types of data associated with the methods described herein. In these contexts, “transmitting” may refer to various mechanisms of outputting data, such as outputting data from data source 362, memory 382, transmit processor 364, controller/processor 380, TX MIMO processor 366, transceivers 354a-354r, antennas 352a-352r, and/or other aspects described herein. Similarly, “receiving” may refer to various mechanisms of obtaining data, such as obtaining data from antennas 352a-352r, transceivers 354a-354r, RX MIMO detector 356, controller/processor 380, receive processor 358, memory 382, and/or other aspects described herein.
In some aspects, a processor may be configured to perform various operations, such as those associated with the methods described herein, and transmit (output) to or receive (obtain) data from another interface that is configured to transmit or receive, respectively, the data.
In various aspects, artificial intelligence (AI) processors 318 and 370 may perform AI processing for BS 102 and/or UE 104, respectively, such as neural network processing, deep learning, tensor processing, etc. The AI processor 318 may include AI accelerator hardware or circuitry such as one or more neural processing units (NPUs), one or more neural network processors, one or more tensor processors, one or more deep learning processors, etc. The AI processor 370 may likewise include AI accelerator hardware or circuitry. As an example, the AI processor 370 may perform AI-based beam management, AI-based channel state feedback (CSF), AI-based antenna tuning, and/or AI-based positioning (e.g., non-line of sight positioning). In some cases, the AI processor 318 may process feedback from the UE 104 (e.g., CSF) using hardware accelerated AI inferences and/or AI training. The AI processor 318 may decode compressed CSF from the UE 104, for example, using a hardware accelerated AI inference associated with the CSF. In certain cases, the AI processor 318 may perform certain RAN-based functions including, for example, network planning, network performance management, energy-efficient network operations, etc.
In particular,
Wireless communications systems may utilize orthogonal frequency division multiplexing (OFDM) with a cyclic prefix (CP) on the uplink and downlink. Such systems may also support half-duplex operation using time division duplexing (TDD). OFDM and single-carrier frequency division multiplexing (SC-FDM) partition the system bandwidth (e.g., as depicted in
A wireless communications frame structure may be frequency division duplex (FDD), in which, for a particular set of subcarriers, subframes within the set of subcarriers are dedicated for either DL or UL. Wireless communications frame structures may also be time division duplex (TDD), in which, for a particular set of subcarriers, subframes within the set of subcarriers are dedicated for both DL and UL.
In
In certain aspects, the number of slots within a subframe (e.g., a slot duration in a subframe) is based on a numerology, which may define a frequency domain subcarrier spacing and symbol duration as further described herein. In certain aspects, given a numerology μ, there are 2^μ slots per subframe. Thus, numerologies (μ) 0 to 6 may allow for 1, 2, 4, 8, 16, 32, and 64 slots, respectively, per subframe. In some cases, the extended CP (e.g., 12 symbols per slot) may be used with a specific numerology, e.g., numerology 2 allowing for 4 slots per subframe. The subcarrier spacing and symbol length/duration are a function of the numerology. The subcarrier spacing may be equal to 2^μ×15 kHz, where μ is the numerology 0 to 6. As an example, the numerology μ=0 corresponds to a subcarrier spacing of 15 kHz, and the numerology μ=6 corresponds to a subcarrier spacing of 960 kHz. The symbol length/duration is inversely related to the subcarrier spacing.
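These numerology relationships can be sketched in Python (a minimal illustration; the function names are ours, not drawn from any specification):

```python
def subcarrier_spacing_khz(mu: int) -> int:
    """Subcarrier spacing is 2^mu x 15 kHz for numerology mu in 0..6."""
    if not 0 <= mu <= 6:
        raise ValueError("numerology mu must be in 0..6")
    return (2 ** mu) * 15

def slots_per_subframe(mu: int) -> int:
    """With normal CP, a 1 ms subframe holds 2^mu slots."""
    if not 0 <= mu <= 6:
        raise ValueError("numerology mu must be in 0..6")
    return 2 ** mu

# mu = 0 gives 15 kHz and 1 slot; mu = 6 gives 960 kHz and 64 slots.
```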
As depicted in
As illustrated in
A primary synchronization signal (PSS) may be within symbol 2 of particular subframes of a frame. The PSS is used by a UE (e.g., 104 of
A secondary synchronization signal (SSS) may be within symbol 4 of particular subframes of a frame. The SSS is used by a UE to determine a physical layer cell identity group number and radio frame timing.
Based on the physical layer identity and the physical layer cell identity group number, the UE can determine a physical cell identifier (PCI). Based on the PCI, the UE can determine the locations of the aforementioned DMRS. The physical broadcast channel (PBCH), which carries a master information block (MIB), may be logically grouped with the PSS and SSS to form a synchronization signal (SS)/PBCH block. The MIB provides a number of RBs in the system bandwidth and a system frame number (SFN). The physical downlink shared channel (PDSCH) carries user data, broadcast system information not transmitted through the PBCH such as system information blocks (SIBs), and/or paging messages.
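The PCI derivation can be illustrated with a short sketch, assuming the NR numbering in which the PCI equals three times the cell identity group number (from the SSS) plus the physical layer identity (from the PSS), yielding 1008 possible values:

```python
def physical_cell_id(group: int, identity: int) -> int:
    """Combine the physical layer cell identity group number (0..335)
    and the physical layer identity (0..2) into a PCI (0..1007)."""
    if not 0 <= group <= 335:
        raise ValueError("group number must be in 0..335")
    if not 0 <= identity <= 2:
        raise ValueError("physical layer identity must be in 0..2")
    return 3 * group + identity
```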
As illustrated in
In certain cases, a wireless communications system (e.g., a WWAN including, for example, a 5G New Radio system and/or any future wireless communications system) may employ AI/ML to perform any of various wireless communication operations, such as channel state information estimation and feedback, beam management, device positioning, etc. As an example, an AI/ML model (e.g., a joint inference model used at the UE) may allow the UE to estimate the channel conditions of a particular communication link (e.g., one or more beams and/or frequency bands) based on measurements associated with a different communication link (e.g., different beams and/or frequency bands). The AI/ML model may allow a UE to predict the channel conditions associated with one or more narrow beams based on channel measurements associated with one or more wide beams. In certain aspects, a joint inference may be used at a UE and a network entity in a RAN. In such cases, the network entity (e.g., a base station) may perform certain AI/ML computations based at least in part on AI/ML input obtained from the UE (e.g., decoding or decompression of AI/ML-based feedback or input from the UE).
As an example, an AI/ML-based channel state information feedback (CSF) encoder may be deployed at the UE to provide compressed CSI (which may be readable by an AI/ML model) to the RAN, and an AI/ML-based CSF decoder may be deployed at the network entity to decompress the CSF and use the CSF for channel scheduling and/or configuration of a communication link with the UE and/or other UEs. In some cases, the AI/ML model may be used to predict or infer the channel conditions associated with the communication link between the UE and the network entity. The AI/ML-based channel conditions may be used to determine any of various wireless communication parameters associated with the communication link, such as a frequency band, subcarrier spacing, channel bandwidth, bandwidth part, time division duplex pattern, modulation and coding scheme (MCS), code rate, carrier aggregation, etc. In some cases, a partial inference may be performed at the UE, and then the remaining inference may be performed at the RAN, and/or vice versa. The UE may receive AI/ML specific control or input from the RAN, and/or vice versa. However, the AI/ML processing performed at the network entity, such as a base station and/or certain disaggregated entities thereof (e.g., a CU and/or DU), may be computationally intensive.
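As a loose illustration of the encoder/decoder split described above (a toy stand-in, not an actual AI/ML model), a UE-side encoder might compress a CSI vector to its strongest coefficients and the network-side decoder reconstruct a sparse estimate:

```python
def csf_encode(csi, k=2):
    """Toy 'encoder' at the UE: keep only the k largest-magnitude CSI
    coefficients (a stand-in for a learned compression model)."""
    ranked = sorted(range(len(csi)), key=lambda i: abs(csi[i]), reverse=True)
    return sorted((i, csi[i]) for i in ranked[:k])

def csf_decode(compressed, length):
    """Toy 'decoder' at the network side: rebuild a sparse CSI vector."""
    out = [0.0] * length
    for index, value in compressed:
        out[index] = value
    return out
```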
The AI/ML processing may use computational resources (e.g., processing and/or storage) that could be used for other operations (e.g., scheduling and/or managing wireless communications), especially when a base station is tasked with managing the communication links for multiple UEs and/or multiple ML functions or models for one or more UEs. In some cases, the AI/ML processing may consume the processing capabilities of the base station to perform certain network functions, such as scheduling and/or wireless communications management, within a particular performance specification (e.g., a specified latency), or vice versa. In certain cases, deploying additional computational resources to base stations for AI/ML processing may be a costly endeavor for radio access network (RAN) operators.
Certain aspects of the present disclosure provide signaling to configure certain entities (e.g., a UE, an E2 node, a RAN controller, etc.) in a cloud-based RAN architecture for a cross-node AI/ML session between a UE and RAN controller.
Generally, a cross-node AI/ML session between a UE and a network entity may refer to a scenario where a UE and a network entity perform AI/ML operations, for example, using a shared AI/ML function or model for predicting, inferring, encoding, and/or decoding certain information associated with a wireless communication link, such as channel characteristics, device positioning, and/or beam management. In certain cases, a cross-node AI/ML session may include a UE using an AI/ML model to predict, infer, encode, and/or decode the information associated with the wireless communication link, and the cross-node AI/ML session may further include the network entity monitoring the performance of the AI/ML model deployed at the UE and performing certain lifecycle management tasks associated with the AI/ML model. In some cases, the UE may send, to the network entity, AI/ML input(s) (e.g., measurements associated with channel conditions) and/or AI/ML output(s) (e.g., channel state feedback) for processing or monitoring at the network entity, or vice versa. As further described herein, the AI/ML processing at the RAN for a joint inference associated with a UE-network entity pair (e.g., a cross-node) may be offloaded to a separate computing device, such as a RIC in a cloud-based RAN architecture (e.g., V-RAN and/or O-RAN), independent from a base station and/or certain disaggregated network entities associated with a base station, such as an E2 node.
An E2 node may include any physical or logical RAN node having a terminating E2 interface including, for example, a CU (for control plane and/or user plane traffic) and/or a DU. A cloud-based RAN may use a cloud computing environment to facilitate interoperable interfaces, RAN virtualization, and/or AI/ML operations. A cloud-based RAN may use a shared pool of configurable computing resources (e.g., networks, network bandwidth, servers, processing, memory, storage, applications, virtual machines, and services) to perform certain network functions, for example, as described herein with respect to
Certain entities in the cloud-based RAN may indicate entity-specific support or capability for a cross-node AI/ML session and/or operation to one or more other entities in the cloud-based RAN, for example, as described herein with respect to
Certain entities (e.g., a near-RT RIC or E2 node) in the cloud-based RAN may configure a UE for a cross-node AI/ML session between the UE and a RAN controller, for example, as described herein with respect to
Referring to
The RAN-side AI/ML operation(s) associated with the cross-node AI/ML session may be performed at the Near-RT RIC 525 via a cloud platform 562 running the xApp 560, which may be or include an AI/ML-specific application. The RAN-side AI/ML operations may be offloaded from the CU 510 and/or DU 530 to the xApp 560 via the cross-node AI/ML session between the UE 504 and the Near-RT RIC 525, allowing the CU 510 and/or the DU 530 to perform other networking operations, such as scheduling and/or managing communication links (e.g., updating communication link settings) with one or more UEs.
In some cases, the cross-node AI/ML session between the UE 504 and the Near-RT RIC 525 may be used to perform AI/ML assisted CSI encoding/decoding, beam management, and/or device positioning. As an example, a CSF decoder may be deployed at the xApp 560 running at the Near-RT RIC 525 and/or the cloud platform 562 associated with the Near-RT RIC 525. The Near-RT RIC 525 and/or the cloud platform 562 may be collocated with the DU 530 and/or CU 510, for example. The xApp 560 may provide decompressed channel state information to the DU 530 and/or CU 510, which may perform scheduling functions, for example, based on the decompressed channel state information. The cross-node AI/ML session may allow secure AI/ML functions or models to be implemented at the encoder/decoder, for example.
In certain aspects, a cloud-based RAN controller (e.g., a Near-RT RIC) and/or E2 node may obtain capability information associated with a UE. The UE capability information may facilitate the RAN controller and/or E2 node to determine a cross-node AI/ML configuration for the UE. For example, the UE capability information may indicate one or more cross-node AI/ML capabilities associated with the UE, including, for example, an AI/ML function name or identifier, a module structure, an AI/ML feature, and/or an AI/ML feature group. The UE may indicate to the RAN which AI/ML features and corresponding models are supported by the UE.
In certain cases, the RAN may manage the UE AI/ML operations at a feature level, such as a CSI feedback feature, a beam management feature, a device positioning feature, etc. In such cases, the UE AI/ML capability information may include a list of one or more AI/ML feature names, for example, ml-CSIFeedback, ml-beamManagement, ml-Positioning, etc.
In some cases, the RAN may manage the AI/ML models associated with a feature (e.g., device positioning) used at a UE. In such cases, the UE AI/ML capability information may include a list of one or more AI/ML feature names, a list of one or more model identifiers supported per AI/ML feature name, and/or one or more indications that one or more specific models are loaded at the UE (e.g., model load state flag(s)).
In certain cases, the RAN may manage the model structure associated with an AI/ML model used at a UE. For example, the RAN may configure a specific model structure (e.g., indicating a model structure (MS) identifier (ID)) and/or a parameter set (PS) for a feature (e.g., beam management) used at the UE for one or more AI/ML models. As an example, the model structure may identify an architecture associated with a particular AI/ML model, such as decision tree, deep neural network, feedforward neural networks, convolutional neural networks, and transformers. In such cases, the UE AI/ML capability information may include a list of one or more AI/ML feature names and a list of one or more MS IDs supported per AI/ML feature name. In certain aspects, the PS values may not be expected to depend on UE capabilities, and thus, PS information may not be part of the UE capability information.
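The capability information described above might be represented as follows (the field names are illustrative assumptions, not 3GPP information elements):

```python
from dataclasses import dataclass, field

@dataclass
class UeAiMlCapability:
    """Illustrative container for UE AI/ML capability information."""
    feature_names: list                                    # e.g., ml-CSIFeedback
    models_per_feature: dict = field(default_factory=dict) # feature -> model IDs
    loaded_models: set = field(default_factory=set)        # model load state flags
    ms_per_feature: dict = field(default_factory=dict)     # feature -> MS IDs

cap = UeAiMlCapability(
    feature_names=["ml-CSIFeedback", "ml-Positioning"],
    models_per_feature={"ml-Positioning": ["model-A", "model-B"]},
    loaded_models={"model-A"},
    ms_per_feature={"ml-CSIFeedback": ["ms-1"]},
)
```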
In certain aspects, the xApp may perform a registration procedure with the RIC (e.g., the Near-RT RIC). For example,
At 602, the xApp 560 sends a registration request to a RIC, such as the Near-RT RIC 225. During the xApp registration, the xApp may provide cross-node AI/ML information, including, for example, the RIC supported RAN function(s) and one or more decoders for UE-side models. The cross-node AI/ML information may include one or more AI/ML functions (e.g., CSF, beam management, and/or positioning), AI/ML features or feature groups (e.g., certain features associated with a function), AI/ML models (e.g., logical AI/ML models), AI/ML model structures (MSs), etc. supported for a cross-node AI/ML session between a UE and the xApp 560. In some cases, cross-node AI/ML information may include the machine learning function name(s) (MLFN), feature(s), and/or feature groups associated with the RAN-side AI/ML processing. The UE-side decoders may be indicated via a list of supported UE-side models and/or MS identifiers (IDs).
At 604, the Near-RT RIC 225 may send, to the xApp 560, a registration response to confirm or acknowledge the registration of all or some of the features supported by the xApp 560. For xApp configuration updates, an SMO module of the cloud-based RAN may configure the xApp with updated cross-node AI/ML information (e.g., a new MLFN and/or new models or MSs per MLFN).
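The registration exchange at 602 and 604 can be sketched as follows (the message fields and handler are hypothetical, not O-RAN-defined):

```python
def register_xapp(ric_supported_functions, request):
    """Toy RIC-side registration handler: acknowledge only the machine
    learning function names (MLFNs) the RIC itself supports."""
    accepted = [f for f in request["mlfn"] if f in ric_supported_functions]
    return {"status": "registered" if accepted else "rejected",
            "accepted_mlfn": accepted}

# Registration request carrying cross-node AI/ML information (602).
request = {"xapp_id": "xapp-560",
           "mlfn": ["csf", "beam-management", "positioning"],
           "ue_side_models": ["ms-1", "ms-2"]}

# Registration response acknowledging a subset of features (604).
response = register_xapp({"csf", "beam-management"}, request)
```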
In certain aspects, the RIC may obtain UE capability information. For example, the RIC may determine various features associated with the cross-node AI/ML session based on the UE capability information as further described herein.
At 704, in response to the UE capability enquiry, the UE(s) 104 sends, to the E2 node 568, the corresponding UE capability information. The UE capability information may indicate the AI/ML features or functions that the UE is capable of performing. For example, the UE capability information may include the MLFNs supported by the UE, AI/ML features or feature groups supported by the UE, and/or the MSs supported by the UE.
At 706, the E2 node 568 sends, to the Near-RT RIC 225, the UE capability information associated with a particular UE. For example, the E2 node 568 may provide the UE capability information via a RIC subscription procedure as further described herein with respect to
At 708, the Near-RT RIC 225 stores the UE capability information associated with a particular UE in a database, such as a UE network information base (UE-NIB). The UE-NIB may store information in the UE context including, for example, the UE capability information. In the UE-NIB, the UE capability information for a given UE may be mapped to a UE identifier associated with the UE. The UE-NIB may allow the Near-RT RIC 225 to perform UE-specific control. For example, the Near-RT RIC 225 may provide a UE-specific configuration and/or instructions for a cross-node AI/ML session. In some cases, the Near-RT RIC 225 may host the UE-NIB. In certain cases, the UE-NIB may be accessible to the Near-RT RIC 225 and/or other entities in the cloud-based RAN, such as an xApp.
At 710, the xApp 560 obtains the UE capability information from the Near-RT RIC 225, for example, via the UE-NIB. In some cases, the xApp 560 may obtain the UE capability information via a fetch data procedure, where the xApp 560 may request data for which the xApp is authorized from the shared data layer (SDL) for local processing.
In certain cases, the xApp 560 may obtain the UE capability information via a subscribe-notify procedure followed by the fetch data procedure. The subscribe-notify procedure may involve the xApp subscribing to the SDL for notification of authorized data changes in the database (e.g., the UE-NIB), such as changes or updates to the UE-NIB. For example, the SDL may notify the xApp of a change to the UE-NIB, and then in response to such a notification, the xApp may perform a fetch data procedure to retrieve the UE capability information indicated as being updated or added to the UE-NIB. In some cases, the xApp 560 may obtain the UE capability information via a subscribe-push procedure, where the xApp 560 may subscribe to the SDL for authorized data changes in the database, and the SDL may send, to the xApp, the type of information changes (e.g., certain meta-data) and, in the same message, the updated data (e.g., UE capability information).
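The subscribe-notify-then-fetch pattern described above can be sketched as follows (the class and method names are assumptions, not the actual SDL API):

```python
class SharedDataLayer:
    """Minimal subscribe-notify sketch of the SDL interaction."""
    def __init__(self):
        self.db = {}           # stands in for the UE-NIB
        self.subscribers = []

    def subscribe(self, callback):
        """Register a subscriber (e.g., an xApp) for change notifications."""
        self.subscribers.append(callback)

    def write(self, key, value):
        """Apply a database change (e.g., a UE-NIB update) and notify."""
        self.db[key] = value
        for notify in self.subscribers:
            notify(key)        # subscriber then fetches the changed data

    def fetch(self, key):
        """The 'fetch data' step for local processing."""
        return self.db[key]

sdl = SharedDataLayer()
seen = []
sdl.subscribe(lambda key: seen.append(sdl.fetch(key)))  # notify -> fetch
sdl.write("ue-id-1", {"mlfn": ["csf"]})                 # UE-NIB update
```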
Regarding
At 802, the Near-RT RIC 225 sends, to the E2 node 568, a RIC subscription request indicating certain RIC-specific information for a cross-node AI/ML session between the UE 104 and the Near-RT RIC 225. The RIC-specific information may indicate or include, for example, cross-node AI/ML support associated with an xApp. The RIC-specific information may indicate or include the MLFNs, AI/ML features, and/or AI/ML feature groups supported at the RIC. The RIC-specific information may include or indicate one or more xApp models, pairing information between UE-side models and xApp-side models, and/or a list of UE-side models supported by the xApp (e.g., available for activating at a UE) and/or currently activated at a UE.
At 804, the E2 node 568 may send, to the Near-RT RIC 225, a RIC subscription response confirming or acknowledging the RIC-specific information received at 802. The information obtained via the RIC subscription request may allow the E2 node to configure a UE for a cross-node AI/ML session between the UE 104 and the Near-RT RIC 225, relay communications between the UE 104 and the Near-RT RIC 225, and/or manage the communication link between the UE and the E2 node 568 based on instructions and/or AI/ML output data (e.g., decoded CSF) from the Near-RT RIC 225.
With respect to
At 806, the Near-RT RIC 225 obtains UE capability information associated with a particular UE corresponding to a UE identifier (e.g., UE ID1). As an example, the UE capability information may be obtained at the Near-RT RIC 225 as described herein with respect to
At 808, the Near-RT RIC 225 sends, to the E2 node 568, certain AI/ML information via a RIC control request. For example, the AI/ML information may indicate or include an instruction to configure a UE (by indicating the UE ID) for a cross-node AI/ML session, the MLFN, the AI/ML features, the AI/ML feature groups, and/or the UE-side model(s) for configuration at the UE.
At 810, the E2 node 568 may send, to the Near-RT RIC 225, a RIC control response confirming or acknowledging the AI/ML information obtained at 808. The RIC control response may indicate that the E2 node 568 has configured or will configure the UE 104 based on the configuration obtained at 808.
At 902, the Near-RT RIC 225 obtains UE capability information associated with a particular UE. The UE capability information may correspond to a UE identifier (e.g., UE ID1) associated with the UE 104. As an example, the UE capability information may be obtained at the Near-RT RIC 225 as described herein with respect to
At 904, the Near-RT RIC 225 sends, to the E2 node 568, an indication of certain cross-node AI/ML information including cross-node AI/ML features supported at the Near-RT RIC 225 and/or a configuration for a particular UE, for example, as described herein with respect to
At 906, the E2 node 568 configures the UE 104 for wireless communications, for example, via Layer 3 signaling (RRC signaling). As an example, the E2 node 568 may send, to the UE 104, an RRC configuration message or an RRC reconfiguration message indicating information to establish the communication link with the xApp, such as an xApp identifier or xApp address.
Optionally, at 908, the E2 node 568 may send, to the Near-RT RIC 225, an indication of the cross-node AI/ML configuration associated with and/or activated at the UE 104. For example, the E2 node 568 may provide an indication of the UE IDs and the AI/ML functions or models activated at the corresponding UEs. The E2 node 568 may provide such an indication where the E2 node selects the UE(s) and/or the configuration(s) for the cross-node AI/ML session between the UE 104 and the Near-RT RIC 225. The E2 node 568 may send the indication to the xApp, if the E2 node selects the UE(s) and corresponding configuration(s), for example, as described herein with respect to
At 910, the UE 104 establishes a communication link (e.g., a secure user-plane connection) with the xApp. The UE 104 may communicate with the xApp via the communication link, such as a user-plane link between the UE 104 and the xApp. The user-plane link may allow the UE 104 and the xApp to communicate certain cross-node AI/ML information (e.g., AI/ML feedback, ground truth(s), training data, model data, model structures, configuration(s), request(s), response(s), instruction(s), etc.) independent of the E2 node being aware of such information.
As an example, the xApp may send, to the UE 104, one or more pre-trained AI/ML models and/or information representative of the AI/ML model(s) (including, for example, a set of model parameters and/or hyperparameters, a model structure or model architecture, or any other structured or unstructured data describing the model in such a way that it may be implemented on a device) via the secure communication link with the xApp. In some cases, the xApp may send training data to the UE 104 via the communication link, and the UE 104 may use the training data to train or fine-tune an untrained or partially trained AI/ML model, for example, used for generating CSF at the UE 104. In certain cases, the xApp may update or reconfigure an AI/ML function or model used at the UE 104 via the communication link. In some cases, the UE 104 may send, to the xApp, AI/ML input data and/or feedback for the xApp inference or a federated model via the communication link with the xApp.
The communication link may allow a cross-node AI/ML session between the UE 104 and the xApp without the E2 node being aware of the actual model or model structure used at the UE 104 facilitating such a session. The communication link may allow the transfer of AI/ML model(s) to the UE 104 and/or the transfer of AI/ML input data to the Near-RT RIC 225 without the E2 node being aware of the actual AI/ML models or input. The communication link may facilitate a modular design in the cloud-based RAN where the Near-RT RIC 225 may configure and/or service cross-node AI/ML sessions between the UE 104 and the Near-RT RIC 225. The communication link may allow for a modular design for the cross-node AI/ML session that offloads certain processing and/or certain communications at the E2 node 568 to the Near-RT RIC 225.
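A model transfer over the UE-xApp user-plane link might be packed as in the following sketch (the payload layout is an assumption for illustration):

```python
import json

def pack_model_update(model_id, structure_id, parameters):
    """Illustrative packing of a model transfer message sent over the
    secure UE-xApp user-plane link (field names are hypothetical)."""
    return json.dumps({"model_id": model_id,
                       "ms_id": structure_id,
                       "parameters": parameters}).encode("utf-8")

def unpack_model_update(payload):
    """UE-side unpacking of the transferred model representation."""
    return json.loads(payload.decode("utf-8"))

msg = pack_model_update("csf-encoder-v2", "ms-7", [0.12, -0.5, 1.0])
```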
At 912, the UE 104 may send, to the E2 node 568, an indication that the secure connection between the UE 104 and xApp has been established.
Optionally, at 914, the Near-RT RIC 225 may send, to the E2 node 568, an indication that the secure connection between the UE 104 and xApp has been established. Such an indication may enable the E2 node 568 to be aware of the connection between the UE 104 and xApp. The E2 node 568 may take measures to preserve the connection between the UE 104 and xApp, for example, in response to changes in channel conditions between the UE 104 and the E2 node 568, network resources (e.g., load or capacity), UE mobility, etc.
In certain aspects, the UE 104 and/or E2 node 568 may initialize the procedure to establish a cross-node AI/ML session between the UE and the Near-RT RIC 225. For example, as the UE 104 may be capable of performing AI/ML operations (e.g., AI-enhanced CSF, AI-enhanced beam management, AI-enhanced positioning, etc.), the UE 104 may request to establish a cross-node AI/ML session with the Near-RT RIC 225, for example, as described herein with respect to
Optionally, at 1002, the UE 104 may send, to the E2 node 568, a request to establish (or initiate) a cross-node AI/ML session between the UE 104 and the Near-RT RIC 225. As an example, the request may be sent via RRC signaling, such as UE assistance information (UAI). In some cases, the request may indicate or include a certain AI/ML configuration associated with the cross-node AI/ML session. For example, the requested configuration may indicate or include a session configuration to use for the cross-node AI/ML session (e.g., as indicated via identifier(s) or name(s) associated with such AI/ML settings).
At 1004, the E2 node 568 sends, to the Near-RT RIC 225, a request to establish a cross-node AI/ML session between the UE 104 and the Near-RT RIC 225. The request may be sent via a RIC query message including, for example, a RIC indication message and/or a RIC control message. The request may indicate or include a UE identifier (ID) associated with the UE requesting the cross-node AI/ML session (e.g., the UE 104), the UE requested AI/ML configuration, and/or a separate AI/ML configuration determined at the E2 node 568. The E2 node 568 may check if the Near-RT RIC can support the configuration as requested by the UE 104. In some cases, the E2 node may request the Near-RT RIC 225 to provide a cross-node AI/ML configuration and/or cross-node AI/ML features supported at the Near-RT RIC 225.
At 1006, the Near-RT RIC 225 sends, to the E2 node 568, a response to the request. The response may be sent via a RIC query response message including, for example, a RIC indication message and/or a RIC control message. The Near-RT RIC 225 may determine if the xApp can support the UE/E2 node requested configuration. In some cases, the response may indicate or include the list of MLFN, model IDs, configurations, etc. that can be supported by the Near-RT RIC 225 for a cross-node AI/ML session. The response may indicate or include a cross-node AI/ML configuration for the UE 104. For example, the response may indicate or include a session configuration (supported by the Near-RT RIC 225) to use for the cross-node AI/ML session (e.g., via identifier(s) or name(s) associated with such AI/ML settings). The E2 node 568 may configure the UE 104 for the cross-node AI/ML session between the UE 104 and the Near-RT RIC 225, as further described herein.
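The request/response negotiation at 1004 and 1006 can be sketched as follows (illustrative fields only; the decision logic is an assumption):

```python
def negotiate_session_config(requested, ric_supported):
    """Toy negotiation: the RIC accepts the requested configuration
    only if it supports the requested MLFN and model; otherwise it
    returns the list of configurations it can support."""
    mlfn = requested["mlfn"]
    if mlfn in ric_supported and requested["model_id"] in ric_supported[mlfn]:
        return {"accepted": True, "config": requested}
    return {"accepted": False, "supported": ric_supported}

supported = {"csf": ["model-A", "model-B"], "positioning": ["model-C"]}
ok = negotiate_session_config({"mlfn": "csf", "model_id": "model-A"}, supported)
ko = negotiate_session_config({"mlfn": "csf", "model_id": "model-X"}, supported)
```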
In certain aspects, the E2 node 568 and/or the Near-RT RIC 225 may configure the UE 104 for a cross-node AI/ML session between the UE 104 and the Near-RT RIC 225, for example, as described herein with respect to
At 1102, the Near-RT RIC 225 obtains UE capability information associated with a particular UE (e.g., UE 104), for example, as described herein with respect to
At 1104, the Near-RT RIC 225 notifies the E2 node 568 of the cross-node AI/ML features supported at the Near-RT RIC 225, for example, as described herein with respect to
Optionally, at 1106, the E2 node 568 may determine the UE configuration (e.g., a session configuration) for the cross-node AI/ML session between the UE 104 and the Near-RT RIC 225. The E2 node 568 may select any of various parameters for the cross-node-AI/ML session, such as one or more parameters for a session configuration to be used at the UE 104. The E2 node 568 may consider or take into account the cross-node AI/ML capabilities associated with the UE 104 and/or the Near-RT RIC 225.
At 1108, the E2 node 568 may send, to the UE 104, an indication of the UE configuration for the cross-node AI/ML session between the UE 104 and the Near-RT RIC 225. The UE configuration may be sent to the UE 104 via control signaling, such as Layer 1 (L1) signaling (e.g., DCI), Layer 2 (L2) signaling (e.g., MAC signaling), Layer 3 (L3) signaling (e.g., RRC signaling), and/or system information.
At 1110, the E2 node 568 may send, to the Near-RT RIC 225, an indication of the UE configuration (selected by the E2 node 568) for the cross-node AI/ML session between the UE 104 and the Near-RT RIC 225. The UE configuration may be sent to the Near-RT RIC 225, for example, via a RIC indication message and/or a RIC control message. The UE configuration may correspond to the UE 104 via a UE identifier associated with the UE 104. The UE configuration may indicate or include the UE identifier to which such configuration corresponds. In some cases, the UE identifier associated with the UE configuration may be implicitly or explicitly indicated.
Optionally, at 1112, the Near-RT RIC 225 may send, to the E2 node 568, a request to report certain information associated with the cross-node AI/ML session, such as the UE configuration for the cross-node AI/ML session, the UE status, and/or certain information associated with the communication link between the UE 104 and the E2 node. The request may be sent via a UE-specific RIC subscription message, an indication message originating from the Near-RT RIC 225, and/or a RIC control message. In some cases, the Near-RT RIC 225 may request such information to determine the state of the cross-node AI/ML session, for example, as configured and/or activated by the E2 node 568 at 1108. In certain cases, the Near-RT RIC 225 may request such information to determine the UE configuration for the cross-node AI/ML session, for example, to be configured and/or activated by the Near-RT RIC 225 at 1118.
The UE status may indicate or include whether the cross-node AI/ML session is configured, activated, and/or deactivated at the UE 104. In certain aspects, the UE status may indicate or include the current communication state associated with the UE 104, for example, RRC connected, RRC idle, or RRC inactive. In some cases, the Near-RT RIC 225 may request certain information associated with the communication link between the UE 104 and the E2 node 568, such as the frequency range, the frequency band, the component carrier(s) (e.g., carrier aggregation and/or dual connectivity), the modulation and coding scheme (MCS), the code rate (e.g., the proportion of the data-stream that is non-redundant), the number of aggregated component carriers, the number of MIMO layers, the channel bandwidth, the subcarrier spacing, etc., associated with the communication link.
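The status and link information listed above can be carried in a single structured report. The container below is a minimal sketch; the field names and default values are assumptions chosen to mirror the listed parameters, not a standardized information element.

```python
# Illustrative container for the UE status / link-information report.
# Field names are assumptions mirroring the parameters listed above.
from dataclasses import dataclass

@dataclass
class UeSessionStatus:
    ue_id: str
    session_state: str          # "configured", "activated", or "deactivated"
    rrc_state: str              # "connected", "idle", or "inactive"
    frequency_band: str = ""
    mcs_index: int = 0
    code_rate: float = 0.0      # proportion of the data-stream that is non-redundant
    num_component_carriers: int = 1
    num_mimo_layers: int = 1
    channel_bandwidth_mhz: float = 0.0
    subcarrier_spacing_khz: int = 15

status = UeSessionStatus(
    ue_id="UE-104", session_state="activated", rrc_state="connected",
    frequency_band="n78", mcs_index=16, code_rate=0.64,
    num_component_carriers=2, num_mimo_layers=4,
    channel_bandwidth_mhz=100.0, subcarrier_spacing_khz=30,
)
```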
At 1114, the E2 node 568 may send, to the Near-RT RIC 225, the information requested by the Near-RT RIC 225 at 1112, such as the UE state and/or the UE configuration for the cross-node AI/ML session.
Optionally, at 1116, the Near-RT RIC 225 may determine the UE configuration for the cross-node AI/ML session. For example, the Near-RT RIC 225 may determine the UE configuration based on the information obtained at any of activities 1102, 1104, and 1114.
At 1118, the Near-RT RIC 225 may send, to the E2 node 568, an indication of the UE configuration for the cross-node AI/ML session between the UE 104 and the Near-RT RIC 225. For example, the UE configuration may indicate or include a session configuration to be used at the UE 104. The UE configuration may indicate or include the UE identifier to which such configuration corresponds. In some cases, the UE identifier associated with the UE configuration may be implicitly or explicitly indicated by the Near-RT RIC 225.
At 1120, the E2 node 568 may send, to the UE 104, an indication of the UE configuration (selected by the Near-RT RIC 225) for the cross-node AI/ML session between the UE 104 and the Near-RT RIC 225. The UE configuration may be sent to the UE 104 via control signaling, such as RRC signaling, MAC signaling, DCI, and/or system information. As described herein with respect to
In certain aspects, certain xApp functionalities associated with a cross-node AI/ML session between a UE and a Near-RT RIC may be partitioned into separate applications. For example, multiple AI/ML features may be configured and/or activated at a UE simultaneously and/or across multiple UEs. A Near-RT RIC may have an xApp (e.g., xAPP‡) for configuring a cross-node AI/ML session between a UE and a Near-RT RIC and another xApp (e.g., xAPP†) for servicing or handling a cross-node AI/ML session configured/activated at a UE. The xAPP configuring and/or handling a cross-node AI/ML session may select models (and/or other AI/ML features) based on UE capability information, UE status, communication link parameters or conditions, RAN load or capacity, etc.
As examples, the E2 node 568 may trigger a UE configuration for a cross-node AI/ML session in response to the UE/E2 node status. The xAPP† may provide, to the E2 node 568, assistance information for the UE configuration. In certain cases, the xAPP may configure the E2 node 568 to provide a UE/E2 node status report to the xAPP‡. The xAPP may provide configurations across features (including for AI/ML operation) to the E2 node 568, and the E2 node 568 may select the model (and/or other settings) for the cross-node AI/ML session between the UE and the Near-RT RIC based on the xAPP‡ provided configurations. In some cases, the xAPP† may configure the E2 node 568 to provide a UE/E2 node status report and/or a UE/E2 node requested configuration. The xAPP‡ may select the model (and/or other settings) for the cross-node AI/ML configuration based on the E2 node 568 provided status report and/or requested configuration. In certain cases, the xAPP may configure the E2 node 568 to provide a status report (e.g., UE/E2 node status and/or a UE/E2 node requested configuration) to the xAPP‡. The xAPP‡ may provide configurations across features (including for AI/ML operation) to the xAPP†, and the xAPP may select the model (and/or other settings) for the cross-node AI/ML configuration based on the xAPP‡ provided configurations.
In this example, the first xApp 560a (xAPP†) may perform certain measures to configure the cross-node AI/ML session between the UE 104 and the Near-RT RIC 225, and the second xApp 560b (e.g., xAPP‡) may perform certain measures to service or handle a cross-node AI/ML session configured/activated at the UE 104 (e.g., sending model data to the UE, processing model input from the UE, sending processed model output to the E2 node, etc.). The first xApp 560a and the second xApp 560b may be or include applications that run on the Near-RT RIC 225, and thus, any functionality described with respect to any of the first xApp 560a and the second xApp 560b may be implemented via the Near-RT RIC 225. Note that the functional partition associated with the first xApp 560a and the second xApp 560b described herein is an example. The Near-RT RIC 225 may run any number of applications (xApps) having any number of functional partitions to configure and/or handle a cross-node AI/ML session between the UE 104 and the Near-RT RIC 225.
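The functional partition just described amounts to routing each incoming message to whichever application owns that responsibility. The sketch below is a toy illustration of one such partition; the message types, routing table, and xApp behaviors are assumptions, and, as noted above, the Near-RT RIC may partition functionality in any number of ways.

```python
# Minimal sketch of a two-xApp functional partition: one callable plays the
# configuring xApp (e.g., first xApp 560a) and another the serving xApp
# (e.g., second xApp 560b), with the Near-RT RIC routing by message type.

def configuring_xapp(message):
    # Configure the cross-node AI/ML session for the identified UE.
    return {"action": "configure", "ue_id": message["ue_id"], "model": "csf-v2"}

def serving_xapp(message):
    # Service an active session, e.g., process model input received from the UE.
    return {"action": "serve", "ue_id": message["ue_id"],
            "samples_processed": len(message["model_input"])}

ROUTES = {"session_setup": configuring_xapp, "model_input": serving_xapp}

def near_rt_ric_dispatch(message):
    """Route an incoming message to the xApp responsible for it."""
    return ROUTES[message["type"]](message)

setup_reply = near_rt_ric_dispatch({"type": "session_setup", "ue_id": "UE-104"})
serve_reply = near_rt_ric_dispatch({"type": "model_input", "ue_id": "UE-104",
                                    "model_input": [0.1, 0.2, 0.3]})
```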
At 1202, the Near-RT RIC 225 obtains UE capability information associated with a particular UE. The UE capability information may correspond to a UE identifier (e.g., UE ID1) associated with the UE 104. As an example, the UE capability information may be obtained at the Near-RT RIC 225 as described herein with respect to
At 1204, the first xApp 560a may send, to the E2 node 568, a configuration for reporting certain information associated with the cross-node AI/ML session to the first xApp 560a and/or the second xApp 560b. The reporting configuration may indicate or include a periodicity for reporting the information and/or one or more events that trigger the reporting. The information to be reported may include, for example, the UE/E2 node status, the UE configuration, and/or information associated with the communication link between the UE 104 and the E2 node. In certain aspects, the first xApp 560a may request the second xApp 560b to configure the E2 node 568 for the reporting. For example, at 1206a, the first xApp 560a may send, to the second xApp 560b, the reporting configuration, and at 1206b, the second xApp 560b may send, to the E2 node 568, the reporting configuration.
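A reporting configuration of the kind described at 1204 pairs a periodicity with optional event triggers. The following sketch shows one plausible shape for such a configuration and the decision of whether a report is due; all names and the trigger-event strings are illustrative assumptions.

```python
# Hypothetical shape of the reporting configuration at 1204: periodic
# reporting and/or event-triggered reporting of the listed items.

def make_reporting_config(period_ms=None, trigger_events=(), items=()):
    if period_ms is None and not trigger_events:
        raise ValueError("need a periodicity and/or at least one trigger event")
    return {"period_ms": period_ms,
            "trigger_events": list(trigger_events),
            "report_items": list(items)}

config = make_reporting_config(
    period_ms=200,
    trigger_events=["rrc_state_change", "session_deactivated"],
    items=["ue_status", "ue_configuration", "link_info"],
)

def should_report(config, elapsed_ms, event=None):
    """A report is due when the period has elapsed or a trigger event fired."""
    periodic_due = (config["period_ms"] is not None
                    and elapsed_ms >= config["period_ms"])
    event_due = event in config["trigger_events"]
    return periodic_due or event_due
```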
At 1208, the E2 node 568 may send, to the first xApp 560a and/or the second xApp 560b, the requested reporting information, for example, the UE status and/or the UE configuration, which may be or include a configuration as requested by the UE 104 and/or the E2 node 568, for example, as described herein with respect to
In certain aspects, an xApp may provide the E2 node 568 with certain information associated with the cross-node AI/ML session to allow the E2 node 568 to determine a UE configuration. The information may indicate or include a list of UE models supported at the xApp and/or pairing information between encoders and decoders. The information may indicate or include the session configuration. In some cases, the information may indicate or include a UE configuration determined based on the features or functionality supported or available at the Near-RT RIC taking into account AI/ML and/or non-AI/ML features or functions. For example, at 1210, the first xApp 560a may send, to the E2 node 568, certain information associated with the cross-node AI/ML session.
In certain cases, the second xApp 560b may provide such information to the E2 node 568. For example, at 1212a, the second xApp 560b may send, to the first xApp 560a, certain information associated with the cross-node AI/ML session to notify the first xApp 560a of the information to be provided to the E2 node 568. At 1212b, the second xApp 560b may send, to the E2 node 568, the information associated with the cross-node AI/ML session.
At 1214, the E2 node 568 may determine the UE configuration for the cross-node AI/ML session between the UE 104 and the Near-RT RIC 225, for example, as described herein with respect to
At 1216, the E2 node 568 may send, to the UE 104, an indication of the UE configuration for the cross-node AI/ML session between the UE 104 and the Near-RT RIC 225. The UE configuration may be sent to the UE 104 via control signaling, such as RRC signaling, MAC signaling, DCI, and/or system information.
At 1218, the E2 node 568 may send, to the first xApp 560a via the Near-RT RIC 225, an indication of the UE configuration (selected by the E2 node 568) for the cross-node AI/ML session between the UE 104 and the Near-RT RIC 225 (e.g., the second xApp 560b). The UE configuration may be sent to the first xApp 560a, for example, via a RIC indication message and/or a RIC control message. The UE configuration may correspond to the UE 104 via a UE identifier associated with the UE 104. The UE configuration may indicate or include the UE identifier to which the configuration corresponds. In some cases, the UE identifier associated with the UE configuration may be implicitly or explicitly indicated by the E2 node 568.
At 1220, the first xApp 560a may determine the UE configuration for the cross-node AI/ML session. For example, the first xApp 560a may determine the UE configuration based on the information obtained at any of activities 1202 and 1208.
At 1222, the first xApp 560a may send, to the E2 node 568, an indication of the UE configuration for the cross-node AI/ML session between the UE 104 and the Near-RT RIC 225. For example, the UE configuration may indicate or include a session configuration to be used at the UE 104. The UE configuration may indicate or include the UE identifier to which such configuration corresponds. In some cases, the UE identifier associated with the UE configuration may be implicitly or explicitly indicated by the first xApp 560a.
At 1224, the E2 node 568 may send, to the UE 104, an indication of the UE configuration (selected by the first xApp 560a) for the cross-node AI/ML session between the UE 104 and the Near-RT RIC 225. The UE configuration may be sent to the UE 104 via control signaling, such as L1 signaling, L2 signaling, L3 signaling, and/or system information.
In certain aspects, the UE configuration may allow the UE to communicate with the Near-RT RIC via a secure connection (e.g., a user-plane communication link), for example, as described herein with respect to
As an example,
At 1314, the first xApp 560a may send, to the E2 node 568, an indication of the UE configuration for the cross-node AI/ML session between the UE 104 and the Near-RT RIC 225.
At 1316, the E2 node 568 sends, to the UE 104, an indication of the UE configuration (selected by the first xApp 560a and/or the E2 node 568) for the cross-node AI/ML session between the UE 104 and the Near-RT RIC 225. The UE configuration may be sent to the UE 104 via control signaling, such as RRC signaling, MAC signaling, DCI, and/or system information.
At 1318, the E2 node 568 may send, to the first xApp 560a via the Near-RT RIC 225, an indication of the UE configuration (selected by the E2 node 568) for the cross-node AI/ML session between the UE 104 and the Near-RT RIC 225 (e.g., the second xApp 560b). The UE configuration may be sent to the first xApp 560a, for example, via a RIC indication message and/or a RIC control message.
At 1320, the UE 104 establishes a communication link (e.g., a secure user-plane connection) with the first xApp 560a and/or the second xApp 560b, for example, as described herein with respect to
While the examples depicted in
With respect to an E2 node, aspects of the present disclosure may be applied to any of the disaggregated network entities, including one or more CUs, one or more DUs, and/or one or more RUs, for example, as described herein with respect to
Method 1400 begins at block 1405 with providing, to a second network entity (e.g., the E2 node 568), an indication of cross-node machine learning (or AI) information (e.g., a logical AI/ML model or function associated with an AI/ML model) used for a cross-node machine learning session between the first network entity and a UE, for example, as described herein with respect to
Method 1400 then proceeds to block 1410 with obtaining machine learning information (e.g., AI/ML input data including encoded CSF) associated with the UE.
Method 1400 then proceeds to block 1415 with controlling the cross-node machine learning session based at least in part on the machine learning information. For example, the Near-RT RIC may evaluate the machine learning information and update the model used at the UE in response to the evaluation. In some cases, the Near-RT RIC may process the machine learning information to generate machine learning output, and the Near-RT RIC may provide the machine learning output to the E2 node.
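The control at block 1415, evaluating the machine learning information and updating the model used at the UE in response, can be sketched as a simple loop. The quality metric, threshold, and fallback ordering below are assumptions for illustration only, not a prescribed control policy.

```python
# Illustrative control loop for block 1415: the Near-RT RIC scores incoming
# machine learning information and falls back to the next configured model
# when reported quality drops below a threshold (threshold is an assumption).

def control_session(ml_reports, models, threshold=0.8):
    """Return the model selected after evaluating each report in turn."""
    current = 0
    for report in ml_reports:
        if report["quality"] < threshold and current + 1 < len(models):
            current += 1  # switch to the next configured model
    return models[current]

selected = control_session(
    [{"quality": 0.95}, {"quality": 0.7}, {"quality": 0.9}],
    models=["csf-v2", "csf-v1"],
)
```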
In certain aspects, the cross-node machine learning information comprises one or more parameters supported by the first network entity in association with the cross-node machine learning session. The one or more parameters may include, for example, parameter(s) for a session configuration including one or more AI/ML functions (e.g., CSF, beam management, and/or positioning), AI/ML features (or feature groups) (e.g., certain features associated with a function), AI/ML models, AI/ML model structures, etc. to use for the cross-node AI/ML session (e.g., via identifier(s) or name(s) associated with such AI/ML settings).
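The parameters enumerated above, functions, features, models, and model structures identified by name or identifier, could be carried in a single structured configuration. The schema below is one possible encoding, offered purely as an assumption for illustration.

```python
# One possible (assumed) encoding of the session-configuration parameters:
# AI/ML functions, features, models, and model structures identified by name.

session_configuration = {
    "functions": ["CSF"],               # e.g., CSF, beam management, positioning
    "features": ["csf-high-doppler"],   # features associated with a function
    "model_ids": ["csf-v2"],
    "model_structure_ids": ["transformer-s"],
}

def validate_session_configuration(cfg):
    """Check that each required parameter list is present."""
    required = ("functions", "features", "model_ids", "model_structure_ids")
    return all(key in cfg and isinstance(cfg[key], list) for key in required)
```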
In certain aspects, method 1400 further includes obtaining a registration request associated with an application, the registration request comprising an indication of one or more parameters supported by the application in association with the cross-node machine learning session. In certain aspects, method 1400 further includes providing, in response to the registration request, a registration response indicating the application is registered.
In certain aspects, providing the indication of the cross-node machine learning information comprises providing the indication of the cross-node machine learning information via a RIC subscription request.
In certain aspects, method 1400 further includes obtaining capability information associated with the UE. In certain aspects, method 1400 further includes, in response to obtaining the capability information, providing, to the second network entity, an indication of a configuration associated with the cross-node machine learning session for the UE.
In certain aspects, providing the indication of the configuration for the UE comprises providing the indication of the configuration via a RIC control request.
In certain aspects, method 1400 further includes selecting a machine learning function or model for the UE to use for the cross-node machine learning session based at least in part on the capability information, wherein the indication of the configuration comprises an indication of the selected machine learning model. For example, the Near-RT RIC may select the machine learning function or model that satisfies the capability information associated with the UE.
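Selecting a model that satisfies the UE capability information can be framed as filtering candidates by the UE's reported limits and taking the best remaining one. The candidate fields, memory limit, and accuracy scores below are assumptions used only to make the selection concrete.

```python
# Hedged sketch of capability-based model selection: filter candidates by a
# UE-reported resource limit, then pick the most capable remaining model.
# All candidate attributes are illustrative assumptions.

CANDIDATES = [
    {"model_id": "csf-large", "min_memory_mb": 64, "accuracy": 0.95},
    {"model_id": "csf-small", "min_memory_mb": 8,  "accuracy": 0.88},
]

def select_model(candidates, ue_capability):
    """Return the ID of the best model the UE can run, or None."""
    feasible = [m for m in candidates
                if m["min_memory_mb"] <= ue_capability["memory_mb"]]
    if not feasible:
        return None
    return max(feasible, key=lambda m: m["accuracy"])["model_id"]

choice = select_model(CANDIDATES, {"memory_mb": 16})
```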
In certain aspects, method 1400 further includes obtaining a RIC query message requesting to initiate the cross-node machine learning session between the UE and the first network entity. Block 1405 includes providing the indication of the cross-node machine learning information via a RIC query response in response to the RIC query message.
In certain aspects, method 1400 further includes obtaining, from the second network entity, an indication of the cross-node machine learning session between the first network entity and the UE, wherein controlling the cross-node machine learning session is based at least in part on the indication of the cross-node machine learning session between the UE and the first network entity. In certain aspects, the indication of the cross-node machine learning session between the UE and the first network entity comprises a UE identifier associated with the UE and one or more machine learning models used at the UE for the cross-node machine learning session.
In certain aspects, method 1400 further includes providing, to the second network entity, an indication to report status information associated with the UE. In certain aspects, method 1400 further includes obtaining, from the second network entity, the status information associated with the UE. In certain aspects, method 1400 further includes, in response to obtaining the status information, providing, to the second network entity, an indication of a configuration associated with the cross-node machine learning session for the UE.
In certain aspects, the first network entity comprises a RIC, such as a Near-RT RIC, a Non-RT RIC, and/or a RT RIC in a cloud-based RAN; and the second network entity comprises a CU, a DU, and/or an RU in communication with the first network entity via an E2 interface and/or an O1 interface.
In certain aspects, block 1415 includes determining a model structure based at least in part on the machine learning information; and providing, to the second network entity, an indication of the determined model structure to be used by the UE.
In certain aspects, method 1400 further includes performing a cross-node machine learning inference that is based at least in part on the machine learning information to generate output data.
In certain aspects, method 1400 further includes providing the output data to the second network entity.
In certain aspects, the machine learning information comprises encoded channel state information generated at the UE; and the output data comprises decoded channel state information associated with a communication link between the UE and the second network entity.
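The split described in this aspect, channel state information encoded at the UE and decoded at the first network entity's counterpart model, can be illustrated with a deliberately simple stand-in. The sketch below substitutes crude uniform quantization for the paired AI/ML encoder and decoder; it shows only the data flow, not any actual model.

```python
# Toy illustration of the encoded-CSF data flow: the UE-side "encoder"
# compresses channel gains (here, plain uniform quantization) and the
# RIC-side "decoder" reconstructs them. A deployed system would use paired
# AI/ML models; this stand-in is an assumption for illustration.

def ue_encode_csf(channel_gains, step=0.25):
    """UE side: quantize each gain to an integer index (the 'encoded CSF')."""
    return [round(g / step) for g in channel_gains]

def ric_decode_csf(encoded, step=0.25):
    """Decoder side: reconstruct the gains from the received indices."""
    return [i * step for i in encoded]

gains = [0.13, 0.52, 0.88]
decoded = ric_decode_csf(ue_encode_csf(gains))
errors = [abs(a - b) for a, b in zip(gains, decoded)]
```

With uniform quantization the reconstruction error is bounded by half the step size, which is the analogue of the rate/distortion trade-off a learned encoder-decoder pair would optimize.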
In certain aspects, method 1400, or any aspect related to it, may be performed by an apparatus, such as communications device 1700 of
Communications device 1700 is described below in further detail.
Note that
Method 1500 begins at block 1505 with obtaining, from a second network entity (e.g., the Near-RT RIC 225), an indication of cross-node machine learning information used for a cross-node machine learning session between the second network entity and a UE.
Method 1500 then proceeds to block 1510 with providing, to the UE, a configuration for the cross-node machine learning session based at least in part on the cross-node machine learning information.
Method 1500 then proceeds to block 1515 with obtaining machine learning information associated with the UE.
Method 1500 then proceeds to block 1520 with providing, to the second network entity, the machine learning information.
Method 1500 then proceeds to block 1525 with obtaining, from the second network entity, output data generated from the machine learning information.
Method 1500 then proceeds to block 1530 with communicating with the UE based at least in part on the output data.
In certain aspects, the cross-node machine learning information comprises one or more parameters supported by the second network entity in association with the cross-node machine learning session.
In certain aspects, block 1505 includes obtaining the indication of the cross-node machine learning information via a RIC subscription request.
In certain aspects, method 1500 further includes providing, to the second network entity, capability information associated with the UE. In certain aspects, method 1500 further includes, in response to providing the capability information, obtaining, from the second network entity, an indication of the configuration associated with the cross-node machine learning session for the UE.
In certain aspects, obtaining the indication of the configuration for the UE comprises obtaining the indication of the configuration via a RIC control request.
In certain aspects, method 1500 further includes providing, to the second network entity, a RIC query message requesting to initiate the cross-node machine learning session between the UE and the second network entity. In certain aspects, block 1505 includes obtaining the indication of the cross-node machine learning information via a RIC query response in response to the RIC query message.
In certain aspects, method 1500 further includes selecting the configuration for the cross-node machine learning session based at least in part on the indication of the cross-node machine learning information.
In certain aspects, the indication of the cross-node machine learning session between the UE and the second network entity comprises a UE identifier associated with the UE and one or more machine learning functions or models used at the UE for the cross-node machine learning session.
In certain aspects, method 1500 further includes obtaining, from the second network entity, an indication to report status information associated with the UE. In certain aspects, method 1500 further includes providing, to the second network entity, the status information associated with the UE. In certain aspects, method 1500 further includes, in response to providing the status information, obtaining, from the second network entity, an indication of the configuration associated with the cross-node machine learning session for the UE.
In certain aspects, the first network entity comprises a CU, a DU, and/or an RU in communication with the second network entity via an E2 interface and/or an O1 interface; and the second network entity comprises a RIC, such as a Near-RT RIC, a Non-RT RIC, and/or a RT RIC in a cloud-based RAN.
In certain aspects, the output data comprises decoded channel state information associated with a communication link between the UE and the first network entity.
In certain aspects, method 1500 further includes controlling the communication link between the UE and the first network entity based at least in part on the decoded channel state information. The E2 node may update certain settings (e.g., MCS, channel bandwidth, subcarrier spacing, code rate, etc.) associated with the communication link in response to the channel state information.
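The link update described above, adjusting MCS and related settings in response to the decoded channel state information, can be sketched as a threshold lookup. The thresholds and MCS indices below are illustrative assumptions, not values drawn from any 3GPP table.

```python
# Hedged sketch of link adaptation at the E2 node: map a decoded channel
# quality estimate (assumed normalized to [0, 1]) to an MCS index via fixed
# thresholds. The table values are illustrative assumptions only.

MCS_TABLE = [  # (minimum quality, mcs_index), in descending quality order
    (0.8, 27),
    (0.5, 16),
    (0.2, 9),
    (0.0, 2),
]

def update_mcs(decoded_quality):
    """Return the MCS index for the first threshold the quality meets."""
    for minimum, mcs in MCS_TABLE:
        if decoded_quality >= minimum:
            return mcs
    return MCS_TABLE[-1][1]  # most robust entry as a fallback
```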
In certain aspects, method 1500, or any aspect related to it, may be performed by an apparatus, such as communications device 1800 of
Note that
Method 1600 begins at block 1605 with providing, to a first network entity (e.g., the E2 node 568), capability information associated with a cross-node machine learning session between the apparatus and a second network entity.
Method 1600 then proceeds to block 1610 with obtaining, from the first network entity, a configuration for the cross-node machine learning session.
Method 1600 then proceeds to block 1615 with communicating with the second network entity (e.g., the Near-RT RIC 225) in accordance with the configuration for the cross-node machine learning session.
In certain aspects, the capability information indicates one or more parameters supported by the apparatus in association with the cross-node machine learning session. The one or more parameters may include, for example, parameter(s) for a session configuration including one or more AI/ML functions (e.g., CSF, beam management, and/or positioning), AI/ML features (or feature groups) (e.g., certain features associated with a function), AI/ML models, AI/ML model structures, etc. to use for the cross-node AI/ML session (e.g., via identifier(s) or name(s) associated with such AI/ML settings).
In certain aspects, method 1600 further includes providing, to the second network entity, a request to initiate the cross-node machine learning session between the apparatus and the second network entity. In certain aspects, the request comprises an indication of a session configuration for the cross-node machine learning session. In certain aspects, the configuration indicates one or more settings for communicating with the second network entity via a user-plane communication link.
In certain aspects, block 1615 includes communicating with the second network entity via a user-plane communication link associated with the cross-node machine learning session.
In certain aspects, block 1615 includes communicating with the first network entity via a wireless communication link.
In certain aspects, method 1600 further includes providing, to the second network entity, machine learning information (e.g., AI/ML input data including encoded CSF) via at least the wireless communication link.
In certain aspects, method 1600 further includes obtaining, from the first network entity, signaling controlling the wireless communication link in response to the machine learning information.
In certain aspects, the first network entity comprises a CU, a DU, and/or an RU in communication with the second network entity via an E2 interface and/or an O1 interface; and the second network entity comprises a RIC, such as a Near-RT RIC, a Non-RT RIC, and/or a RT RIC in a cloud-based RAN.
In certain aspects, method 1600, or any aspect related to it, may be performed by an apparatus, such as communications device 1900 of
Note that
The communications device 1700 includes a processing system 1705 coupled to a transceiver 1775 (e.g., a transmitter and/or a receiver) and/or a network interface 1785. The transceiver 1775 is configured to transmit and receive signals for the communications device 1700 via an antenna 1780, such as the various signals as described herein. The network interface 1785 is configured to obtain and send signals for the communications device 1700 via communications link(s), such as a backhaul link, midhaul link, and/or fronthaul link as described herein, such as with respect to
The processing system 1705 includes one or more processors 1710. In various aspects, one or more processors 1710 may be representative of one or more of receive processor 338, transmit processor 320, TX MIMO processor 330, and/or controller/processor 340, as described with respect to
In the depicted example, the computer-readable medium/memory 1740 stores code for providing 1745, code for obtaining 1750, code for controlling 1755, code for selecting 1760, and code for performing 1765. Processing of the code 1745-1765 may enable and cause the communications device 1700 to perform the method 1400 described with respect to
The one or more processors 1710 include circuitry configured to implement (e.g., execute) the code stored in the computer-readable medium/memory 1740, including circuitry for providing 1715, circuitry for obtaining 1720, circuitry for controlling 1725, circuitry for selecting 1730, and circuitry for performing 1735. Processing with circuitry 1715-1735 may enable and cause the communications device 1700 to perform the method 1400 described with respect to
More generally, means for communicating, transmitting, sending or outputting for transmission may include the transceivers 332, antenna(s) 334, transmit processor 320, TX MIMO processor 330, and/or controller/processor 340 of the BS 102 illustrated in
The communications device 1800 includes a processing system 1805 coupled to a transceiver 1875 (e.g., a transmitter and/or a receiver) and/or a network interface 1885. The transceiver 1875 is configured to transmit and receive signals for the communications device 1800 via an antenna 1880, such as the various signals as described herein. The network interface 1885 is configured to obtain and send signals for the communications device 1800 via communications link(s), such as a backhaul link, midhaul link, and/or fronthaul link as described herein, such as with respect to
The processing system 1805 includes one or more processors 1810. In various aspects, one or more processors 1810 may be representative of one or more of receive processor 338, transmit processor 320, TX MIMO processor 330, and/or controller/processor 340, as described with respect to
In the depicted example, the computer-readable medium/memory 1840 stores code for obtaining 1845, code for providing 1850, code for communicating 1855, code for selecting 1860, and code for controlling 1865. Processing of the code 1845-1865 may enable and cause the communications device 1800 to perform the method 1500 described with respect to
The one or more processors 1810 include circuitry configured to implement (e.g., execute) the code stored in the computer-readable medium/memory 1840, including circuitry for obtaining 1815, circuitry for providing 1820, circuitry for communicating 1825, circuitry for selecting 1830, and circuitry for controlling 1835. Processing with circuitry 1815-1835 may enable and cause the communications device 1800 to perform the method 1500 described with respect to
More generally, means for communicating, transmitting, sending or outputting for transmission may include the transceivers 332, antenna(s) 334, transmit processor 320, TX MIMO processor 330, and/or controller/processor 340 of the BS 102 illustrated in
The communications device 1900 includes a processing system 1905 coupled to a transceiver 1955 (e.g., a transmitter and/or a receiver). The transceiver 1955 is configured to transmit and receive signals for the communications device 1900 via an antenna 1960, such as the various signals as described herein. The processing system 1905 may be configured to perform processing functions for the communications device 1900, including processing signals received and/or to be transmitted by the communications device 1900.
The processing system 1905 includes one or more processors 1910. In various aspects, the one or more processors 1910 may be representative of one or more of receive processor 358, transmit processor 364, TX MIMO processor 366, and/or controller/processor 380, as described with respect to
In the depicted example, computer-readable medium/memory 1930 stores code for providing 1935, code for obtaining 1940, and code for communicating 1945. Processing of the code 1935-1945 may enable and cause the communications device 1900 to perform the method 1600 described with respect to
The one or more processors 1910 include circuitry configured to implement (e.g., execute) the code stored in the computer-readable medium/memory 1930, including circuitry for providing 1915, circuitry for obtaining 1920, and circuitry for communicating 1925. Processing with circuitry 1915-1925 may enable and cause the communications device 1900 to perform the method 1600 described with respect to
More generally, means for communicating, transmitting, sending, or outputting for transmission may include the transceivers 354, antenna(s) 352, transmit processor 364, TX MIMO processor 366, and/or controller/processor 380 of the UE 104 illustrated in
Implementation examples are described in the following numbered clauses:
The preceding description is provided to enable any person skilled in the art to practice the various aspects described herein. The examples discussed herein are not limiting of the scope, applicability, or aspects set forth in the claims. Various modifications to these aspects will be readily apparent to those skilled in the art, and the general principles defined herein may be applied to other aspects. For example, changes may be made in the function and arrangement of elements discussed without departing from the scope of the disclosure. Various examples may omit, substitute, or add various procedures or components as appropriate. For instance, the methods described may be performed in an order different from that described, and various actions may be added, omitted, or combined. Also, features described with respect to some examples may be combined in some other examples. For example, an apparatus may be implemented or a method may be practiced using any number of the aspects set forth herein. In addition, the scope of the disclosure is intended to cover such an apparatus or method that is practiced using other structure, functionality, or structure and functionality in addition to, or other than, the various aspects of the disclosure set forth herein. It should be understood that any aspect of the disclosure disclosed herein may be embodied by one or more elements of a claim.
The various illustrative logical blocks, modules, and circuits described in connection with the present disclosure may be implemented or performed with a general-purpose processor, an AI processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other programmable logic device (PLD), discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein. A general-purpose processor may be a microprocessor, but in the alternative, the processor may be any commercially available processor, controller, microcontroller, or state machine. A processor may also be implemented as a combination of computing devices, e.g., a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, a system on a chip (SoC), or any other such configuration.
As used herein, a phrase referring to “at least one of” a list of items refers to any combination of those items, including single members. As an example, “at least one of: a, b, or c” is intended to cover a, b, c, a-b, a-c, b-c, and a-b-c, as well as any combination with multiples of the same element (e.g., a-a, a-a-a, a-a-b, a-a-c, a-b-b, a-c-c, b-b, b-b-b, b-b-c, c-c, and c-c-c or any other ordering of a, b, and c).
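The subset portion of the enumeration above can be checked mechanically. The following sketch is illustrative only (its variable names are not part of the disclosure); it enumerates every non-empty subset of the list "a, b, or c", reproducing the seven single- and multi-element combinations recited in the preceding paragraph. It does not enumerate the additional combinations with repeated elements (e.g., a-a or a-a-b), which the phrase also covers.

```python
from itertools import combinations

# Enumerate every non-empty subset of the listed items, in increasing
# subset size, matching the order of the enumeration in the text.
items = ["a", "b", "c"]
covered = []
for r in range(1, len(items) + 1):
    for combo in combinations(items, r):
        covered.append("-".join(combo))

print(covered)
# ['a', 'b', 'c', 'a-b', 'a-c', 'b-c', 'a-b-c']
```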
As used herein, the term “determining” encompasses a wide variety of actions. For example, “determining” may include calculating, computing, processing, deriving, investigating, looking up (e.g., looking up in a table, a database or another data structure), ascertaining and the like. Also, “determining” may include receiving (e.g., receiving information), accessing (e.g., accessing data in a memory) and the like. Also, “determining” may include resolving, selecting, choosing, establishing and the like.
As used herein, “coupled to” and “coupled with” generally encompass direct coupling and indirect coupling (e.g., including intermediary coupled aspects) unless stated otherwise. For example, stating that a processor is coupled to a memory allows for a direct coupling or a coupling via an intermediary aspect, such as a bus.
The methods disclosed herein comprise one or more actions for achieving the methods. The method actions may be interchanged with one another without departing from the scope of the claims. In other words, unless a specific order of actions is specified, the order and/or use of specific actions may be modified without departing from the scope of the claims. Further, the various operations of methods described above may be performed by any suitable means capable of performing the corresponding functions. The means may include various hardware and/or software component(s) and/or module(s), including, but not limited to, a circuit, an ASIC, or a processor.
The following claims are not intended to be limited to the aspects shown herein, but are to be accorded the full scope consistent with the language of the claims. Reference to an element in the singular is not intended to mean only one unless specifically so stated, but rather “one or more.” The subsequent use of a definite article (e.g., “the” or “said”) with an element (e.g., “the processor”) is not intended to invoke a singular meaning (e.g., “only one”) on the element unless otherwise specifically stated. For example, reference to an element (e.g., “a processor,” “a controller,” “a memory,” “a transceiver,” “an antenna,” “the processor,” “the controller,” “the memory,” “the transceiver,” “the antenna,” etc.), unless otherwise specifically stated, should be understood to refer to one or more elements (e.g., “one or more processors,” “one or more controllers,” “one or more memories,” “one or more transceivers,” etc.). The terms “set” and “group” are intended to include one or more elements, and may be used interchangeably with “one or more.” Where reference is made to one or more elements performing functions (e.g., steps of a method), one element may perform all functions, or more than one element may collectively perform the functions. When more than one element collectively performs the functions, each function need not be performed by each of those elements (e.g., different functions may be performed by different elements) and/or each function need not be performed in whole by only one element (e.g., different elements may perform different sub-functions of a function). Similarly, where reference is made to one or more elements configured to cause another element (e.g., an apparatus) to perform functions, one element may be configured to cause the other element to perform all functions, or more than one element may collectively be configured to cause the other element to perform the functions.
Unless specifically stated otherwise, the term “some” refers to one or more. All structural and functional equivalents to the elements of the various aspects described throughout this disclosure that are known or later come to be known to those of ordinary skill in the art are intended to be encompassed by the claims. Moreover, nothing disclosed herein is intended to be dedicated to the public regardless of whether such disclosure is explicitly recited in the claims.