The subject matter disclosed herein generally relates to wireless communications, and more particularly relates to 5GS assisted adaptive AI or ML operation.
The following abbreviations are herewith defined, at least some of which are referred to within the following description: New Radio (NR), Very Large Scale Integration (VLSI), Random Access Memory (RAM), Read-Only Memory (ROM), Erasable Programmable Read-Only Memory (EPROM or Flash Memory), Compact Disc Read-Only Memory (CD-ROM), Local Area Network (LAN), Wide Area Network (WAN), User Equipment (UE), Evolved Node B (eNB), Next Generation Node B (gNB), Uplink (UL), Downlink (DL), Central Processing Unit (CPU), Graphics Processing Unit (GPU), Field Programmable Gate Array (FPGA), Orthogonal Frequency Division Multiplexing (OFDM), Radio Resource Control (RRC), Mobile Terminal (MT), artificial intelligence (AI), machine learning (ML), convolutional neural network (CNN), Augmented Reality (AR), Virtual Reality (VR), frames per second (FPS), 5G Core (5GC), 5G system (5GS), Quality of Service (QoS), end to end (E2E), Technical Specification (TS), Technical Report (TR), Radio Access Network (RAN), the interface between the gNB and the 5G Core Network (NG), Next Generation Radio Access Network (NG-RAN), Protocol Data Unit (PDU), Modulation and Coding Scheme (MCS), Channel State Information (CSI), UE route selection policy (URSP), Route Selection Descriptor (RSD), Non-Access Stratum (NAS), Session and Service Continuity (SSC), Single Network Slice Selection Assistance Information (S-NSSAI), data network (DN), Application Function (AF), Policy Control Function (PCF), Access and Mobility Management Function (AMF), QoS Flow ID (QFI), Network Exposure Function (NEF), Session Management Function (SMF), Policy and Charging Control (PCC), slice/service type (SST), Data Network Name (DNN), Public Land Mobile Network (PLMN), Information Element (IE), Centralized Unit (CU), Distributed Unit (DU).
AI/ML-based (which means AI-based or ML-based, where AI refers to artificial intelligence and ML refers to machine learning) mobile applications are increasingly computation-intensive, memory-consuming and power-consuming. Meanwhile, end devices (e.g., UEs) usually have stringent energy, compute and memory limitations for running a complete offline AI/ML inference on-board. The intention of AI/ML operation splitting is to offload the computation-intensive and energy-intensive parts to network endpoints, whereas the privacy-sensitive and delay-sensitive parts are left at the end device.
Convolutional neural network (CNN) models have been widely used for image and/or video recognition tasks on mobile devices, e.g. image classification, image segmentation, object localization and detection, face authentication, action recognition, enhanced photography, augmented reality (AR) and virtual reality (VR), video games. One example of AI/ML operation splitting for image recognition between UE (i.e., end device) and AI/ML application server (i.e., network server) is shown in
For the AI/ML splitting operation, the device executes the inference up to a specific CNN layer and sends the intermediate data to the network server. The network server runs through the remaining CNN layers and sends the inference result back to the device. The intermediate data size transferred from the UE to the AI/ML application server depends on the location of the split point. Assume that 227×227 images from a video stream with 30 frames per second (FPS) need to be classified. With the AlexNet model, the required (or requested) UL data rate for different split points ranges from 4.8 to 65 Mbit/s (listed in Table 1). Different AI/ML models have different UL data rate requirements depending on the split point. With the VGG-16 model, the required (or requested) UL data rate for different split points ranges from 24 to 720 Mbit/s (listed in Table 2). For images with a higher resolution, higher data rates would be required.
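As a rough illustration of how the required UL data rate follows from the split point, the minimal sketch below multiplies the size of the intermediate activation tensor produced at the split point by the frame rate. The tensor shape and bit width in the example are assumptions for illustration only; the actual values per split point are those summarized in Tables 1 and 2.

```python
# Illustrative sketch: required UL data rate for a given AI/ML split point.
# The intermediate tensor shape and bit width below are assumptions, not
# values taken from Table 1 or Table 2.

def required_ul_rate_mbps(tensor_shape, bits_per_element, fps):
    """UL data rate (Mbit/s) needed to ship the intermediate activation
    tensor produced at the split point for every frame."""
    elements = 1
    for dim in tensor_shape:
        elements *= dim
    bits_per_frame = elements * bits_per_element
    return bits_per_frame * fps / 1e6

# Example: a hypothetical 27x27x96 activation tensor, 8-bit quantized,
# for a 30 FPS stream of 227x227 images as in the text above.
print(required_ul_rate_mbps((27, 27, 96), 8, 30))   # ~16.8 Mbit/s
```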
It can be seen that, for AI/ML model splitting, the required UL data rate is related to the AI/ML model, the model split point, the image resolution and the frame rate of the image recognition. That is, awareness of the UE UL data rate at the AI/ML application server helps it determine the AI/ML model and the splitting point.
Image recognition is an area where a rich set of pre-trained AI/ML models is available. Due to the limited storage resources in the device, the device needs to download the AI/ML model from the network before the image recognition task can start. The sizes of typical deep neural network models for image recognition are listed in Table 3.
As shown in Table 3, if the downloading latency is 1 s, the required DL data rate ranges from 134.4 Mbps to 1.92 Gbps in the case of 32-bit parameters. In the case of 8-bit parameters, the required DL data rate can be limited to 33.6 Mbps to 1.1 Gbps.
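The required DL data rate is simply the model size in bits divided by the download latency budget. The sketch below reproduces the lower end of the range quoted above; the parameter count used (4.2 million, roughly a 1.0 MobileNet-224) is an assumption chosen only to match the 134.4 Mbps / 33.6 Mbps figures in the text.

```python
# Illustrative sketch: required DL data rate for AI/ML model downloading.

def required_dl_rate_mbps(num_parameters, bits_per_parameter, download_latency_s):
    """DL data rate (Mbit/s) needed to download the model within the latency budget."""
    model_size_bits = num_parameters * bits_per_parameter
    return model_size_bits / download_latency_s / 1e6

print(required_dl_rate_mbps(4.2e6, 32, 1.0))  # ~134.4 Mbit/s with 32-bit parameters
print(required_dl_rate_mbps(4.2e6, 8, 1.0))   # ~33.6 Mbit/s with 8-bit parameters
```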
It can be seen that, for AI/ML model downloading, the required DL data rate is related to the AI/ML model and the parameter quantization size. That is, awareness of the UE DL data rate at the AI/ML application server helps it determine the AI/ML model for downloading.
The use cases and potential requirements for 5G system support of AI/ML model distribution and transfer (download, upload, updates, etc.) have been described in TR 22.874. Three aspects of AI/ML service are included:
The consolidated potential requirements in TR 22.874 state that the 5G system shall be able to expose QoS information to an authorized 3rd party. The QoS information can include, e.g., UE UL and/or DL bitrate, latency, and reliability per location. In the work task of the Release 18 SA2 AI/ML topic, it is stated that 5GS information exposure extensions for 5GC NF(s) target exposing UE and/or network conditions and performance predictions (e.g., location, QoS, load, congestion, etc.), as well as whether and how to expose such information to the UE and/or to an authorized 3rd party to assist the application AI/ML operation.
It is assumed that the AI/ML application server determines whether the UE should download the AI/ML model and perform inference by itself, or whether AI/ML model splitting should be performed, in which case the inference results generated by the AI/ML application server are fed back to the UE. So, the AI/ML application server should be aware of the UE UL and/or DL data rates. Meanwhile, if it is the application in the UE that determines the AI/ML operation, the application in the UE should be aware of the UE UL and/or DL data rates.
In order to enable awareness of UE UL and/or DL data rates at AI/ML application server, 5GS would be triggered to provide UE UL and/or DL data rates (e.g. the supported UL and/or DL data rates, or the predicted UL and/or DL data rates), which can also be referred to as AI/ML assistant information.
The present disclosure aims to address the above issues.
Method and apparatus for 5GS assisted adaptive AI or ML operation are disclosed.
In one embodiment, an SMF comprises a processor; and a transceiver coupled to the processor, wherein the processor is configured to: receive, via the transceiver, app assistant information from a NG-RAN node, wherein the app assistant information includes at least one of predicted UL data rate of UE and predicted DL data rate of UE; and transmit, via the transceiver, the app assistant information to an AF.
In one embodiment, the processor is further configured to transmit, via the transceiver, an app assistant information request to the NG-RAN node. The app assistant information request may be constructed based on an app assistant information request obtained from another network entity, e.g., based on the app assistant information request included in PCC rules from the PCF. The app assistant information request may alternatively be constructed based on a determination that the app assistant information is necessary for an established PDU session or QoS flow, e.g., based on a specific combination of at least one of 5QI, S-NSSAI, SST, DNN, and application identifier according to pre-configuration. The app assistant information request may further alternatively be constructed based on an app assistant information request received from the UE. In some embodiments, the app assistant information request is contained in N2 SM information to be transmitted to the NG-RAN node via the AMF.
In some embodiments, the app assistant information request may include a request for UL/DL data rate, or a request for data rate monitoring. The request for data rate monitoring may include at least one of (1) type of monitoring, (2) reporting levels, (3) reporting periodicity, (4) reporting stop timer and (5) the required UL/DL data rate.
In some embodiments, the app assistant information is exposed to the AF directly or via the NEF.
In another embodiment, an AF comprises a processor; and a transceiver coupled to the processor, wherein the processor is configured to: receive, via the transceiver, app assistant information from an SMF, wherein the app assistant information includes at least one of predicted UL data rate of UE and predicted DL data rate of UE; and determine operation of traffic at least based on the received app assistant information.
In one embodiment, the processor is further configured to transmit, via the transceiver, an app assistant information request to SMF. For example, the app assistant information request is transmitted to PCF so as to be included in PCC rules to the SMF. The app assistant information request may include a request for UL/DL data rate, or a request for data rate monitoring. The request for data rate monitoring may include at least one of (1) type of monitoring, (2) reporting levels, (3) reporting periodicity, (4) reporting stop timer and (5) the required UL/DL data rate.
In yet another embodiment, an NG-RAN node comprises a processor; and a transceiver coupled to the processor, wherein the processor is configured to: receive, via the transceiver, an app assistant information request from SMF or UE; and transmit, via the transceiver, app assistant information to the SMF or UE, wherein the app assistant information includes at least one of predicted UL data rate of UE and predicted DL data rate of UE.
In one embodiment, the app assistant information request is received via the AMF by being included in N2 SM information provided by the SMF. In some embodiments, the app assistant information request includes a request for UL/DL data rate. In some embodiments, the app assistant information request includes a request for data rate monitoring, wherein the processor is further configured to monitor a data rate, and transmit, via the transceiver, the app assistant information periodically or when the monitored data rate meets a condition. The request for data rate monitoring may include at least one of (1) type of monitoring, (2) reporting levels, (3) reporting periodicity, (4) reporting stop timer and (5) the required UL/DL data rate.
A more particular description of the embodiments briefly described above will be rendered by referring to specific embodiments that are illustrated in the appended drawings. Understanding that these drawings depict only some embodiments, and are not therefore to be considered to be limiting of scope, the embodiments will be described and explained with additional specificity and detail through the use of the accompanying drawings, in which:
As will be appreciated by one skilled in the art, certain aspects of the embodiments may be embodied as a system, apparatus, method, or program product. Accordingly, embodiments may take the form of an entirely hardware embodiment, an entirely software embodiment (including firmware, resident software, micro-code, etc.) or an embodiment combining software and hardware aspects that may generally all be referred to herein as a “circuit”, “module” or “system”. Furthermore, embodiments may take the form of a program product embodied in one or more computer readable storage devices storing machine-readable code, computer readable code, and/or program code, referred to hereafter as “code”. The storage devices may be tangible, non-transitory, and/or non-transmission. The storage devices may not embody signals. In a certain embodiment, the storage devices only employ signals for accessing code.
Certain functional units described in this specification may be labeled as “modules”, in order to more particularly emphasize their independent implementation. For example, a module may be implemented as a hardware circuit comprising custom very-large-scale integration (VLSI) circuits or gate arrays, off-the-shelf semiconductors such as logic chips, transistors, or other discrete components. A module may also be implemented in programmable hardware devices such as field programmable gate arrays, programmable array logic, programmable logic devices or the like.
Modules may also be implemented in code and/or software for execution by various types of processors. An identified module of code may, for instance, include one or more physical or logical blocks of executable code which may, for instance, be organized as an object, procedure, or function. Nevertheless, the executables of an identified module need not be physically located together, but, may include disparate instructions stored in different locations which, when joined logically together, include the module and achieve the stated purpose for the module.
Indeed, a module of code may contain a single instruction, or many instructions, and may even be distributed over several different code segments, among different programs, and across several memory devices. Similarly, operational data may be identified and illustrated herein within modules and may be embodied in any suitable form and organized within any suitable type of data structure. This operational data may be collected as a single data set, or may be distributed over different locations including over different computer readable storage devices. Where a module or portions of a module are implemented in software, the software portions are stored on one or more computer readable storage devices.
Any combination of one or more computer readable medium may be utilized. The computer readable medium may be a computer readable storage medium. The computer readable storage medium may be a storage device storing code. The storage device may be, for example, but need not necessarily be, an electronic, magnetic, optical, electromagnetic, infrared, holographic, micromechanical, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing.
A non-exhaustive list of more specific examples of the storage device would include the following: an electrical connection having one or more wires, a portable computer diskette, a hard disk, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or Flash Memory), portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this document, a computer-readable storage medium may be any tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device.
Code for carrying out operations for embodiments may include any number of lines and may be written in any combination of one or more programming languages including an object-oriented programming language such as Python, Ruby, Java, Smalltalk, C++, or the like, and conventional procedural programming languages, such as the “C” programming language, or the like, and/or machine languages such as assembly languages. The code may be executed entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the last scenario, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider).
Reference throughout this specification to “one embodiment”, “an embodiment”, or similar language means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment. Thus, appearances of the phrases “in one embodiment”, “in an embodiment”, and similar language throughout this specification may, but do not necessarily, all refer to the same embodiment, but mean “one or more but not all embodiments” unless expressly specified otherwise. The terms “including”, “comprising”, “having”, and variations thereof mean “including but not limited to”, unless otherwise expressly specified. An enumerated listing of items does not imply that any or all of the items are mutually exclusive, unless expressly specified otherwise. The terms “a”, “an”, and “the” also refer to “one or more” unless otherwise expressly specified.
Furthermore, described features, structures, or characteristics of various embodiments may be combined in any suitable manner. In the following description, numerous specific details are provided, such as examples of programming, software modules, user selections, network transactions, database queries, database structures, hardware modules, hardware circuits, hardware chips, etc., to provide a thorough understanding of embodiments. One skilled in the relevant art will recognize, however, that embodiments may be practiced without one or more of the specific details, or with other methods, components, materials, and so forth. In other instances, well-known structures, materials, or operations are not shown or described in detail to avoid any obscuring of aspects of an embodiment.
Aspects of different embodiments are described below with reference to schematic flowchart diagrams and/or schematic block diagrams of methods, apparatuses, systems, and program products according to embodiments. It will be understood that each block of the schematic flowchart diagrams and/or schematic block diagrams, and combinations of blocks in the schematic flowchart diagrams and/or schematic block diagrams, can be implemented by code. This code may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which are executed via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the schematic flowchart diagrams and/or schematic block diagrams for the block or blocks.
The code may also be stored in a storage device that can direct a computer, other programmable data processing apparatus, or other devices, to function in a particular manner, such that the instructions stored in the storage device produce an article of manufacture including instructions which implement the function specified in the schematic flowchart diagrams and/or schematic block diagrams block or blocks.
The code may also be loaded onto a computer, other programmable data processing apparatus, or other devices, to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the code executed on the computer or other programmable apparatus provides processes for implementing the functions specified in the flowchart and/or block diagram block or blocks.
The schematic flowchart diagrams and/or schematic block diagrams in the Figures illustrate the architecture, functionality, and operation of possible implementations of apparatuses, systems, methods and program products according to various embodiments. In this regard, each block in the schematic flowchart diagrams and/or schematic block diagrams may represent a module, segment, or portion of code, which includes one or more executable instructions of the code for implementing the specified logical function(s).
It should also be noted that in some alternative implementations, the functions noted in the block may occur out of the order noted in the Figures. For example, two blocks shown in succession may substantially be executed concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. Other steps and methods may be conceived that are equivalent in function, logic, or effect to one or more blocks, or portions thereof, to the illustrated Figures.
Although various arrow types and line types may be employed in the flowchart and/or block diagrams, they are understood not to limit the scope of the corresponding embodiments. Indeed, some arrows or other connectors may be used to indicate only the logical flow of the depicted embodiment. For instance, an arrow may indicate a waiting or monitoring period of unspecified duration between enumerated steps of the depicted embodiment. It will also be noted that each block of the block diagrams and/or flowchart diagrams, and combinations of blocks in the block diagrams and/or flowchart diagrams, can be implemented by special purpose hardware-based systems that perform the specified functions or acts, or combinations of special purpose hardware and code.
The description of elements in each figure may refer to elements of preceding figures. Like numbers refer to like elements in all figures, including alternate embodiments of like elements.
The input and output for AI/ML application server are summarized in Table 4.
For AI/ML model download, the AI/ML application server should collect input from the UE and from the 5GS and make the decision on the AI/ML model and the parameter quantization size. The input from the UE includes the requested AI/ML model, the requested parameter quantization size, the available computation and energy resources, memory limitations, etc. The input from the UE may be transmitted over the application layer, which is out of the scope of this disclosure. The input from the 5GS includes the predicted DL data rate (from the NG-RAN node, e.g., gNB), and optionally the predicted UL and/or DL packet delay. The predicted DL data rate can be regarded as the supported or available data rate that can be provided by the 5G network to the UE. Alternatively, the predicted data rate can be provided by the gNB per QoS flow or PDU session. Conversely, the AI/ML application server should also provide the 5GS with the required DL data rate or required bandwidth and even the requested (or required) UL and/or DL packet delay as part of the QoS requirement. For the application layer, the AI/ML application server provides the application in the UE (note that the application in the UE can be expressed as application/UE) with the AI/ML model and the parameter quantization size.
For AI/ML model splitting, the AI/ML application server should collect input from the UE and from the 5GS and make the decision on the AI/ML model and the splitting point, and optionally the required image resolution and FPS. The input from the UE includes the image resolution, FPS, the available computation and energy resources, memory limitations, the required E2E latency, the privacy protection requirement, the available AI/ML models in the UE, etc. The input from the 5GS includes the predicted UL data rate, and optionally the predicted UL/DL packet delay. Conversely, the AI/ML application server should also provide the 5GS with the required (or requested) UL and/or DL data rate or bandwidth and even the requested (or required) UL and/or DL packet delay as part of the QoS requirement. For the application layer, the AI/ML application server provides the application/UE with the AI/ML model and the splitting point.
Actually, either the application in the UE (i.e., application/UE) or the AI/ML application server can make the decision on the AI/ML operation. If the application/UE makes the decision, input from the 5GS may be sent to the UE directly in an RRC message by the NG-RAN node. Alternatively, input from the 5GS may be sent to the UE in a NAS message by a 5G core network entity, e.g., SMF or AMF. That is, the NG-RAN node provides the 5GC with the necessary information, and the 5GC forwards it to the UE in a NAS message. For the case of AI/ML model downloading, the application/UE provides the AI/ML application server with the AI/ML model and the parameter quantization size over the application layer. For the case of AI/ML model splitting, the application/UE provides the AI/ML application server with the AI/ML model and the splitting point over the application layer. In the following embodiments, the AI/ML application server is taken as the decision maker as an example.
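For illustration, the sketch below shows one possible shape of the decision logic at the decision maker: pick the largest model whose download-rate requirement fits the predicted DL data rate, otherwise fall back to model splitting if the predicted UL data rate can carry the intermediate data. The per-model DL rate values reuse the level list given later in this text (Table 3, 8-bit parameters, 1 s download latency); the selection policy itself is an assumption, not the method mandated by this disclosure.

```python
# Illustrative decision sketch for "download vs. split".
# REQUIRED_DL_RATE_MBPS reuses the level values quoted in this text
# (8-bit parameters, 1 s download latency); the policy is an assumption.

REQUIRED_DL_RATE_MBPS = {
    "1.0 MobileNet-224": 33.6,
    "GoogleNet": 54.4,
    "Inception-V3": 184.0,
    "ResNet-50": 200.0,
    "AlexNet": 480.0,
    "VGG16": 1104.0,
}

def decide_operation(predicted_dl_mbps, predicted_ul_mbps, required_split_ul_mbps):
    # Prefer full model download when the predicted DL rate allows it.
    candidates = [m for m, r in REQUIRED_DL_RATE_MBPS.items() if r <= predicted_dl_mbps]
    if candidates:
        best = max(candidates, key=REQUIRED_DL_RATE_MBPS.get)
        return ("download", best)
    # Otherwise split, provided the UL rate can carry the intermediate data.
    if predicted_ul_mbps >= required_split_ul_mbps:
        return ("split", "split point chosen per the UL requirements in Tables 1/2")
    return ("reject", None)

print(decide_operation(predicted_dl_mbps=250.0, predicted_ul_mbps=20.0,
                       required_split_ul_mbps=16.8))   # ('download', 'ResNet-50')
```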
Incidentally, how to predict the DL data rate and/or the UL data rate for the UE or for the PDU session is out of the scope of this disclosure. One example may be: the gNB obtains the measurement results (e.g., channel state information, CSI) reported by the UE, and the gNB determines the modulation and coding scheme (MCS) based on the CSI. The DL data rate can be predicted based on the MCS and the available DL bandwidth. Besides, the number of concurrently active UEs and PDU sessions can be considered when predicting the available DL bandwidth. Regarding the predicted UL and/or DL packet delay, the gNB may obtain the statistics of the UL and/or DL packet delay of the existing UEs and make the prediction for the new UE, or the gNB may perform QoS monitoring for packet delay as the specification states. Another example is: the gNB calculates the actual UL and/or DL data rate based on statistics and uses it as the predicted UL and/or DL data rate. In this disclosure, it is assumed that the gNB can predict the DL data rate and/or the UL data rate.
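A minimal sketch of the first prediction example follows: the DL data rate is estimated from the MCS-derived spectral efficiency and the share of DL bandwidth available to the UE. The linear bandwidth-sharing model and the numbers used are simplifying assumptions for illustration only, not the 3GPP-defined link adaptation behavior.

```python
# Very rough gNB-side DL data rate prediction sketch (assumed model).

def predict_dl_rate_mbps(spectral_efficiency_bps_per_hz,
                         cell_dl_bandwidth_mhz,
                         num_active_ues):
    """Predicted DL data rate for one UE, assuming the DL bandwidth is
    shared equally among the concurrently active UEs."""
    per_ue_bandwidth_hz = cell_dl_bandwidth_mhz * 1e6 / max(num_active_ues, 1)
    return spectral_efficiency_bps_per_hz * per_ue_bandwidth_hz / 1e6

# Example: spectral efficiency 4 bit/s/Hz (from the CSI-selected MCS),
# 100 MHz DL bandwidth shared by 10 active UEs -> ~40 Mbit/s per UE.
print(predict_dl_rate_mbps(4.0, 100, 10))
```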
As mentioned in the background part, the UL data rate is related to the AI/ML model, the model split point, the image resolution and the frame rate of the image recognition, while the DL data rate is related to the AI/ML model and the parameter quantization size. So, the AI/ML application server expects to know the predicted UL data rate and the predicted DL data rate depending on the actual use. To simplify the description, unless specifically indicated, both “data rate” and “UL/DL data rate” mean UL data rate and/or DL data rate for the UE. Similarly, “UL/DL packet delay” means UL packet delay and/or DL packet delay for the UE.
The application AI/ML traffic can be transmitted exclusively over an AI/ML specific PDU session, or in a regular PDU session in which regular 5GS user traffic (i.e., non-AI/ML traffic or non-application AI/ML traffic) can also be transmitted.
An AI/ML specific PDU session can be defined as follows. The UE route selection policy (URSP) should be modified to enable the UE to request a new PDU session establishment regardless of whether there is an existing PDU session that matches all components in the selected Route Selection Descriptor (RSD). In URSP rules, an exclusive indication is added in the RSD associated with the AI/ML specific traffic descriptor, as shown in Table 5. The exclusive indication can also be called an exclusive PDU session indication, or a segregation indication. The exclusive indication can be enumerated as true or false, or alternatively as 1 or 0. Alternatively, the exclusive indication can only be enumerated as true or 1. The underlined parts of Table 5 show the modification to the existing RSD.
Table 5 excerpt (new Route Selection Descriptor information; underlined in the original):
Information name: Exclusive indication | Description: One single value of exclusive indication | Category: Optional (NOTE 9) | PCF permitted to modify in a URSP rule sent to the UE: No | Scope: UE context
Information name: App assistant information request | Description: One single value of app assistant information request | Category: Optional (NOTE 10) | PCF permitted to modify in a URSP rule sent to the UE: No | Scope: UE context
The UE evaluates the URSP rules in the order of rule precedence and determines whether the AI/ML application matches the traffic descriptor of any URSP rule. When a URSP rule is determined to be applicable for the AI/ML application, the UE selects the corresponding RSD which contains the exclusive indication. Based on the selected RSD, the UE determines the SSC mode, PDU session type, network slice, DNN (Data Network Name), access type, etc. of the PDU session. As indicated by the exclusive indication, the UE stops, or does not start, searching for an existing PDU session that matches all components in the selected RSD. Instead, the UE tries to establish a new PDU session using the values specified by the selected RSD. The URSP handling layer requests the UE NAS layer to establish a PDU session, providing the PDU session attributes together with the exclusive indication. Upon successful completion of the PDU session establishment, the UE NAS layer shall additionally indicate the attributes of the established PDU session (e.g., PDU session identity, SSC mode, S-NSSAI, DNN, PDU session type, access type, PDU address, exclusive indication) of the newly established PDU session to the URSP handling layer. After that, the PDU session with the exclusive indication will not be evaluated by the UE's URSP handling layer to match all components in the selected RSD for other newly detected applications.
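The sketch below illustrates the UE-side PDU session selection behavior described above with simplified data structures. The classes and field names are illustrative assumptions; the actual URSP rule and RSD encodings are defined in the 3GPP specifications.

```python
# Illustrative sketch of RSD-based PDU session selection with the
# exclusive indication (simplified, assumed data structures).

from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Rsd:
    sscMode: int
    sNssai: str
    dnn: str
    pduSessionType: str
    accessType: str
    exclusiveIndication: bool = False

@dataclass
class PduSession:
    rsd: Rsd
    exclusive: bool = False   # session was established with the exclusive indication

def select_pdu_session(selected_rsd: Rsd,
                       existing_sessions: List[PduSession]) -> Optional[PduSession]:
    """Return an existing PDU session to reuse, or None when a new
    PDU session must be established."""
    if selected_rsd.exclusiveIndication:
        # Exclusive indication: do not search for a matching existing session;
        # always request establishment of a new PDU session.
        return None
    for session in existing_sessions:
        # Sessions established with the exclusive indication are never
        # re-evaluated for other newly detected applications.
        if session.exclusive:
            continue
        if session.rsd == selected_rsd:
            return session
    return None
```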
According to a first embodiment, it is assumed that the AI/ML application server requests the 5GS to provide the app assistant information, e.g., the predicted UL/DL data rate and/or the predicted UL/DL packet delay over the air interface, for the PDU session or QoS flow of the UE. In the first embodiment, an AF request for app assistant information is introduced without UE impact, which applies to existing UEs.
In step 205, the AI/ML application requests connectivity. Note that the AI/ML application and the UE are illustrated as two components in
In step 210, the PDU session establishment procedure is performed, by which the default QoS flow is established for the established PDU session.
In step 220, the AI/ML application in UE exchanges data with the AI/ML application server over the established PDU session. For example, UE may provide the parameters (e.g. input from UE) as shown in Table 4. Besides, UE may also provide AI/ML application server with predicted UL/DL data rate based on the measurement results over application layer. The predicted UL/DL data rate calculated by UE based on the measurement results over application layer may not be accurate compared with the predicted UL/DL data rate provided by the NG-RAN node.
In step 230, the AI/ML application server determines that application AI/ML traffic, or another application traffic type that needs app assistant information, will be transmitted over the established PDU session, and decides that an AF request with an app assistant information request needs to be performed. Incidentally, the present disclosure is not limited to application AI/ML traffic but also applies to other application traffic types that need the app assistant information for the application server to make better decisions or adaptations for the traffic (e.g., application AI/ML traffic or other application traffic types which need the app assistant information).
In step 240, AF (or AI/ML application server) sends AF request with app assistant information request (which may be contained in the required QoS), to PCF via NEF based on TS 23.502 clause 4.15.6.6 (setting up an AF session with required QoS procedure). It is assumed that AF communicates with application server (e.g. AI/ML application server) in the data network (DN) behind the 5GC. The interaction between application server and AF is out of the scope of this disclosure. In this disclosure, AF can be regarded as the representative of the AI/ML application server. The app assistant information request can be at least one of “request for UL data rate”, “request for DL data rate”, “request for UL and DL data rates”, “request for UL packet delay”, “request for DL packet delay”, and “request for UL and DL packet delays”. Alternatively to “setting up an AF session with required QoS procedure”, AF may provide the app assistant information request based on the procedure described in TS 23.502 4.3.6.4 (Transferring an AF request targeting an individual UE address to the relevant PCF). The app assistant information request can be regarded as: the AF subscribes to events of APP_ASSISTANT_INFO (see the detailed explanation with reference to Table 6 in the following step 280). AF may subscribe to events of APP_ASSISTANT_INFO in AF request separately or jointly with other events or QoS request.
In step 250, PCF initiates SM Policy Association Modification procedure as defined in TS 23.502 clause 4.16.5.2 to notify SMF about the modification of policies, which further triggers PDU session modification procedure. PCF includes the app assistant information request in the PCC rules provided to SMF. Please note that steps 240 and 250 are an implementation of transmitting the app assistant information request from AF to SMF.
In step 260, SMF configures NG-RAN node (indicated as “RAN” in
In step 270, NG-RAN node obtains the app assistant information (e.g. at least one of predicted UL data rate and predicted DL data rate) and sends the app assistant information. For example, NG-RAN node sends N2 PDU session response message with N2 SM information containing app assistant information to AMF. Then, AMF forwards the N2 SM information received from NG-RAN node to SMF. The app assistant information can be associated with PDU session (e.g. PDU session ID) or QoS flow (e.g. QFI). If the app assistant information is associated with PDU session, NG-RAN node should provide PDU session ID and the app assistant information associated with the PDU session ID. If the app assistant information is associated with QoS flow, NG-RAN node should provide PDU session ID, QFI and the app assistant information associated with the QFI.
In step 280, the SMF transmits the app assistant information obtained from the NG-RAN node to the AF, e.g., by exposing it to the AF directly or via the NEF. The procedure in TS 23.502 4.3.6.3 (Notification of User Plane Management Events) or 4.15.3.2.3 (NEF service operations information flow for event exposure) can be reused for the SMF to provide the app assistant information to the AF. The SMF triggers Nsmf_EventExposure_Notify with the app assistant information. A new event APP_ASSISTANT_INFO should be defined in EventNotification. The underlined parts of Table 6 show the modification to the EventNotification contained in the Nsmf_EventExposure_Notify message, which is defined in TS 29.508 Table 5.6.2.5-1. That is, the SMF may provide the appAssistantInfo (e.g., at least one of the predicted UL data rate and the predicted DL data rate) for the event APP_ASSISTANT_INFO. The SMF sends the Nsmf_EventExposure_Notify message with appAssistantInfo (i.e., app assistant information) to the AF directly. Alternatively, the SMF sends the Nsmf_EventExposure_Notify message with appAssistantInfo to the NEF, and the NEF further forwards the appAssistantInfo in an Nnef_EventExposure_Notify message towards the AF.
Table 6 excerpt (new attribute in EventNotification; underlined in the original):
Attribute name: appAssistantInfo | Data type: AppAssistantInfo | P: O | Cardinality: 0..1 | Description: app assistant information may be included for event “APP_ASSISTANT_INFO” | Applicability: AppAssistantInfo
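For illustration, a hypothetical notification body carrying the new event and attribute proposed in Table 6 is sketched below as a Python dictionary. Only the event name and the appAssistantInfo attribute follow the modification described above; the surrounding attribute names, identifiers and units are assumptions and do not reproduce the exact TS 29.508 encoding.

```python
# Hypothetical Nsmf_EventExposure_Notify payload carrying the new
# APP_ASSISTANT_INFO event and appAssistantInfo attribute (illustrative only).

notification = {
    "notifId": "ai-ml-session-42",           # illustrative identifier
    "eventNotifs": [
        {
            "event": "APP_ASSISTANT_INFO",
            "pduSeId": 5,                     # PDU session the info is associated with
            "appAssistantInfo": {
                "predictedUlDataRate": 20,    # Mbit/s, from the NG-RAN node
                "predictedDlDataRate": 250,   # Mbit/s, from the NG-RAN node
            },
        }
    ],
}
```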
In step 290, the AI/ML application server makes the decision for operation of the AI/ML traffic (or other application traffic type that needs app assistant information) by taking the app assistant information provided by the 5GS (i.e., by the SMF, or by the SMF via the NEF) into consideration (i.e., at least based on the app assistant information). For example, the AI/ML application server makes the decision on downloading or splitting, the AI/ML model and optionally the splitting point based on the available inputs (e.g., those provided by the UE in step 220 over the application layer, and those provided by the SMF (from the NG-RAN node) in step 280).
According to the first embodiment, the app assistant information is triggered to be provided only once, which may be suitable for the use case in which the AI/ML operation lasts for a short time, e.g., 1 s. In this situation, the app assistant information should be provided as soon as possible, and a single report of the app assistant information is enough.
According to a variant of the first embodiment, the AI/ML operation may last for a long time, e.g., several minutes or hours. This means that the NG-RAN node should provide the latest app assistant information to the AF periodically or upon a reporting condition being met.
Steps 305, 310 and 320 are the same as steps 205, 210 and 220. Detailed explanations for steps 305, 310 and 320 are omitted.
In step 330, the AI/ML application server determines that AI/ML traffic, or another application traffic type that needs app assistant information, will be transmitted over the established PDU session, and decides that continuous reporting of app assistant information is necessary. For example, the AI/ML application server may determine that the AI/ML traffic would last a long time, e.g., several minutes or hours. Optionally, in step 330, the AI/ML application server may make an initial decision on downloading or splitting, the AI/ML model and optionally the splitting point based on the UE's input (including the predicted UL/DL data rate provided by the UE).
In step 340, the AF (or AI/ML application server) sends an AF request with required QoS, which includes the app assistant information request, to the PCF via the NEF. The app assistant information request may include a data rate monitoring request. The data rate monitoring request includes the type of monitoring, i.e., “UL data rate monitoring”, “DL data rate monitoring”, or “UL and DL data rate monitoring”. In addition, the data rate monitoring request may also contain the reporting parameters, e.g., the reporting condition (e.g., levels) of UL/DL data rate, the reporting periodicity, the reporting stop timer, etc. If the reporting stop timer is provided, then the NG-RAN node shall start the timer and stop data rate monitoring when the timer expires. In addition to providing the reporting condition and/or reporting periodicity and/or reporting stop timer, a one-time reporting indication can also be included in the data rate monitoring request, which is aligned with the first embodiment. Incidentally, the app assistant information request may also include the required UL/DL data rate. The required UL/DL data rate may be the requested/preferred UL/DL data rate for the AI/ML traffic. The required UL/DL data rate can be included in the data rate monitoring request. Alternatively, the required UL/DL data rate may be provided separately from the data rate monitoring request. For example, the required UL/DL data rate can be contained in the required QoS in the AF request. Further, the data rate monitoring request and the required UL/DL data rate can be provided together within one AF request, or be provided in separate AF requests.
The data rate monitoring request can be regarded as: the AF subscribes to events of DATA_RATE_MONITORING. AF shall subscribe to events of DATA_RATE_MONITORING in AF request separately or jointly with other events or QoS request.
As shown in Table 3, 6 levels of required DL data rates for different AI/ML models are shown, e.g., level #1=33.6 Mbps (for 1.0 MobileNet-224), level #2=54.4 Mbps (for GoogleNet), level #3=184 Mbps (for Inception-V3), level #4=200 Mbps (for ResNet-50), level #5=480 Mbps (for AlexNet and ResNet-152), level #6=1104 Mbps (for VGG16). Assume that the AI/ML application server decides to use the Inception-V3 model, with the required DL data rate at level #3 (=184 Mbps). If the NG-RAN node finds (e.g., in step 370) that the DL data rate is below level #3 (i.e., below 184 Mbps), it triggers the data rate reporting towards the AF via the 5GC. In short, if the data rate meets a condition (e.g., becomes larger than or smaller than a level), the NG-RAN node triggers data rate reporting. One way is to report the value of the predicted data rate. Another way is to report the level or level range of the data rate, e.g., above level 2 or in the range of level 2 to level 3.
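A minimal sketch of this level-based reporting trigger follows. The level values are the ones listed above; the specific crossing/reporting policy (report whenever the predicted rate moves to a different level) is an assumption for illustration.

```python
# Illustrative level-based data rate reporting trigger at the NG-RAN node.

REPORTING_LEVELS_MBPS = [33.6, 54.4, 184.0, 200.0, 480.0, 1104.0]  # levels #1..#6

def level_of(rate_mbps):
    """Highest level index (1-based) whose threshold the rate still satisfies,
    or 0 if the rate is below level #1."""
    level = 0
    for i, threshold in enumerate(REPORTING_LEVELS_MBPS, start=1):
        if rate_mbps >= threshold:
            level = i
    return level

def should_report(previous_rate_mbps, predicted_rate_mbps):
    """Trigger a report whenever the predicted rate moves to a different level."""
    return level_of(previous_rate_mbps) != level_of(predicted_rate_mbps)

# Example: the selected model is Inception-V3 (level #3 = 184 Mbps). A drop
# from 190 Mbit/s (level #3) to 150 Mbit/s (level #2) triggers a report.
print(should_report(190.0, 150.0))  # True
```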
Table 7 shows how AF subscribes data rate monitoring, which is modified from “table 5.14.2.1.6 Type QoSMonitoringinformation” in TS 29.122, with modifications underlined.
Table 7 excerpt (QoSMonitoringInformation, modified from TS 29.122 table 5.14.2.1.6; modifications underlined in the original):
QoSMonitoringType | Enumerated (PacketDelay, DataRate, both) | Indicates whether packet delay, data rate or both should be monitored.
QoS Monitoring for Data rate: this part defines the parameters used for data rate related QoS monitoring; it exists if the QoS monitoring type is DataRate or both.
  reqQoSMonDatarate | Indicates the type of data rate monitoring (UL, DL or both).
  Levels of UL data rate | Array(ReportingLevel) | 1..N | Indicates the reporting levels of UL data rate.
  Levels of DL data rate | Array(ReportingLevel) | 1..N | Indicates the reporting levels of DL data rate.
  Requested UL data rate | Indicates the requested UL data rate.
  Requested DL data rate | Indicates the requested DL data rate.
  Reporting periodicity | Indicates the periodicity for the reporting.
  Reporting stop timer | Indicates the stop timer for reporting.
QoS Monitoring for Packet Delay: this part defines the parameters used for packet delay related QoS monitoring; it exists if the QoS monitoring type is PacketDelay or both.
In Table 7, the structure of QoSMonitoringInformation is modified to include the QoS monitoring type, the parameters of QoS monitoring for data rate and the parameters of QoS monitoring for packet delay. In the QoS monitoring for data rate, reqQoSMonDatarate IE indicates the type of monitoring, i.e., UL data rate monitoring, DL data rate monitoring or both. Besides, the levels of UL/DL data rate, the reporting periodicity, the reporting stop timer and the required UL/DL data rate are provided optionally.
Alternatively, a new MonitoringInformationForDataRate can be defined by including the reqQoSMonDatarate IE. Optionally, the level of UL/DL data rate, the required UL/DL data rate, the reporting periodicity, and the reporting stop timer will also be included. By doing so, the QoSMonitoringInformation remains unchanged.
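For illustration, the alternative MonitoringInformationForDataRate structure mentioned above could be sketched as follows. The field names mirror the parameters listed in the text; the concrete types and encoding are assumptions, since the actual definition would belong to the relevant stage-3 specification.

```python
# Illustrative sketch of a MonitoringInformationForDataRate structure
# (field names follow the text; types/encoding are assumptions).

from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class MonitoringInformationForDataRate:
    reqQoSMonDatarate: str                                   # "UL", "DL" or "UL_DL"
    ulReportingLevelsMbps: List[float] = field(default_factory=list)
    dlReportingLevelsMbps: List[float] = field(default_factory=list)
    requestedUlDataRateMbps: Optional[float] = None
    requestedDlDataRateMbps: Optional[float] = None
    reportingPeriodicitySec: Optional[int] = None
    reportingStopTimerSec: Optional[int] = None

# Example: monitor both directions, report against three DL levels,
# stop monitoring after one hour.
req = MonitoringInformationForDataRate(
    reqQoSMonDatarate="UL_DL",
    dlReportingLevelsMbps=[54.4, 184.0, 200.0],
    reportingStopTimerSec=3600,
)
```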
In step 350, PCF initiates SM Policy Association Modification procedure as defined in TS 23.502 clause 4.16.5.2 to notify SMF about the modification of policies, which further triggers PDU session modification procedure. PCF includes the data rate monitoring request (and optionally the required UL/DL data rate) in the PCC rules provided to SMF. Steps 340 and 350 are an implementation of transmitting the data rate monitoring request (which is an example of app assistant information request) from AF to SMF.
Existing QoS monitoring applies only to packet delay measurement. According to the present disclosure, data rate monitoring and optionally the required UL/DL data rate are added to the PCC rules provided by the PCF, as shown in Table 8, with the modifications underlined.
Table 8 excerpt (PCC rule additions for data rate monitoring; modifications underlined in the original):
Requested UL data rate | The requested UL bitrate for the service data flow | No | None
Requested DL data rate | The requested DL bitrate for the service data flow | No | None
Data rate Monitoring: this part describes PCC rule information related with data rate monitoring.
Data rate to be measured | UL data rate, DL data rate, or both UL and DL data rate | Yes | Added
Levels of UL data rate | Indicates the reporting levels of UL data rate | No | None
Levels of DL data rate | Indicates the reporting levels of DL data rate | No | None
Alternatively, the QoS monitoring information in PCC rule can be modified to enable data rate monitoring. E.g., the type of monitoring, i.e., “UL data rate”, “DL data rate” and “both UL and DL data rate” can be added into the “QoS parameter(s) to be measured” IE.
In step 360, the SMF configures the NG-RAN node to perform data rate monitoring (or provides the NG-RAN node with the required UL/DL data rate) via the AMF. The N2 SM information contains the PDU Session ID, QFI(s), and the data rate monitoring request (and optionally the required UL/DL data rate). The data rate monitoring request (optionally together with the required UL/DL data rate) can be associated with a PDU session or a QoS flow. If the data rate monitoring request (optionally together with the required UL/DL data rate) is associated with a PDU session, the SMF provides the PDU session ID and the data rate monitoring request (optionally together with the required UL/DL data rate) associated with the PDU session ID. If the data rate monitoring request (optionally together with the required UL/DL data rate) is associated with a QoS flow, the SMF provides the PDU session ID, the QFI and the data rate monitoring request (optionally together with the required UL/DL data rate) associated with the QFI. Upon receiving the N2 SM information, the AMF may send an N2 Message to the NG-RAN node. The N2 Message contains the N2 SM information received from the SMF and the NAS message (PDU Session ID, N1 SM container (PDU Session Modification Command)). Incidentally, if the one-time reporting indication is also included in the data rate monitoring request, steps 270, 280 and 290 may be performed accordingly (separate from steps 370, 380 and 390).
After receiving the data rate monitoring request, the NG-RAN node shall, in step 370, monitor the UL/DL data rate for the PDU session (e.g., PDU session ID) or QoS flow (e.g., QFI) as requested. To be more specific, the NG-RAN node shall predict the UL/DL data rate based on estimation or statistics as requested. The current reporting mechanism for QoS monitoring of packet delay can be reused for data rate monitoring. For example, the NG-RAN node reports the predicted UL/DL data rate result to the PSA UPF in a UL data packet or a dummy UL packet. After that, the UPF sends the data rate monitoring results to the SMF. The SMF forwards the data rate monitoring results to the PCF or the AF (via the NEF). Based on the predicted UL and/or DL data rates provided by the 5GS (e.g., from the SMF), the AI/ML application server is able to perform adaptive AI/ML operation, e.g., changing the AI/ML model, the splitting point, etc. It is assumed that the predicted UL and/or DL data rates provided by the 5GS are more accurate than the predicted UL/DL data rate calculated by the UE based on the measurement results over the application layer provided in step 320.
Different from QoS monitoring for packet delay, data rate monitoring can be done by NG-RAN node without UPF. Therefore, a new reporting mechanism based on control plane can be considered without UPF involvement. When the condition as described in step 340 is met, NG-RAN node shall trigger the report of data rate as follows.
In TS 38.413 clause 9.3.1.12, the QoS Flow Level QoS Parameters should also be modified to trigger NG-RAN node to perform data rate monitoring or to provide the required UL/DL data rate. That is, data rate monitoring can be activated per QoS flow, and the required UL/DL data rate is provided per QoS flow. As shown in Table 9, the QoS Flow Level QoS Parameters sent from SMF to NG-RAN node are modified with underlined parts.
Table 9 excerpt (QoS Flow Level QoS Parameters, TS 38.413 clause 9.3.1.12; modifications underlined in the original):
IE/Group Name: Required UL data rate | Presence: O | IE type: Data rate | Semantics description: The required UL bitrate for the service data flow | Criticality: YES | Assigned Criticality: ignore
IE/Group Name: Required DL data rate | Presence: O | IE type: Data rate | Semantics description: The required DL bitrate for the service data flow | Criticality: YES | Assigned Criticality: ignore
IE/Group Name: Data rate monitoring | Presence: O | IE type: ENUMERATED (ULRate, DLRate, Both, . . . , stop) | Semantics description: Indicates to measure UL, or DL, or both UL/DL data rate for the associated QoS flow, or to stop the corresponding data rate monitoring.
IE/Group Name: Reporting level for UL data rate monitoring | Presence: O | IE type: List of data rate | Semantics description: Indicates the reporting level for UL data rate monitoring.
IE/Group Name: Reporting level for DL data rate monitoring | Presence: O | IE type: List of data rate | Semantics description: Indicates the reporting level for DL data rate monitoring.
NG-RAN node monitors UL/DL data rate, and when the predicted UL/DL data rate meets the condition (e.g. if the data rate becomes larger than or smaller than a level), or periodically, NG-RAN node sends SMF the predicted UL/DL data rate. For example, NG-RAN node sends N2 PDU session response message with N2 SM information to AMF, and the N2 SM information contains the predicted UL/DL data rate. AMF forwards the N2 SM information received from NG-RAN node to SMF. The predicted UL/DL data rate can be associated with PDU session (e.g. PDU session ID) or QoS flow (e.g. QFI). New IEs for data rate monitoring should be added in EventNotification reported by SMF. The underlined parts of Table 10 show the modification to the EventNotification contained in the Nnef_EventExposure_Notify message, which is defined in TS 29.508 Table 5.6.2.5-1.
Table 10 excerpt (new attributes in EventNotification; underlined in the original):
Attribute name: ulDataRates | Data type: Uinteger | P: O | Cardinality: 0..1 | Description: Uplink data rate in units of Mbps. | Applicability: DataRateMonitoring
Attribute name: dlDataRates | Data type: Uinteger | P: O | Cardinality: 0..1 | Description: Downlink data rate in units of Mbps. | Applicability: DataRateMonitoring
In step 380, the SMF transmits the app assistant information obtained from the NG-RAN node to the AF, e.g., by exposing it to the AF directly or via the NEF.
In TS 23.548 clause 6.4.2, if SMF receives the indication of direct event notification from the PCF and SMF determines that the L-PSA UPF supports such reporting, the corresponding procedure should be modified to support data rate monitoring as follows. SMF sends data rate monitoring parameters and associates them with the target local NEF or local AF address to the L-PSA UPF via N4 rules. L-PSA UPF obtains data rate monitoring results from NG-RAN node and sends the notification related with data rate monitoring results over Nupf_EventExposure_Notify service operation to local NEF/AF.
In step 390, the AI/ML application server makes decision for operation of AI/ML traffic (or other application traffic type that needs app assistant information) by taking the app assistant information provided by 5GS (i.e. by SMF, or by SMF via NEF) into consideration (i.e. at least based on the app assistant information (i.e. the predicted UL/DL data rate)). For example, AI/ML application server makes decision on downloading or splitting, the AI/ML model and optionally the splitting point based on 5GS's input (e.g. provided by UE in step 320 over application layer (excluding the predicted UL/DL data rate calculated by UE based on the measurement results over application layer), and provided by SMF (from NG-RAN node) in step 380 (i.e. the predicted UL/DL data rate from NG-RAN node)).
Once the data rate is reported by the SMF (from the NG-RAN node), i.e., each time step 380 is triggered, step 390 can be performed to make a decision on adaptive operation of the AI/ML traffic. As mentioned in the description of step 340, if the reporting stop timer is contained in the data rate monitoring request, the NG-RAN node stops monitoring the UL/DL data rate once the reporting stop timer expires. Alternatively, if the reporting stop timer is not contained in the data rate monitoring request, the AF may send an AF request with a stop indication for data rate monitoring when the AF wants to stop data rate monitoring. Then the PCF shall provide the SMF with the stop indication for data rate monitoring contained in the PCC rules. Then the SMF shall indicate to the NG-RAN node to stop the data rate monitoring in the N2 information transmitted via the AMF.
In the first embodiment or the variant of the first embodiment, if handover happens to the UE, the data rate monitoring request and/or the required UL/DL data rate may also be provided to the target NG-RAN node to which the UE is handed over. For Xn based handover, the data rate monitoring request and/or the required UL/DL data rate should be contained in the Handover Request message as defined in TS 38.423. Alternatively, the data rate monitoring request and/or the required UL/DL data rate can be provided by the SMF via the AMF in the Path Switch Request Acknowledge message as defined in TS 38.413. For N2 based handover, the data rate monitoring request and/or the required UL/DL data rate should be provided by the SMF via the AMF in the Handover Request message as defined in TS 38.413.
For the CU-DU split case of the gNB, the data rate monitoring request and/or the required UL/DL data rate may also be provided from the gNB-CU to the gNB-DU over the F1 interface. E.g., the gNB-CU sends the UE CONTEXT SETUP/MODIFICATION REQUEST message, which contains the data rate monitoring request and/or the required UL/DL data rate, to the gNB-DU.
According to a second embodiment, it is assumed that the gNB is triggered by the UE (via the 5GC or SMF) to provide the application assistant information to the 5GC (e.g., SMF), which further exposes the application assistant information to the AF. In the second embodiment, a UE request (via the SMF) for app assistant information is introduced with UE impact, which applies to new UEs.
As shown in Table 5, an indication of the app assistant information request is added in the Route Selection Descriptor associated with the AI/ML specific traffic descriptor. The app assistant information request indicates to the UE's higher layer (app layer or the UE's URSP handling layer) that the application assistant information request should be provided to the NAS layer. If it is the UE's app layer that provides the request, the UE's URSP handling layer should indicate to the app layer to provide the request. Otherwise, the UE's URSP handling layer provides the application assistant information request to the NAS layer, together with the requested PDU session attributes.
Alternatively, UE is pre-configured with the specific application identifier or S-NSSAI/SST (which means S-NSSAI or SST) or DNN or traffic descriptor which requests for app assistant information. In this way, no impact on URSP rule is expected.
In step 405, the AI/ML application requests connectivity. This step is the same as step 205.
In step 410, a PDU Session Establishment Request is sent to the SMF. The app assistant information request can optionally be sent to the SMF in step 410, for example, by being included in the PDU Session Establishment Request (which is a 5GSM message). The 5GSM message (e.g., PDU Session Establishment Request) is piggybacked in the 5GMM transport message (e.g., UL NAS transport message) from the UE to the AMF. Then, the AMF forwards the 5GSM message to the SMF in the Nsmf_PDUSession_CreateSMContext Request. Alternatively, the app assistant information request can be included in the UL NAS transport message directly, and the AMF forwards the app assistant information request to the SMF directly in the Nsmf_PDUSession_CreateSMContext Request.
After step 410, steps 4-10 of PDU session establishment procedure described in TS 23.502 4.3.2.2.1 are performed.
In step 420, the SMF decides that app assistant information is necessary. This can be done by receiving the app assistant information request in step 410 (i.e., triggered by the UE).
Alternatively, the SMF may decide that app assistant information is necessary by pre-configuration. That is, the app assistant information request is not included in the PDU Session Establishment Request in step 410. Instead, the SMF may recognize that the PDU session is established for AI/ML traffic, or another application traffic type that needs app assistant information, in at least four use cases as described below:
Since this disclosure is not limited to AI/ML traffic and also applies to other types of traffic that need app assistant information, the operator may define the 5QIs (or S-NSSAIs or SSTs or AppIDs) which need app assistant information. That is, the SMF may be pre-configured (e.g., by OAM) with the list of 5QIs (or S-NSSAIs or SSTs or AppIDs) which need app assistant information. Besides, the SMF may make the decision based on the subscription data obtained from the UDM. That is, the subscription data includes an indication of an application assistant information request for specific 5QIs (or S-NSSAIs or SSTs or AppIDs).
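A minimal sketch of this pre-configuration check follows: the SMF decides that app assistant information is needed when the PDU session's 5QI, S-NSSAI, SST/DNN or application identifier matches an operator-configured list (e.g., provisioned by OAM or derived from subscription data). All configuration values in the example are hypothetical.

```python
# Illustrative SMF-side pre-configuration check (all values are assumptions).

PRECONFIGURED_TRIGGERS = {
    "5qi": {80, 81},                     # example 5QIs needing app assistant info
    "snssai": {"01-AIML"},               # hypothetical S-NSSAI
    "dnn": {"aiml.example.dnn"},         # hypothetical DNN
    "appId": {"image-recognition-app"},  # hypothetical application identifier
}

def app_assistant_info_needed(five_qi, snssai, dnn, app_id):
    """Return True if any attribute of the PDU session matches the
    pre-configured trigger lists."""
    return (five_qi in PRECONFIGURED_TRIGGERS["5qi"]
            or snssai in PRECONFIGURED_TRIGGERS["snssai"]
            or dnn in PRECONFIGURED_TRIGGERS["dnn"]
            or app_id in PRECONFIGURED_TRIGGERS["appId"])

print(app_assistant_info_needed(80, "01-Other", "internet", "video-app"))  # True (5QI match)
```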
Steps 430, 440 and 450 are the same as steps 260, 270 and 280.
After step 450, steps 16-21 of PDU session establishment procedure described in TS 23.502 4.3.2.2.1 are performed.
After the PDU session is established, the AI/ML application in UE exchanges data with the AI/ML application server over the established PDU session in step 460.
In step 470, the AI/ML application server makes the decision for the AI/ML operation by taking into consideration the available inputs (e.g., the information provided by the UE in step 460 over the application layer, and the app assistant information provided by the SMF (from the NG-RAN node) in step 450). Step 470 is the same as step 290.
Similar to the first embodiment, according to the second embodiment, the app assistant information is triggered to be provided only once, which may be suitable only for the use case in which the AI/ML operation lasts for a short time.
According to a variant of the second embodiment, the AI/ML operation may last for a long time, e.g., several minutes or hours. This means that the NG-RAN node should provide the latest app assistant information to the AF periodically or upon a reporting condition being met.
As shown in Table 5, an indication of the data rate monitoring request is added in the Route Selection Descriptor associated with the AI/ML specific traffic descriptor. The data rate monitoring request indicates to the UE's higher layer (app layer or the UE's URSP handling layer) that the data rate monitoring request should be provided to the NAS layer.
Steps 505, 510 and 520 are almost the same as steps 405, 410 and 420 except that the app assistant information request is replaced with data rate monitoring request.
Steps 530, 560, 570 and 580 are the same as steps 360, 370, 380 and 390. Step 540 is the same as step 320. In step 550, AF may make an initial decision on downloading or splitting, the AI/ML model and optionally the splitting point based on UE's input (including the predicted UL/DL data rate provided by UE) in step 540.
In particular, in step 560, NG-RAN node performs data rate monitoring, and may report the predicted UL/DL data rate periodically, or when the reporting condition is met.
According to the second embodiment or the variant of the second embodiment, the app assistant information request (or data rate monitoring request), if provided from the UE to the SMF, is provided by using the PDU session establishment procedure. Alternatively, the app assistant information request (or data rate monitoring request) may be provided from the UE to the SMF by using the PDU session modification procedure. For example, the app assistant information request (or data rate monitoring request) may be contained in a NAS message in the PDU session modification procedure from the UE to the SMF.
According to a third embodiment, it is assumed that the gNB is triggered directly by UE to provide the app assistant information to 5GC (e.g., SMF) for AI/ML traffic, and 5GC further exposes the app assistant information to AF. In the third embodiment, a UE request for app assistant information is introduced with UE impact, which applies to new UEs.
In step 605, the AI/ML application requests connectivity. This step is the same as step 205.
In step 610, UE sends an UL RRC message with the app assistant information request to the NG-RAN node. The UL RRC message may also contain the NAS message, e.g., the PDU session establishment request. In step 610, the upper layer sends the app assistant information request to the RRC layer (or RRC entity). One way is that the upper layer (e.g., the app layer) provides the RRC layer with the app assistant information request directly, and the NAS layer provides the RRC layer with the NAS message. Alternatively, the NAS layer provides the RRC layer with both the app assistant information request and the NAS message.
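The sketch below illustrates, under assumed (non-ASN.1) field names, how the UE could piggyback the app assistant information request on the UL RRC message together with the NAS PDU; it is a simplified model of the layering described above, not an actual RRC encoding.

```python
from dataclasses import dataclass
from typing import Optional

# Illustrative UL RRC message carrying both the NAS PDU and the app
# assistant information request; field names are placeholders.
@dataclass
class UlRrcMessage:
    nas_pdu: bytes
    app_assist_info_request: Optional[dict] = None

def build_ul_rrc_message(nas_pdu: bytes, assist_request: Optional[dict]):
    """RRC layer combines the NAS message (from the NAS layer) with the
    app assistant information request (from the upper or NAS layer)."""
    return UlRrcMessage(nas_pdu=nas_pdu, app_assist_info_request=assist_request)

msg = build_ul_rrc_message(
    nas_pdu=b"pdu-session-establishment-request",
    assist_request={"requested": ["predicted_ul_rate", "predicted_dl_rate"]})
print(msg.app_assist_info_request)
```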
Steps 1-13 of the PDU session establishment procedure described in TS 23.502 clause 4.3.2.2.1 are performed after step 610.
Steps 620, 630, 640 and 650 are the same as steps 440, 450, 460 and 470.
Alternatively, the NG-RAN node can provide the app assistant information to UE instead of SMF. That is, the NG-RAN node may include the app assistant information in the DL RRC message towards UE. After UE's RRC layer obtains the app assistant information from the NG-RAN node, UE's RRC layer forwards the app assistant information to the upper layer (e.g., the app layer). UE's app layer may make the decision for the AI/ML operation at least according to the app assistant information. One possible way is that UE's app layer provides the RRC layer with the required UL/DL data rate. Alternatively, UE triggers the PDU session modification procedure by providing SMF with the required UL/DL data rate. Further alternatively, UE informs the AI/ML application server of the results over the application layer, and optionally provides the data rate monitoring request and/or the required UL/DL data rate. If the AI/ML application server makes the decision for the AI/ML operation, UE's app layer sends the app assistant information to the AI/ML application server over the application layer as shown in step 640. After that, the AI/ML application server makes the decision based on the app assistant information as well as other parameters (e.g., input from UE as shown in Table 4) provided by UE over the application layer.
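One way the UE's app layer could use the forwarded predicted UL data rate is to check it against the UL rate required by each candidate split point. The sketch below shows this comparison; the candidate table and the selection preference (choose the split with the highest required rate that still fits) are invented for the example and are not taken from the disclosure.

```python
# Illustrative app-layer decision at the UE: pick a split point whose
# required UL data rate fits within the predicted UL data rate, otherwise
# fall back to full on-device inference.
def choose_split_point(predicted_ul_mbps, candidates):
    """candidates: list of (split_point_name, required_ul_mbps) tuples."""
    feasible = [(name, req) for name, req in candidates
                if req <= predicted_ul_mbps]
    if not feasible:
        return None  # no split feasible: run inference fully on device
    # Prefer the feasible split with the highest required rate, assuming
    # earlier splits (larger intermediate tensors) offload more computation.
    return max(feasible, key=lambda item: item[1])[0]

candidates = [("after-conv1", 65.0), ("after-pool2", 24.0), ("after-fc6", 4.8)]
print(choose_split_point(predicted_ul_mbps=30.0, candidates=candidates))
# -> 'after-pool2'
```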
Similar to the first or second embodiment, according to the third embodiment the app assistant information is triggered to be provided only once, which may be suitable for the use case in which the AI/ML operation lasts for a short time.
According to a variant of the third embodiment, the AI/ML operation may last for a long time, e.g., several minutes or hours. This means that the NG-RAN node should provide the latest app assistant information to AF periodically or when the reporting condition is met.
Steps 705 and 710 are almost the same as steps 605 and 610, except that the app assistant information request is replaced with a data rate monitoring request.
The PDU session establishment procedure described in TS 23.502 clause 4.3.2.2.1 is performed after step 710.
Steps 720, 730, 740, 750 and 760 are the same as steps 540, 550, 560, 570 and 580.
Similar to the third embodiment, the NG-RAN node may provide the app assistant information to UE instead of SMF. In particular, each time the data rate is to be reported, it is reported to UE.
The method 800 comprises 802 receiving app assistant information from an NG-RAN node, wherein the app assistant information includes at least one of predicted UL data rate of UE and predicted DL data rate of UE; and 804 transmitting the app assistant information to an AF.
In one embodiment, the method further comprises transmitting an app assistant information request to the NG-RAN node. The app assistant information request may be constructed based on an app assistant information request obtained from another network entity, e.g., based on the app assistant information request included in PCC rules from PCF. The app assistant information request may alternatively be constructed based on a determination that the app assistant information is necessary for an established PDU session or QoS flow, e.g., based on a specific combination of at least one of 5QI, S-NSSAI, SST, DNN, and application identifier according to pre-configuration. The app assistant information request may further alternatively be constructed based on an app assistant information request received from UE. In some embodiments, the app assistant information request is contained in N2 SM information to be transmitted to the NG-RAN node via AMF.
The app assistant information request may include a request for UL/DL data rate, or a request for data rate monitoring. The request for data rate monitoring may include at least one of (1) type of monitoring, (2) reporting levels, (3) reporting periodicity, (4) reporting stop timer and (5) the required UL/DL data rate.
In some embodiments, the app assistant information is exposed directly to the AF or via NEF to the AF.
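To make the structures handled in method 800 concrete, the sketch below models the data rate monitoring request and the app assistant information as simple containers, and shows SMF-side exposure toward AF either directly or via NEF. Field and function names are placeholders chosen for readability, not normative 3GPP IE names.

```python
from dataclasses import dataclass
from typing import List, Optional, Tuple

# Illustrative container for the data rate monitoring request described above.
@dataclass
class DataRateMonitoringRequest:
    monitoring_type: str                                # e.g., "UL", "DL" or "UL_DL"
    reporting_levels: Optional[List[float]] = None      # thresholds in Mbit/s
    reporting_periodicity_s: Optional[float] = None
    reporting_stop_timer_s: Optional[float] = None
    required_ul_dl_rate_mbps: Optional[Tuple[float, float]] = None

# Illustrative container for the app assistant information received from NG-RAN.
@dataclass
class AppAssistInfo:
    predicted_ul_rate_mbps: Optional[float] = None
    predicted_dl_rate_mbps: Optional[float] = None

def expose_to_af(info: AppAssistInfo, via_nef: bool = False) -> dict:
    """SMF exposes the app assistant information to AF, directly or via NEF."""
    payload = {"predicted_ul_rate_mbps": info.predicted_ul_rate_mbps,
               "predicted_dl_rate_mbps": info.predicted_dl_rate_mbps}
    return {"route": "NEF" if via_nef else "direct", "payload": payload}

request = DataRateMonitoringRequest(monitoring_type="UL",
                                    reporting_periodicity_s=10.0,
                                    required_ul_dl_rate_mbps=(24.0, 0.0))
print(request.monitoring_type)
print(expose_to_af(AppAssistInfo(24.0, 100.0), via_nef=True))
```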
The method 900 comprises 902 receiving app assistant information from SMF, wherein the app assistant information includes at least one of predicted UL data rate of UE and predicted DL data rate of UE; and 904 determining operation of a traffic based at least on the received app assistant information.
In one embodiment, the method further comprises transmitting an app assistant information request to SMF. For example, the app assistant information request is transmitted to PCF so as to be included in PCC rules to the SMF. The app assistant information request may include a request for UL/DL data rate, or a request for data rate monitoring. The request for data rate monitoring may include at least one of (1) type of monitoring, (2) reporting levels, (3) reporting periodicity, (4) reporting stop timer and (5) the required UL/DL data rate.
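A minimal sketch of the AF-side determination in method 900 is given below: it chooses between split inference, downloading the model to the UE, and server-only inference based on the predicted UL/DL data rates. The decision rule and the threshold values are assumptions made for illustration, not prescribed by the disclosure.

```python
# Illustrative AF decision for the AI/ML operation based on the predicted
# UL/DL data rates received from SMF; thresholds are example values.
def decide_operation(predicted_ul_mbps, predicted_dl_mbps,
                     split_required_ul_mbps, download_required_dl_mbps):
    if predicted_ul_mbps >= split_required_ul_mbps:
        return "split-inference"          # UL can carry intermediate data
    if predicted_dl_mbps >= download_required_dl_mbps:
        return "download-model-to-ue"     # DL can carry the full model
    return "server-only-inference"

print(decide_operation(predicted_ul_mbps=10.0, predicted_dl_mbps=400.0,
                       split_required_ul_mbps=24.0,
                       download_required_dl_mbps=200.0))
# -> 'download-model-to-ue'
```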
The method 1000 comprises 1002 receiving an app assistant information request from SMF or UE; and 1004 transmitting app assistant information to the SMF or UE, wherein the app assistant information includes at least one of predicted UL data rate of UE and predicted DL data rate of UE.
In one embodiment, the app assistant information request is received via AMF by being included in N2 SM information provided by SMF.
In some embodiments, the app assistant information request includes a request for UL/DL data rate.
In some embodiments, the app assistant information request includes a request for data rate monitoring, wherein the method further comprises monitoring a data rate, and transmitting the app assistant information periodically or when the monitored data rate meets a condition. The request for data rate monitoring may include at least one of (1) type of monitoring, (2) reporting levels, (3) reporting periodicity, (4) reporting stop timer and (5) the required UL/DL data rate.
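The sketch below simulates, over discrete one-second ticks, how the reporting periodicity and the reporting stop timer of method 1000 could interact at the NG-RAN node; the simulation structure and parameter names are illustrative assumptions.

```python
# Illustrative NG-RAN reporting behaviour: periodic reports of the predicted
# data rates until the reporting stop timer expires, simulated per second.
def simulate_reports(periodicity_s, stop_timer_s, predictions):
    """predictions: list of (ul_mbps, dl_mbps), one entry per second.
    Returns the times (in seconds) at which a report would be sent."""
    report_times = []
    last_report = 0
    for t, _rates in enumerate(predictions, start=1):
        if t > stop_timer_s:
            break                      # stop timer expired: stop reporting
        if t - last_report >= periodicity_s:
            report_times.append(t)     # periodic report of predicted rates
            last_report = t
    return report_times

print(simulate_reports(periodicity_s=5, stop_timer_s=12,
                       predictions=[(30, 100)] * 20))  # -> [5, 10]
```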
The method 1100 comprises 1102 obtaining URSP rules including an app assistant information indication in the Route Selection Descriptor associated with the descriptor of a predetermined traffic; and 1104 transmitting an app assistant information request to SMF or an NG-RAN node. The URSP rules may be preconfigured in the UE. Alternatively, the URSP rules may be received from 5GS via the NG-RAN node, e.g., on UE's URSP handling layer. The predetermined traffic may be AI/ML traffic. When the app assistant information request is transmitted to SMF, UE's URSP handling layer sends an app assistant information request to the NAS layer, and the NAS layer includes the app assistant information request in a NAS message to SMF. When the app assistant information request is transmitted to the NG-RAN node, UE's URSP handling layer sends an app assistant information request to the RRC layer, and the RRC layer includes the app assistant information request in an RRC message to the NG-RAN node.
In some embodiments, the app assistant information request includes a request for UL/DL data rate, or a request for data rate monitoring.
In some embodiments, the method further comprises receiving app assistant information from the NG-RAN node, wherein the app assistant information includes at least one of predicted UL data rate of UE and predicted DL data rate of UE.
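The routing choice in method 1100 (NAS layer toward SMF versus RRC layer toward the NG-RAN node) can be sketched as follows; the message names are generic placeholders except for the PDU session establishment request, which is the NAS message named above.

```python
# Illustrative routing at the UE's URSP handling layer: hand the app
# assistant information request to the NAS layer (for SMF) or to the RRC
# layer (for the NG-RAN node).
def route_assist_request(request: dict, target: str) -> tuple:
    if target == "SMF":
        return ("NAS", {"nas_message": "PDU_SESSION_ESTABLISHMENT_REQUEST",
                        "app_assist_info_request": request})
    if target == "NG-RAN":
        return ("RRC", {"rrc_message": "UL_RRC_MESSAGE",
                        "app_assist_info_request": request})
    raise ValueError("unknown target")

print(route_assist_request({"requested": ["predicted_ul_rate"]}, "SMF")[0])
# -> 'NAS'
```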
The method 1200 comprises 1202 obtaining URSP rules including an exclusive indication in the Route Selection Descriptor associated with descriptor of a predetermined traffic; and 1204 establishing a new PDU session for the predetermined traffic based on the exclusive indication, without evaluating whether an existing PDU session matches any URSP rule for the predetermined traffic. The URSP rules may be preconfigured in the UE. Alternatively, the URSP rules may be received from 5GS via NG-RAN node, e.g. on UE's URSP handling layer. The predetermined traffic may be AI/ML traffic.
In some embodiments, UE's NAS layer indicates attributes of the established PDU session and the exclusive indication to UE's URSP handling layer. Then, for a newly detected application, UE's URSP handling layer will not evaluate the existing PDU session with the exclusive indication when that existing PDU session matches all components in the selected Route Selection Descriptor.
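A minimal sketch of this behaviour is shown below: an existing PDU session flagged with the exclusive indication is never reused for a newly detected application, so a new PDU session is established instead. The data structures are simplified placeholders.

```python
# Illustrative PDU session selection with the exclusive indication
# (method 1200): sessions flagged as exclusive are skipped, even if their
# attributes match the selected Route Selection Descriptor components.
def select_pdu_session(existing_sessions, selected_rsd):
    """existing_sessions: list of dicts with session attributes and an
    'exclusive' flag; selected_rsd: dict of RSD components to match."""
    for session in existing_sessions:
        if session.get("exclusive"):
            continue  # never reuse a session established as exclusive
        if all(session.get(k) == v for k, v in selected_rsd.items()):
            return session  # reuse a matching, non-exclusive session
    return None  # no reusable session: trigger new PDU session establishment

sessions = [{"dnn": "internet", "s_nssai": "SST=1", "exclusive": True}]
print(select_pdu_session(sessions, {"dnn": "internet", "s_nssai": "SST=1"}))
# -> None, so the UE establishes a new PDU session
```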
Referring to
The SMF comprises a processor; and a transceiver coupled to the processor, wherein the processor is configured to: receive, via the transceiver, app assistant information from an NG-RAN node, wherein the app assistant information includes at least one of predicted UL data rate of UE and predicted DL data rate of UE; and transmit, via the transceiver, the app assistant information to an AF.
In one embodiment, the processor is further configured to transmit, via the transceiver, an app assistant information request to the NG-RAN node. The app assistant information request may be constructed based on an app assistant information request obtained from another network entity, e.g., based on the app assistant information request included in PCC rules from PCF. The app assistant information request may alternatively be constructed based on a determination that the app assistant information is necessary for an established PDU session or QoS flow, e.g., based on a specific combination of at least one of 5QI, S-NSSAI, SST, DNN, and application identifier according to pre-configuration. The app assistant information request may further alternatively be constructed based on an app assistant information request received from UE. In some embodiments, the app assistant information request is contained in N2 SM information to be transmitted to the NG-RAN node via AMF.
The app assistant information request may include a request for UL/DL data rate, or a request for data rate monitoring. The request for data rate monitoring may include at least one of (1) type of monitoring, (2) reporting levels, (3) reporting periodicity, (4) reporting stop timer and (5) the required UL/DL data rate.
In some embodiments, the app assistant information is exposed directly to the AF or via NEF to the AF.
Referring to
The AF comprises a processor; and a transceiver coupled to the processor, wherein the processor is configured to: receive, via the transceiver, app assistant information from SMF, wherein the app assistant information includes at least one of predicted UL data rate of UE and predicted DL data rate of UE; and determine operation of a traffic at least based on the received app assistant information.
In one embodiment, the processor is further configured to transmit, via the transceiver, an app assistant information request to SMF. For example, the app assistant information request is transmitted to PCF so as to be included in PCC rules to the SMF. The app assistant information request may include a request for UL/DL data rate, or a request for data rate monitoring. The request for data rate monitoring may include at least one of (1) type of monitoring, (2) reporting levels, (3) reporting periodicity, (4) reporting stop timer and (5) the required UL/DL data rate.
Referring to
The NG-RAN node comprises a processor; and a transceiver coupled to the processor, wherein the processor is configured to: receive, via the transceiver, an app assistant information request from SMF or UE; and transmit, via the transceiver, app assistant information to the SMF or UE, wherein the app assistant information includes at least one of predicted UL data rate of UE and predicted DL data rate of UE.
In one embodiment, the app assistant information request is received via AMF by being included in N2 SM information provided by SMF.
In some embodiments, the app assistant information request includes a request for UL/DL data rate.
In some embodiments, the app assistant information request includes a request for data rate monitoring, wherein the processor is further configured to monitor a data rate, and transmit, via the transceiver, the app assistant information periodically or when the monitored data rate meets a condition. The request for data rate monitoring may include at least one of (1) type of monitoring, (2) reporting levels, (3) reporting periodicity, (4) reporting stop timer and (5) the required UL/DL data rate.
Referring to
The UE (implementing the method 1100 described above) comprises a processor; and a transceiver coupled to the processor, wherein the processor is configured to: obtain URSP rules including an app assistant information indication in the Route Selection Descriptor associated with the descriptor of a predetermined traffic; and transmit, via the transceiver, an app assistant information request to SMF or an NG-RAN node.
In some embodiments, the app assistant information request includes a request for UL/DL data rate, or a request for data rate monitoring.
In some embodiments, the processor is further configured to receive, via the transceiver, app assistant information from the NG-RAN node, wherein the app assistant information includes at least one of predicted UL data rate of UE and predicted DL data rate of UE.
The UE (implementing the method 1200 described above) comprises a processor; and a transceiver coupled to the processor, wherein the processor is configured to: obtain URSP rules including an exclusive indication in the Route Selection Descriptor associated with the descriptor of a predetermined traffic; and establish a new PDU session for the predetermined traffic based on the exclusive indication, without evaluating whether an existing PDU session matches any URSP rule for the predetermined traffic.
In some embodiments, UE's NAS layer indicates attributes of the established PDU session and the exclusive indication to UE's URSP handling layer. Then, for a newly detected application, UE's URSP handling layer will not evaluate the existing PDU session with the exclusive indication when that existing PDU session matches all components in the selected Route Selection Descriptor.
Layers of a radio interface protocol may be implemented by the processors. The memories are connected with the processors to store various pieces of information for driving the processors. The transceivers are connected with the processors to transmit and/or receive messages or information. Needless to say, each transceiver may be implemented as a transmitter to transmit information and a receiver to receive information.
The memories may be positioned inside or outside the processors and connected with the processors by various well-known means.
In the embodiments described above, the components and the features of the embodiments are combined in a predetermined form. Each component or feature should be considered as optional unless otherwise expressly stated. Each component or feature may be implemented without being associated with other components or features. Further, an embodiment may be configured by associating some components and/or features. The order of the operations described in the embodiments may be changed. Some components or features of any embodiment may be included in another embodiment or replaced with corresponding components or features of another embodiment. It is apparent that claims that are not explicitly cited in each other may be combined to form an embodiment or may be included as a new claim.
The embodiments may be implemented by hardware, firmware, software, or combinations thereof. In the case of implementation by hardware, the exemplary embodiments described herein may be implemented by using one or more application-specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field programmable gate arrays (FPGAs), processors, controllers, micro-controllers, microprocessors, and the like.
Embodiments may be practiced in other specific forms. The described embodiments are to be considered in all respects to be only illustrative and not restrictive. The scope of the invention is, therefore, indicated by the appended claims rather than by the foregoing description. All changes which come within the meaning and range of equivalency of the claims are to be embraced within their scope.