This application is based on and claims priority under 35 U.S.C. § 119 to Korean Patent Application No. 10-2022-0131360, filed on Oct. 13, 2022, in the Korean Intellectual Property Office, the disclosure of which is incorporated herein by reference in its entirety.
The present disclosure relates generally to a wireless communication system, and more particularly, to a method and an apparatus for providing artificial intelligence (AI)/machine learning (ML) media services.
5G mobile communication technologies define broad frequency bands such that high transmission rates and new services are possible, and can be implemented not only in “Sub 6 GHz” bands such as 3.5 GHz, but also in “Above 6 GHz” bands referred to as mmWave including 28 GHz and 39 GHz. In addition, it has been considered to implement 6G mobile communication technologies (referred to as Beyond 5G systems) in terahertz (THz) bands (for example, 95 GHz to 3 THz bands) in order to accomplish transmission rates fifty times faster than 5G mobile communication technologies and ultra-low latencies one-tenth of 5G mobile communication technologies.
At the beginning of the development of 5G mobile communication technologies, in order to support services and to satisfy performance requirements in connection with enhanced Mobile BroadBand (eMBB), Ultra Reliable Low Latency Communications (URLLC), and massive Machine-Type Communications (mMTC), there has been ongoing standardization regarding beamforming and massive MIMO for mitigating radio-wave path loss and increasing radio-wave transmission distances in mmWave, supporting numerologies (for example, operating multiple subcarrier spacings) for efficiently utilizing mmWave resources and dynamic operation of slot formats, initial access technologies for supporting multi-beam transmission and broadbands, definition and operation of BWP (BandWidth Part), new channel coding methods such as a LDPC (Low Density Parity Check) code for large amount of data transmission and a polar code for highly reliable transmission of control information, L2 pre-processing, and network slicing for providing a dedicated network specialized to a specific service.
Currently, there are ongoing discussions regarding improvement and performance enhancement of initial 5G mobile communication technologies in view of services to be supported by 5G mobile communication technologies, and there has been physical layer standardization regarding technologies such as V2X (Vehicle-to-everything) for aiding driving determination by autonomous vehicles based on information regarding positions and states of vehicles transmitted by the vehicles and for enhancing user convenience, NR-U (New Radio Unlicensed) aimed at system operations conforming to various regulation-related requirements in unlicensed bands, NR UE Power Saving, Non-Terrestrial Network (NTN) which is UE-satellite direct communication for providing coverage in an area in which communication with terrestrial networks is unavailable, and positioning.
Moreover, there has been ongoing standardization in air interface architecture/protocol regarding technologies such as Industrial Internet of Things (IIoT) for supporting new services through interworking and convergence with other industries, IAB (Integrated Access and Backhaul) for providing a node for network service area expansion by supporting a wireless backhaul link and an access link in an integrated manner, mobility enhancement including conditional handover and DAPS (Dual Active Protocol Stack) handover, and two-step random access for simplifying random access procedures (2-step RACH for NR). There also has been ongoing standardization in system architecture/service regarding a 5G baseline architecture (for example, service based architecture or service based interface) for combining Network Functions Virtualization (NFV) and Software-Defined Networking (SDN) technologies, and Mobile Edge Computing (MEC) for receiving services based on UE positions.
As 5G mobile communication systems are commercialized, connected devices that have been exponentially increasing will be connected to communication networks, and it is accordingly expected that enhanced functions and performances of 5G mobile communication systems and integrated operations of connected devices will be necessary. To this end, new research is scheduled in connection with eXtended Reality (XR) for efficiently supporting AR (Augmented Reality), VR (Virtual Reality), MR (Mixed Reality) and the like, 5G performance improvement and complexity reduction by utilizing AI and ML, AI service support, metaverse service support, and drone communication.
Furthermore, such development of 5G mobile communication systems will serve as a basis for developing not only new waveforms for providing coverage in terahertz bands of 6G mobile communication technologies, multi-antenna transmission technologies such as Full Dimensional MIMO (FD-MIMO), array antennas and large-scale antennas, metamaterial-based lenses and antennas for improving coverage of terahertz band signals, high-dimensional space multiplexing technology using OAM (Orbital Angular Momentum), and RIS (Reconfigurable Intelligent Surface), but also full-duplex technology for increasing frequency efficiency of 6G mobile communication technologies and improving system networks, AI-based communication technology for implementing system optimization by utilizing satellites and AI from the design stage and internalizing end-to-end AI support functions, and next-generation distributed computing technology for implementing services at levels of complexity exceeding the limit of UE operation capability by utilizing ultra-high-performance communication and computing resources.
According to an embodiment, a method performed by a user equipment (UE) in a wireless communication system is provided. The method includes receiving, from a media resource function (MRF) entity, a session description protocol (SDP) offer including a list of AI models, identifying at least one AI model from the list for outputting at least one result using first media data, based on a type of the first media data and a media service in which the at least one result is used, transmitting, to the MRF entity, an SDP response for requesting the at least one AI model as a response to the SDP offer, and processing the first media data based on the at least one AI model received from the MRF entity.
According to an embodiment, a UE in a wireless communication system is provided. The UE includes a transceiver and a controller coupled with the transceiver. The controller is configured to receive, from an MRF entity, an SDP offer including a list of AI models, identify at least one AI model from the list for outputting at least one result using first media data, based on a type of the first media data and a media service in which the at least one result is used, transmit, to the MRF entity, an SDP response for requesting the at least one AI model as a response to the SDP offer, and process the first media data based on the at least one AI model received from the MRF entity.
According to an embodiment, a method performed by an MRF entity in a wireless communication system is provided. The method comprises transmitting, to a UE, an SDP offer including a list of AI models, and receiving, from the UE, an SDP response for requesting at least one AI model from the list for outputting at least one result using first media data as a response to the SDP offer. The at least one AI model is based on a type of the first media data and a media service in which the at least one result is used.
According to an embodiment, an MRF entity in a wireless communication system is provided. The MRF entity includes a transceiver and a controller coupled with the transceiver. The controller is configured to transmit, to a UE, an SDP offer including a list of AI models, and receive, from the UE, an SDP response for requesting at least one AI model from the list for outputting at least one result using first media data as a response to the SDP offer. The at least one AI model is based on a type of the first media data and a media service in which the at least one result is used.
The above and other aspects, features, and advantages of certain embodiments of the disclosure will be more apparent from the following description taken in conjunction with the accompanying drawings.
The following description with reference to the accompanying drawings is provided to assist in a comprehensive understanding of various embodiments of the present disclosure as defined by the claims and their equivalents. It includes various specific details to assist in that understanding but these are to be regarded as merely exemplary. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications of the various embodiments described herein can be made without departing from the scope and spirit of the present disclosure. In addition, descriptions of well-known functions and constructions may be omitted for clarity and conciseness.
The terms and words used in the following description and claims are not limited to the bibliographical meanings, but are merely used by the inventor to enable a clear and consistent understanding of the present disclosure. Accordingly, it should be apparent to those skilled in the art that the following description of various embodiments of the present disclosure is provided for illustration purposes only and not for the purpose of limiting the present disclosure as defined by the appended claims and their equivalents.
Singular forms “a,” “an,” and “the” include plural referents unless the context clearly dictates otherwise. Thus, for example, reference to “a component surface” includes reference to one or more of such surfaces.
The term “include” or “may include” refers to the existence of a corresponding disclosed function, operation or component which can be used in various embodiments of the present disclosure and does not limit one or more additional functions, operations, or components. The terms such as “include” and/or “have” may be construed to denote a certain characteristic, number, step, operation, constituent element, component or a combination thereof, but may not be construed to exclude the existence of or a possibility of addition of one or more other characteristics, numbers, steps, operations, constituent elements, components or combinations thereof.
The term “or” used in various embodiments of the present disclosure includes any or all of combinations of listed words. For example, the expression “A or B” may include A, may include B, or may include both A and B.
Unless defined differently, all terms used herein, which include technical terminologies or scientific terminologies, have the same meaning as that understood by a person skilled in the art to which the present disclosure belongs. Such terms as those defined in a generally used dictionary are to be interpreted to have the meanings equal to the contextual meanings in the relevant field of art, and are not to be interpreted to have ideal or excessively formal meanings unless clearly defined in the present disclosure.
Embodiments of the disclosure relate to 5G network systems for multimedia, architectures and procedures for AI/ML model transfer and delivery over 5G, AI/ML model transfer and delivery over 5G for AI enhanced multimedia services, AI/ML model selection and transfer over IMS, and AI/ML enhanced conversational services over IMS. Embodiments also relate to SDP signaling for AI/ML model delivery and AI multimedia, and time synchronization of an AI model (including AI data) and media data (video and audio) for AI media conversation/streaming services.
AI is a general concept defining the capability of a system to act based on two major conditions. The first condition is the context in which a task is performed (i.e., the value or state of different input parameters). The second condition is the past experience of achieving the same task with different parameter values and the record of potential success with each parameter value.
ML is often described as a subset of AI, in which an application has the capacity to learn from past experience. This learning feature usually starts with an initial training phase to ensure a minimum level of performance when the application is placed into service.
Recently, AI/ML has been introduced and generalized in media-related applications, ranging from image classification and speech/face recognition to more complex applications such as video quality enhancement. Additionally, AI applications for augmented reality (AR)/virtual reality (VR) have become ever more popular, especially applications regarding the enhancement of photo-realistic avatars related to facial three-dimensional (3D) modelling or similar applications. As research into this field matures, more and more complex AI/ML-based applications requiring higher computational processing can be expected. Such processing involves dealing with significant amounts of data, not only for the inputs and outputs of the AI/ML models, but also because of the increasing data size and complexity of the AI/ML models themselves. This growing amount of AI/ML-related data, together with a need to support processing-intensive mobile applications (e.g., VR, AR/mixed reality (MR), gaming, and more), highlights the importance of handling certain aspects of AI/ML processing by the server over the 5G system, in order to meet the latency requirements of various applications.
Current implementations of AI/ML are enabled via applications without compatibility with other market solutions. In order to support AI/ML for multimedia applications over 5G, AI/ML models should support compatibility between UE devices and application providers from different mobile network operators (MNOs). AI/ML model delivery for AI/ML media services should support selection and delivery of the AI/ML model based on media context, UE status, and network status. The processing power of UE devices is also a limitation for AI/ML media services, since next-generation media services, such as AR, are typically consumed on lightweight, low-processing-power devices, such as AR glasses, for which long battery life is also a major design hurdle/limitation. Another limitation of current technology is the lack of a suitable method to configure the sending of AI/ML models and their associated data via IMS between two supporting clients (e.g., two UEs, or a UE and an MRF). For many media applications which have a dynamic characteristic, such as conversational media services, or even streaming media services, the introduction of AI/ML also raises an issue of synchronization between the media data streams and the AI/ML model data streams, since the AI/ML model data may change dynamically according to the specific characteristics of the media to be processed. In summary:
How to enable clients (e.g., MRF or UE) to identify and select data streams to be used for a given AI/ML media service using IMS? Such streams specifically include video, audio, and AI/ML model data.
How to synchronize multiple streams delivered using RTP and SCTP, for a given AI/ML media service?
Embodiments provide delivery of AI/ML models and associated data for conversational video and audio. By defining new parameters for SDP signaling, a receiver may request only the AI/ML models that are required for the conversational service at hand.
In order to request such AI/ML models, the receiving client must be able to identify which AI/ML models are associated with the desired media data streams (e.g., video or audio), since these models are typically customized to the media stream when prepared by a content provider. In addition, more than one AI/ML model may be available for the AI processing of a certain media stream, in which case the receiving client may select a desired AI/ML model according to its capabilities, resources, or other relevant factors.
Embodiments enable UE capability, service requirement driven AI/ML model identification, selection, delivery and inference between network (MRF) and UE for conversational or real-time multimedia telephony services using IMS (MTSI). Embodiments also enable synchronization of multiple streams (e.g., video, audio, AI/ML model data) delivered using RTP and stream control transmission protocol (SCTP), for a given AI/ML media service.
However, since a packet-switched network was introduced in 4G, the voice codec is installed only in the terminal, and the voice frame compressed at intervals of 20 ms is not restored at the base station or at a network node located in the middle of the transmission path, but is instead transmitted to the counterpart terminal.
The receiving terminal selects the acceptable bit rate and the transmission method from among the bit rates proposed by the transmitting terminal. For an AI-based conversational service, the receiving terminal may also select the desired configuration of AI inferencing (together with the required AI models and possible intermediate data) from that offered by the sending terminal, including this information in an SDP answer message in the SIP 183 message in order to transmit the SDP answer message to the transmitting terminal. In this case, the sending terminal may be an MRF instead of a UE device. In the process of transmitting this message to the transmitting terminal, each IMS node starts to reserve the transmission resources of the wired and wireless networks required for this service, and all the conditions of the session are agreed through additional procedures. A transmitting terminal that confirms that transmission resources of all transmission sections are secured transmits the 360 fisheye video images to the receiving terminal.
At step 1, UE #1 inserts the codec(s) into an SDP payload. The inserted codec(s) shall reflect UE #1's terminal capabilities and user preferences for this session. UE #1 builds an SDP containing the bandwidth requirements and characteristics of each media flow, and assigns local port numbers for each possible media flow. Multiple media flows may be offered, and for each media flow (m=line in SDP), there may be multiple codec choices offered (a sketch of this offer/answer handling is given after this flow).
At step 2, UE #1 sends the initial INVITE message to P-CSCF #1 containing this SDP.
At step 3, P-CSCF #1 examines the media parameters. If P-CSCF #1 finds media parameters not allowed to be used within an IMS session (based on P-CSCF local policies, or if available bandwidth authorization limitation information coming from the PCRF/PCF), it rejects the session initiation attempt. This rejection shall contain sufficient information for the originating UE to re-attempt session initiation with media parameters that are allowed by local policy of P-CSCF #1's network according to the procedures specified in IETF RFC 3261 [12].
Whether the P-CSCF should interact with PCRF/PCF in this step is based on operator policy.
At step 4, P-CSCF #1 forwards the INVITE message to S-CSCF #1.
At step 5, S-CSCF #1 examines the media parameters. If S-CSCF #1 finds media parameters that local policy or the originating user's subscriber profile does not allow to be used within an IMS session, it rejects the session initiation attempt. This rejection shall contain sufficient information for the originating UE to re-attempt session initiation with media parameters that are allowed by the originating user's subscriber profile and by local policy of S-CSCF #1's network according to the procedures specified in IETF RFC 3261 [12].
At step 6, S-CSCF #1 forwards the INVITE, through the S-S Session Flow Procedures, to S-CSCF #2.
At step 7, S-CSCF #2 examines the media parameters. If S-CSCF #2 finds media parameters that local policy or the terminating user's subscriber profile does not allow to be used within an IMS session, it rejects the session initiation attempt. This rejection shall contain sufficient information for the originating UE to re-attempt session initiation with media parameters that are allowed by the terminating user's subscriber profile and by local policy of S-CSCF #2's network according to the procedures specified in IETF RFC 3261 [12].
At step 8, S-CSCF #2 forwards the INVITE message to P-CSCF #2.
At step 9, P-CSCF #2 examines the media parameters. If P-CSCF #2 finds media parameters not allowed to be used within an IMS session (based on P-CSCF local policies, or if available bandwidth authorization limitation information coming from the PCRF/PCF), it rejects the session initiation attempt. This rejection shall contain sufficient information for the originating UE to re-attempt session initiation with media parameters that are allowed by local policy of P-CSCF #2's network according to the procedures specified in IETF RFC 3261 [12].
Whether the P-CSCF should interact with PCRF/PCF in this step is based on operator policy.
At step 10, P-CSCF #2 forwards the INVITE message to UE #2.
At step 11, UE #2 determines the complete set of codecs that it is capable of supporting for this session. It determines the intersection with those appearing in the SDP in the INVITE message. For each media flow that is not supported, UE #2 inserts an SDP entry for media (m=line) with port=0. For each media flow that is supported, UE #2 inserts an SDP entry with an assigned port and with the codecs in common with those in the SDP from UE #1.
At step 12, UE #2 returns the SDP listing common media flows and codecs to P-CSCF #2.
At step 13, P-CSCF #2 authorizes the QoS resources for the remaining media flows and codec choices.
At step 14, P-CSCF #2 forwards the SDP response to S-CSCF #2.
At step 15, S-CSCF #2 forwards the SDP response to S-CSCF #1.
At step 16, S-CSCF #1 forwards the SDP response to P-CSCF #1.
At step 17, P-CSCF #1 authorizes the QoS resources for the remaining media flows and codec choices.
At step 18, P-CSCF #1 forwards the SDP response to UE #1.
At step 19, UE #1 determines which media flows should be used for this session, and which codecs should be used for each of those media flows. If there was more than one media flow, or if there was more than one choice of codec for a media flow, then UE #1 needs to renegotiate the codecs by sending another offer to UE #2 to reduce the choice to a single codec per media flow.
At steps 20-24, UE #1 sends the “Offered SDP” message to UE #2, along the signaling path established by the INVITE request.
The remainder of the multi-media session completes identically to a single media/single codec session, if the negotiation results in a single codec per media.
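For illustration, the codec negotiation of steps 1 and 11 above can be summarized in the following minimal Python sketch. The codec names and ports are examples, codecs are compared by their rtpmap names, and dynamic payload-type numbering is elided; this is an illustrative sketch, not a complete SDP implementation.

```python
# Illustrative sketch of steps 1 and 11: UE #1 offers media flows with
# candidate codecs; UE #2 intersects them with its own capabilities and
# rejects unsupported flows with port=0.

def build_answer(offer, local_codecs, local_ports):
    """offer: list of (media_type, port, [codec names]), one entry per m-line."""
    answer = []
    for media_type, _, offered in offer:
        common = [c for c in offered if c in local_codecs.get(media_type, set())]
        if common:
            answer.append((media_type, local_ports[media_type], common))
        else:
            answer.append((media_type, 0, []))  # port=0: media flow rejected
    return answer

offer = [
    ("video", 49170, ["H264/90000", "HEVC/90000"]),  # step 1: UE #1's offer
    ("audio", 49172, ["EVS/16000", "AMR-WB/16000"]),
]
answer = build_answer(
    offer,
    local_codecs={"video": {"H264/90000"}, "audio": {"EVS/16000"}},
    local_ports={"video": 51372, "audio": 51374},
)
print(answer)  # [('video', 51372, ['H264/90000']), ('audio', 51374, ['EVS/16000'])]
```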
Herein, AI inference/inferencing refers to the use of a trained AI neural network to yield results: input data is fed into the neural network, which consequently returns output results. During the AI training phase, the neural network is trained with multiple data sets in order to develop intelligence; once trained, the neural network is run, or "inferenced", using an inference engine, by feeding input data into it. The intelligence gathered and stored in the trained neural network during the learning stage is used to understand such new input data.
Typical examples of AI inferencing for multimedia applications include feeding low resolution video into a trained AI neural network, which is inferenced to output high resolution video (AI upscaling), and feeding video into a trained AI neural network, which is inferenced to output labels for facial recognition in the video (AI facial recognition).
Many AI applications for multimedia involve machine-vision-based scenarios where object recognition is a key part of the output result from AI inferencing.
The syntax and semantics in Table 1 define an SDP attribute, 3gpp_AImedia, which is included under the m-line of any media stream (e.g., video or audio) for which there is an associated AI/ML model (or models) that should be used for AI processing of the media stream. This attribute is used to identify which AI model(s) and data are relevant to the media (e.g., video or audio) data stream of the m-line under which the attribute appears. By the nature of the syntax and semantics defined in Table 1, this attribute can be used by a sending entity (e.g., MRF) to provide a list of possible configurations for AI/ML processing of the corresponding media stream (as specified by the m-line), from which a receiving entity (e.g., UE) can identify and select its required/desired configuration through the selection of one or more AI/ML models listed in the SDP offer. The selected models are then included in the SDP answer under the same attribute, and sent to the sending entity.
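For illustration, the following minimal Python sketch shows how a receiving entity might parse such an offer and select one model. Since Table 1 is not reproduced here, the parameter layout of the 3gpp_AImedia attribute below (id, task, and complexity fields) is a hypothetical stand-in, not the normative syntax.

```python
# Hypothetical attribute layout: each semicolon-separated entry describes one
# offered AI/ML model; the id of the chosen model is echoed in the SDP answer.
OFFERED_ATTR = ("a=3gpp_AImedia: id=1 task=upscaling complexity=high; "
                "id=2 task=upscaling complexity=low")

def parse_ai_media(attr):
    body = attr.split(":", 1)[1]
    models = []
    for entry in body.split(";"):
        fields = dict(tok.split("=") for tok in entry.split())
        models.append(fields)
    return models

def select_model(models, max_complexity):
    # Pick the first offered model whose complexity the UE can afford.
    order = {"low": 0, "medium": 1, "high": 2}
    for m in models:
        if order[m["complexity"]] <= order[max_complexity]:
            return m["id"]
    return None

models = parse_ai_media(OFFERED_ATTR)
chosen = select_model(models, max_complexity="low")
print(f"a=3gpp_AImedia: id={chosen}")  # selection included in the SDP answer
```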
Synchronization of RTP and SCTP streams is defined herein as the case where the RTP source(s) and SCTP source(s) use the same clock. When media is delivered via RTP, and AI model data is delivered via SCTP, media captured at time t of the RTP stream is intended to be fed into the AI model data defined at time t of the SCTP stream.
According to an embodiment, at step 1011, the receiving entity may receive an SDP offer containing m-lines (video or audio) with the 3gpp_AImedia attribute.
At step 1013, the receiving entity may identify whether there is more than one AI/ML model associated with each m-line.
At step 1015, in case that more than one AI model associated with the media data does not exist, the receiving entity may identify whether the AI/ML model is already available on the UE. For example, the receiving entity may identify whether the AI/ML model suitable for processing the media data is already stored in the receiving entity.
At step 1017, in case that the AI/ML model available on the UE does not exist, the receiving entity may request the AI/ML model data stream from the sending client, by including the corresponding data channel sub-protocol attribute, with the AI/ML model identified through its id in the 3gpp_AImedia and 3gpp_AImodel attributes, in the SDP answer to the sending client.
At step 1019, the receiving entity may receive the AI/ML model through the data channel stream.
At step 1021, in case that the AI/ML model available on the UE exists, the receiving entity may use corresponding AI/ML model(s) to perform AI processing of the media stream. For example, the receiving entity may process the data which is delivered to the receiving entity via the media data stream, based on the AI/ML model corresponding to the media data.
At step 1023, in case that more than one AI model associated with the media data exists, the receiving entity may decide which AI/ML models are suitable by parsing parameters under 3gpp_AImedia. For example, the parameters may include task results, a device capability, service/application requirements, device/network resources and/or other factors. For example, the task results may depend on a device capability, service/application requirements, device/network resources and/or other factors.
At step 1025, the receiving entity may identify whether suitable models are already available on the UE.
At step 1027, in case that the suitable models available on the UE do not exist, the receiving entity may parse m-lines containing the 3gpp_AImodel attribute, signifying available AI models at the sending entity. The receiving entity may select the required AI models via the corresponding data channel m-line (selection based on parameters under the same attribute). For example, the receiving entity may identify AI models based on the 3gpp_AImodel attribute including information on task results, a device capability, service/application requirements, device/network resources and/or other factors.
At step 1029, the receiving entity may request the AI/ML model data streams from the sending client, by including the corresponding data channel sub-protocol attribute, with the AI/ML models identified through their ids in the 3gpp_AImedia and 3gpp_AImodel attributes, in the SDP answer to the sending client.
At step 1031, the receiving entity may receive the AI/ML models through data channel streams.
Step 1021 may be performed by the receiving entity after performing step 1025 (in case that the suitable models available on the UE exist) or step 1031. This decision flow is sketched below.
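The following is a minimal Python sketch of the receiver-side decision flow in steps 1011 through 1031, assuming a simple in-memory model cache; attribute parsing and the suitability checks of step 1023 are abstracted into placeholders.

```python
# Sketch of the receiver-side decision flow; model ids are illustrative.

def is_suitable(model_id):
    # Placeholder for the capability/resource checks described in step 1023.
    return True

def handle_sdp_offer(m_lines, local_model_cache):
    """m_lines: list of dicts with 'mid' and 'ai_models' (ids offered per m-line).
    Returns the model ids to request in the SDP answer."""
    requested = []
    for m in m_lines:
        offered = m["ai_models"]
        if len(offered) > 1:
            # Step 1023: choose suitable models from the 3gpp_AImedia parameters.
            suitable = [mid for mid in offered if is_suitable(mid)]
        else:
            suitable = offered
        # Steps 1015/1025: check which suitable models are already on the UE.
        missing = [mid for mid in suitable if mid not in local_model_cache]
        # Steps 1017/1027-1029: request absent models over the data channel.
        requested.extend(missing)
        # Step 1021: cached models can be used for AI processing directly.
    return requested

requested = handle_sdp_offer(
    [{"mid": "0", "ai_models": ["1", "2"]}],
    local_model_cache={"1"},
)
print(requested)  # ['2']
```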
The syntax and semantics in Table 2 are an example of a grouping attribute mechanism to enable the grouping of data streams for a real-time AI media service as described above.
In an embodiment, the SDP offer may contain at least one group attribute which defines the grouping of RTP streams and SCTP streams. In one example, SCTP streams carry AI model data and RTP streams carry media data. Each group defined by this attribute contains information to identify exactly one media stream and at least one associated AI model stream. The media RTP stream is identified through the mid under the media stream's corresponding m-line, and each AI model SCTP stream is identified through the mid together with the dcmap-stream-id parameter.
In another embodiment, each group defined by this attribute may contain multiple media streams, as well as multiple AI model streams.
In a further embodiment, each group defined by this attribute may contain only one media stream, and only one AI model stream.
For the grouping mechanisms defined above, RTP streams and SCTP streams may be synchronized according to the mechanisms defined below.
In one embodiment, RTP streams and SCTP streams are assumed to be synchronized if associated under the same group attribute defined above.
In another embodiment, RTP streams and SCTP streams are assumed to be synchronized only if the <synchronized> parameter exists under the RTP media stream m-lines, even if the RTP streams and SCTP streams are associated under the same group attribute.
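For illustration, the following hypothetical SDP fragment (expressed as a Python string for consistency with the other sketches) shows how such a grouping might look. Since Table 2 is not reproduced here, the grouping semantics token ("AIMG") and the data channel sub-protocol name are assumptions, not normative values; the dcmap attribute follows the data channel SDP negotiation convention.

```python
# Hypothetical grouping of one RTP media stream (mid 0) with one SCTP data
# channel stream (mid 1) that carries AI model data on stream id 2.
sdp_fragment = "\r\n".join([
    "a=group:AIMG 0 1",                        # assumed grouping of mids 0 and 1
    "m=video 49170 RTP/AVP 96",
    "a=mid:0",
    "a=3gpp_AImedia: id=1 ... synchronized",   # other parameters elided
    "m=application 50000 UDP/DTLS/SCTP webrtc-datachannel",
    "a=mid:1",
    "a=dcmap:2 subprotocol=\"3gpp-AImodel\"",  # stream id 2 carries model data
])
print(sdp_fragment)
```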
Since both video and audio media data change with time, the AI model and data used for its AI processing may also change dynamically to match these time characteristics. The interval at which an AI model and its AI data may change depends on the specific characteristics of the media, for example, per frame, per GoP, or per determined scene within a movie, or it may also be changed according to an arbitrary time period (e.g., every 30 seconds).
For the dynamically changing AI model and AI data as described above, it is necessary for the media streams and corresponding AI model/AI data streams to be time synchronized. At the receiving entity (UE), only when the two streams are synchronized will it be able to calculate which AI model and which related data should be used to process the media at a given time. Synchronization between media and AI model data streams is indicated by the <synchronized> parameter under the 3gpp_AImedia attribute under the media m-line, as described in Table 1. A mechanism by which the associated media and AI model streams can be synchronized is described below.
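For illustration, once the two streams share a timeline, the receiver's model lookup can be sketched as follows; the update list and the 90 kHz clock are illustrative assumptions.

```python
# Sketch of the lookup implied above: the receiver picks the AI model whose
# validity interval covers each media frame's timestamp.
import bisect

def model_at(model_updates, frame_ts):
    """model_updates: sorted list of (rtp_timestamp, model_id) taken from the
    SCTP stream; returns the model active at frame_ts, or None before the first."""
    times = [t for t, _ in model_updates]
    i = bisect.bisect_right(times, frame_ts) - 1
    return model_updates[i][1] if i >= 0 else None

updates = [(0, "modelA"), (90_000, "modelB")]  # modelB from t = 1 s (90 kHz clock)
print(model_at(updates, 45_000))   # modelA
print(model_at(updates, 135_000))  # modelB
```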
Table 3 shows the SCTP packet structure, which consists of a common header and multiple data chunks.
Table 4 shows the format of the payload data chunk, where a payload protocol identifier is used to identify the data present in the data chunk (registered with IANA on a first-come, first-served basis).
A payload protocol identifier (or multiple identifiers) may be defined and specified to identify the different embodiments of synchronization data chunks for SCTP as defined subsequently herein. For example, a payload protocol identifier using a previously unassigned value, such as 74, defined as "3GPP AI4Media over SCTP", identifies one of the embodiments of the synchronization payload data chunk.
In one embodiment of this invention, an SCTP synchronization payload data chunk is defined as shown in Table 5. The syntax and semantics of timestamp, SSRC, and CSRC fields are defined as shown in Table 6.
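For illustration, a minimal Python sketch of packing such a chunk payload follows. Since Tables 5 and 6 are not reproduced here, the field layout (a 32-bit RTP-style timestamp, an SSRC, and a single CSRC preceding the model data) is an assumed simplification, packed in network byte order as SCTP requires.

```python
import struct

# Assumed layout of one synchronization payload data chunk: timestamp, SSRC,
# and one CSRC (each 32 bits, network byte order), followed by AI model data.
PPID_3GPP_AI4MEDIA = 74  # example PPID value suggested in the text

def pack_sync_chunk(timestamp, ssrc, csrc, model_payload: bytes) -> bytes:
    header = struct.pack("!III", timestamp, ssrc, csrc)
    return header + model_payload

chunk = pack_sync_chunk(timestamp=123_456_789, ssrc=0x1234ABCD,
                        csrc=0x0, model_payload=b"\x00\x01")
print(len(chunk), "bytes, PPID", PPID_3GPP_AI4MEDIA)
```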
Similar to RTP stream packets containing media data, through the use of these fields in the SCTP packet, as well as the related timestamp fields in the sender report RTCP packet (notably the NTP timestamp and RTP timestamp fields), which in this embodiment are considered to apply to the SCTP packets in the same way as to the RTP packets, the SCTP stream carrying the AI model and AI data can be synchronized to the associated media data RTP stream(s).
In another embodiment, an SCTP synchronization payload data chunk is defined as shown in Table 7. The syntax and semantics of SSRC of sender, NTP timestamp and RTP timestamp fields are defined as shown in Table 8 below.
By indicating the exact values of the NTP timestamp and RTP timestamp at which the SCTP data packet was sent, SCTP packets in the SCTP stream can be synchronized with the associated RTP media streams, by using the same NTP timestamp as indicated in the sender report RTCP packets.
The same values of the NTP timestamp and RTP timestamp are used as in the sender report RTCP packet.
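For illustration, the following Python sketch maps a wall-clock (NTP) instant onto the RTP timestamp scale using the (NTP, RTP) anchor from the latest sender report; the 90 kHz clock rate is an illustrative assumption, typical for video.

```python
RTP_CLOCK_HZ = 90_000  # assumed RTP clock rate

def rtp_time_for(ntp_seconds, sr_ntp_seconds, sr_rtp_timestamp):
    """Map a wall-clock (NTP) instant onto the RTP timestamp scale using the
    (NTP, RTP) anchor from the latest RTCP sender report."""
    elapsed = ntp_seconds - sr_ntp_seconds
    return (sr_rtp_timestamp + round(elapsed * RTP_CLOCK_HZ)) & 0xFFFFFFFF

# Example: an SCTP sync chunk stamped 0.5 s after the last sender report maps
# to media 45,000 RTP ticks later, so the receiver applies the AI model to
# frames whose RTP timestamp is at or beyond that value.
print(rtp_time_for(100.5, 100.0, 10_000))  # 55000
```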
In another embodiment, the SCTP synchronization payload data chunk may contain only the NTP timestamp, which is matched to the sender report RTCP packets from the associated media data RTP streams.
Referring to the accompanying figure, the UE 1401 may first receive, from the MRF 1402, an SDP offer including a list of AI models.
For example, the at least one result includes at least one of object recognition, increasing the resolution of images, or language translation.
At step 1413, the UE 1401 may identify at least one AI model for outputting at least one result using the first media data from the list, based on the type of the first media data and the media service in which the at least one result is used.
At step 1415, the UE 1401 may transmit, to the MRF 1402, an SDP response as a response to the SDP offer. For example, the SDP response may be for requesting the at least one AI model. As another example, the MRF 1402 may receive the SDP response for requesting the at least one AI model from the UE 1401.
At step 1417, the MRF 1402 may transmit at least one AI data and at least one media data including the first media data. For example, the at least one AI model requested by the UE is transmitted to the UE 1401, the at least one AI data for the at least one AI model is transmitted to the UE 1401, and the at least one media data, which includes the first media data and is used for outputting the at least one result, is transmitted to the UE 1401.
At step 1419, the UE 1401 may group the at least one AI data stream in which the at least one AI model and the at least one AI data are received, and the at least one media data stream in which the first media data is received.
At step 1421, the UE 1401 may synchronize the at least one AI data stream and the at least one media data stream. For example, the UE 1401 may synchronize the at least one AI data stream and the at least one media data stream based on information on timestamps.
At step 1423, the UE 1401 may process the first media data based on the at least one AI model. For example, the UE 1401 may output the at least one result (e.g., high resolution) by processing the first media data via the at least one AI model.
The term MRF 1402 may also be referred to as an entity for MRF or an MRF entity.
A method performed by a user equipment (UE) in a wireless communication system is provided. The method comprises receiving, from a media resource function (MRF) entity, a session description protocol (SDP) offer comprising a list of artificial intelligence (AI) models for outputting results used for media services, identifying at least one AI model, from the list, for outputting at least one result using first media data, based on a type of the first media data and a media service in which the at least one result is used, transmitting, to the MRF entity, an SDP response for requesting the at least one AI model as a response to the SDP offer and processing the first media data based on the at least one AI model received from the MRF entity.
The at least one AI model for outputting the at least one result is identified based on a UE capability for an AI model and network resources for the at least one AI model.
The SDP offer comprises information for grouping at least one AI data stream in which the at least one AI model is received and at least one media data stream in which the first media data is received, and the at least one AI data stream and the at least one media data stream are synchronized in time.
The processing of the first media data based on the at least one AI model comprises receiving, from the MRF entity, intermediate data that is based on the first media data and processing the intermediate data based on the at least one AI model received from the MRF entity, and the at least one result includes at least one of object recognition, increasing resolution, or language translation.
The method further comprises, in case that the AI models of the SDP offer are not mapped with the type of the first media data and the media service, identifying a first AI model stored in the UE and mapped with the type of the first media data and the media service, and processing the first media data based on the first AI model.
A user equipment (UE) in a wireless communication system is provided. The UE comprises a transceiver and a controller coupled with the transceiver and configured to receive, from a media resource function (MRF) entity, a session description protocol (SDP) offer comprising a list of artificial intelligence (AI) models for outputting results used for media services, identify at least one AI model, from the list, for outputting at least one result using first media data, based on a type of the first media data and a media service in which the at least one result is used, transmit, to the MRF entity, an SDP response for requesting the at least one AI model as a response to the SDP offer, and process the first media data based on the at least one AI model received from the MRF entity.
The at least one AI model for outputting the at least one result is identified based on a UE capability for an AI model and network resources for the at least one AI model.
The SDP offer comprises information for grouping at least one AI data stream in which the at least one AI model is received and at least one media data stream in which the first media data is received, and the at least one AI data stream and the at least one media data stream are synchronized in time.
The controller is further configured to receive, from the MRF entity, intermediate data that is based on the first media data, and process the intermediate data based on the at least one AI model received from the MRF entity, and the at least one result includes at least one of object recognition, increasing resolution, or language translation.
The controller is further configured to, in case that the AI models included in the SDP offer are not mapped with the type of the first media data and the media service, identify a first AI model stored in the UE and mapped with the type of the first media data and the media service, and process the first media data based on the first AI model.
A method performed by a media resource function (MRF) entity in a wireless communication system is provided. The method comprises transmitting, to a user equipment (UE), a session description protocol (SDP) offer comprising a list of artificial intelligence (AI) models for outputting results used for media services and receiving, from the UE, an SDP response for requesting at least one AI model, from the list, for outputting at least one result using first media data, as a response to the SDP offer. The at least one AI model is based on a type of the first media data and a media service in which the at least one result is used.
The at least one AI model for outputting the at least one result is based on a UE capability for an AI model and network resources for the at least one AI model.
The SDP offer comprises information for grouping at least one AI data stream in which the at least one AI model is transmitted and at least one media data stream in which the first media data is transmitted, and the at least one AI data stream and the at least one media data stream are synchronized in time.
The method further comprises processing the first media data into intermediate data based on an AI model stored in the MRF entity and mapped with the type of the first media data and the media service, and transmitting, to the UE, the intermediate data in at least one media data stream.
The at least one result includes at least one of object recognition, increasing resolution, or language translation, and the first media data includes at least one of audio data or video data.
A media resource function (MRF) entity in a wireless communication system is provided. The MRF entity comprises a transceiver and a controller coupled with the transceiver and configured to transmit, to a user equipment (UE), a session description protocol (SDP) offer including a list of artificial intelligence (AI) models for outputting results used for media services, and receive, from the UE, an SDP response for requesting at least one AI model, from the list, for outputting at least one result using first media data, as a response to the SDP offer. The at least one AI model is based on a type of the first media data and a media service in which the at least one result is used.
The at least one AI model for outputting the at least one result is based on a UE capability for an AI model and network resources for the at least one AI model.
The SDP offer includes information for grouping at least one AI data stream in which the at least one AI model is transmitted and at least one media data stream in which the first media data is transmitted, and the at least one AI data stream and the at least one media data stream are synchronized in time.
The controller is further configured to process the first media data into intermediate data based on an AI model stored in the MRF entity and mapped with the type of the first media data and the media service, and to transmit, to the UE, the intermediate data in at least one media data stream.
The at least one result includes at least one of object recognition, increasing resolution, or language translation, and the first media data comprises at least one of audio data or video data.
Referring to the accompanying figure, the base station may include a transceiver 1510, a memory 1520, and a processor 1530.
The transceiver 1510 collectively refers to a base station receiver and a base station transmitter, and may transmit/receive a signal to/from a UE or a network entity. The signal transmitted to or received from the terminal or a network entity may include control information and data. The transceiver 1510 may include an RF transmitter for up-converting and amplifying the frequency of a transmitted signal, and an RF receiver for low-noise amplifying and down-converting the frequency of a received signal. However, this is only an example of the transceiver 1510, and the components of the transceiver 1510 are not limited to the RF transmitter and the RF receiver.
The transceiver 1510 may receive and output, to the processor 1530, a signal through a wireless channel, and transmit a signal output from the processor 1530 through the wireless channel.
The memory 1520 may store a program and data required for operations of the base station. The memory 1520 may store control information or data included in a signal obtained by the base station. The memory 1520 may be a storage medium, such as read-only memory (ROM), random access memory (RAM), a hard disk, a compact disc (CD)-ROM, and a digital versatile disc (DVD), or a combination of storage media.
The processor 1530 may control a series of processes such that the base station operates as described above. For example, the transceiver 1510 may receive a data signal including a control signal transmitted by the terminal, and the processor 1530 may determine a result of receiving the control signal and the data signal transmitted by the terminal.
Referring to the accompanying figure, the network entity 1600 may include a transceiver 1610, a memory 1620, and a processor 1630.
For example, the network entity 1600 may be the MRF entity described above.
The transceiver 1610 collectively refers to a network entity receiver and a network entity transmitter, and may transmit/receive a signal to/from a base station or a UE. The signal transmitted to or received from the base station or the UE may include control information and data. In this regard, the transceiver 1610 may include an RF transmitter for up-converting and amplifying the frequency of a transmitted signal, and an RF receiver for low-noise amplifying and down-converting the frequency of a received signal. However, this is only an example of the transceiver 1610, and the components of the transceiver 1610 are not limited to the RF transmitter and the RF receiver.
The transceiver 1610 may receive and output, to the processor 1630, a signal through a wireless channel, and transmit a signal output from the processor 1630 through the wireless channel.
The memory 1620 may store a program and data required for operations of the network entity. Also, the memory 1620 may store control information or data included in a signal obtained by the network entity. The memory 1620 may be a storage medium, such as ROM, RAM, a hard disk, a CD-ROM, and a DVD, or a combination of storage media.
The processor 1630 may control a series of processes such that the network entity operates as described above. For example, the transceiver 1610 may receive a data signal including a control signal, and the processor 1630 may determine a result of receiving the data signal.
Referring to the accompanying figure, the UE 1700 may include a transceiver 1710, a memory 1720, and a processor 1730.
The UE 1700 may correspond to the UE 1401 described above.
The transceiver 1710 collectively refers to a UE receiver and a UE transmitter, and may transmit/receive a signal to/from a base station or a network entity, where the signal may include control information and data. The transceiver 1710 may include an RF transmitter for up-converting and amplifying the frequency of a transmitted signal, and an RF receiver for low-noise amplifying and down-converting the frequency of a received signal. However, this is only an example of the transceiver 1710, and the components of the transceiver 1710 are not limited to the RF transmitter and the RF receiver.
The transceiver 1710 may receive and output, to the processor 1730, a signal through a wireless channel, and transmit a signal output from the processor 1730 through the wireless channel.
The memory 1720 may store a program and data required for operations of the UE. Also, the memory 1720 may store control information or data included in a signal obtained by the UE. The memory 1720 may be a storage medium, such as ROM, RAM, a hard disk, a CD-ROM, and a DVD, or a combination of storage media.
The processor 1730 may control a series of processes such that the UE operates as described above. For example, the transceiver 1710 may receive a data signal including a control signal transmitted by the base station or the network entity, and the processor 1730 may determine a result of receiving the control signal and the data signal transmitted by the base station or the network entity.
Various embodiments of the disclosure have been described above. The above description of the disclosure is merely for the sake of illustration, and embodiments of the disclosure are not limited to the embodiments set forth herein. Those skilled in the art will appreciate that the disclosure may be easily modified and changed into other specific forms without departing from the technical idea or essential features of the disclosure. Therefore, the scope of the disclosure should be determined not by the above detailed description but by the appended claims, and all modifications and changes derived from the meaning and scope of the claims and equivalents thereof shall be construed as falling within the scope of the disclosure.