The present embodiments generally relate to distribution of AI/ML (Artificial Intelligence/Machine Learning) models.
AI/ML techniques can be used in various domains, such as image enhancement, audio noise reduction, automatic translation, and navigation. This new intelligence can be accomplished by quickly and precisely processing and interpreting the tremendous amount of data generated by sensors embedded in the devices, e.g., camera, microphone, and thermometer. These sensors aim to reflect what happens in the close vicinity of the device. Thus, changes in the environment will impact the final application and the user experience.
According to an embodiment, a method performed by a wireless transmit/receive unit (WTRU) is presented, comprising: receiving information indicating a plurality of network communication paths that are available for downloading an AI/ML model, wherein said information further includes AI/ML model information; determining a plurality of AI/ML model chunks for said AI/ML model based on said received information; determining one network communication path, from said plurality of network communication paths, to download a respective model chunk of said plurality of AI/ML model chunks of said AI/ML model, based on said received information; establishing communication with said one network communication path to download said respective model chunk of said AI/ML model; building at least a subset of said AI/ML model based on said respective model chunk of said AI/ML model; and performing inference on said at least a subset of said AI/ML model.
According to another embodiment, a method performed by a server is presented, comprising: receiving model subscription information from a wireless transmit/receive unit (WTRU); selecting an AI/ML model for an event based on said model subscription information, said AI/ML model including a plurality of model chunks; generating information indicating a plurality of network communication paths that are available for downloading respective model chunks of said plurality of model chunks of said AI/ML model, based on said model subscription information; and transmitting said generated information to said WTRU.
Further embodiments include systems configured to perform the methods described herein. Such systems may include a processor and a non-transitory computer storage medium storing instructions that are operative, when executed on the processor, to perform the methods described herein.
As shown in
The communications systems 100 may also include a base station 114a and/or a base station 114b. Each of the base stations 114a, 114b may be any type of device configured to wirelessly interface with at least one of the WTRUs 102a, 102b, 102c, 102d to facilitate access to one or more communication networks, such as the CN 106, the Internet 110, and/or the other networks 112. By way of example, the base stations 114a, 114b may be a base transceiver station (BTS), a Node-B, an eNode B, a Home Node B, a Home eNode B, a gNB, a NR NodeB, a site controller, an access point (AP), a wireless router, and the like. While the base stations 114a, 114b are each depicted as a single element, it will be appreciated that the base stations 114a, 114b may include any number of interconnected base stations and/or network elements.
The base station 114a may be part of the RAN 104, which may also include other base stations and/or network elements (not shown), such as a base station controller (BSC), a radio network controller (RNC), relay nodes, etc. The base station 114a and/or the base station 114b may be configured to transmit and/or receive wireless signals on one or more carrier frequencies, which may be referred to as a cell (not shown). These frequencies may be in licensed spectrum, unlicensed spectrum, or a combination of licensed and unlicensed spectrum. A cell may provide coverage for a wireless service to a specific geographical area that may be relatively fixed or that may change over time. The cell may further be divided into cell sectors. For example, the cell associated with the base station 114a may be divided into three sectors. Thus, in one embodiment, the base station 114a may include three transceivers, i.e., one for each sector of the cell. In an embodiment, the base station 114a may employ multiple-input multiple output (MIMO) technology and may utilize multiple transceivers for each sector of the cell. For example, beamforming may be used to transmit and/or receive signals in desired spatial directions.
The base stations 114a, 114b may communicate with one or more of the WTRUs 102a, 102b, 102c, 102d over an air interface 116, which may be any suitable wireless communication link (e.g., radio frequency (RF), microwave, centimeter wave, micrometer wave, infrared (IR), ultraviolet (UV), visible light, etc.). The air interface 116 may be established using any suitable radio access technology (RAT).
More specifically, as noted above, the communications system 100 may be a multiple access system and may employ one or more channel access schemes, such as CDMA, TDMA, FDMA, OFDMA, SC-FDMA, and the like. For example, the base station 114a in the RAN 104 and the WTRUs 102a, 102b, 102c may implement a radio technology such as Universal Mobile Telecommunications System (UMTS) Terrestrial Radio Access (UTRA), which may establish the air interface 116 using wideband CDMA (WCDMA). WCDMA may include communication protocols such as High-Speed Packet Access (HSPA) and/or Evolved HSPA (HSPA+). HSPA may include High-Speed Downlink (DL) Packet Access (HSDPA) and/or High-Speed UL Packet Access (HSUPA).
In an embodiment, the base station 114a and the WTRUs 102a, 102b, 102c may implement a radio technology such as Evolved UMTS Terrestrial Radio Access (E-UTRA), which may establish the air interface 116 using Long Term Evolution (LTE) and/or LTE-Advanced (LTE-A) and/or LTE-Advanced Pro (LTE-A Pro).
In an embodiment, the base station 114a and the WTRUs 102a, 102b, 102c may implement a radio technology such as NR Radio Access, which may establish the air interface 116 using New Radio (NR).
In an embodiment, the base station 114a and the WTRUs 102a, 102b, 102c may implement multiple radio access technologies. For example, the base station 114a and the WTRUs 102a, 102b, 102c may implement LTE radio access and NR radio access together, for instance using dual connectivity (DC) principles. Thus, the air interface utilized by WTRUs 102a, 102b, 102c may be characterized by multiple types of radio access technologies and/or transmissions sent to/from multiple types of base stations (e.g., an eNB and a gNB).
In other embodiments, the base station 114a and the WTRUs 102a, 102b, 102c may implement radio technologies such as IEEE 802.11 (i.e., Wireless Fidelity (WiFi)), IEEE 802.16 (i.e., Worldwide Interoperability for Microwave Access (WiMAX)), CDMA2000, CDMA2000 1×, CDMA2000 EV-DO, Interim Standard 2000 (IS-2000), Interim Standard 95 (IS-95), Interim Standard 856 (IS-856), Global System for Mobile communications (GSM), Enhanced Data rates for GSM Evolution (EDGE), GSM EDGE (GERAN), and the like.
The base station 114b in
The RAN 104 may be in communication with the CN 106, which may be any type of network configured to provide voice, data, applications, and/or voice over internet protocol (VoIP) services to one or more of the WTRUs 102a, 102b, 102c, 102d. The data may have varying quality of service (QoS) requirements, such as differing throughput requirements, latency requirements, error tolerance requirements, reliability requirements, data throughput requirements, mobility requirements, and the like. The CN 106 may provide call control, billing services, mobile location-based services, pre-paid calling, Internet connectivity, video distribution, etc., and/or perform high-level security functions, such as user authentication. Although not shown in
The CN 106 may also serve as a gateway for the WTRUs 102a, 102b, 102c, 102d to access the PSTN 108, the Internet 110, and/or the other networks 112. The PSTN 108 may include circuit-switched telephone networks that provide plain old telephone service (POTS). The Internet 110 may include a global system of interconnected computer networks and devices that use common communication protocols, such as the transmission control protocol (TCP), user datagram protocol (UDP) and/or the internet protocol (IP) in the TCP/IP internet protocol suite. The networks 112 may include wired and/or wireless communications networks owned and/or operated by other service providers. For example, the networks 112 may include another CN connected to one or more RANs, which may employ the same RAT as the RAN 104 or a different RAT.
Some or all of the WTRUs 102a, 102b, 102c, 102d in the communications system 100 may include multi-mode capabilities (e.g., the WTRUs 102a, 102b, 102c, 102d may include multiple transceivers for communicating with different wireless networks over different wireless links). For example, the WTRU 102c shown in
The processor 118 may be a general purpose processor, a special purpose processor, a conventional processor, a digital signal processor (DSP), a plurality of microprocessors, one or more microprocessors in association with a DSP core, a controller, a microcontroller, Application Specific Integrated Circuits (ASICs), Field Programmable Gate Arrays (FPGAs) circuits, any other type of integrated circuit (IC), a state machine, and the like. The processor 118 may perform signal coding, data processing, power control, input/output processing, and/or any other functionality that enables the WTRU 102 to operate in a wireless environment. The processor 118 may be coupled to the transceiver 120, which may be coupled to the transmit/receive element 122. While
The transmit/receive element 122 may be configured to transmit signals to, or receive signals from, a base station (e.g., the base station 114a) over the air interface 116. For example, in one embodiment, the transmit/receive element 122 may be an antenna configured to transmit and/or receive RF signals. In an embodiment, the transmit/receive element 122 may be an emitter/detector configured to transmit and/or receive IR, UV, or visible light signals, for example. In yet another embodiment, the transmit/receive element 122 may be configured to transmit and/or receive both RF and light signals. It will be appreciated that the transmit/receive element 122 may be configured to transmit and/or receive any combination of wireless signals.
Although the transmit/receive element 122 is depicted in
The transceiver 120 may be configured to modulate the signals that are to be transmitted by the transmit/receive element 122 and to demodulate the signals that are received by the transmit/receive element 122. As noted above, the WTRU 102 may have multi-mode capabilities. Thus, the transceiver 120 may include multiple transceivers for enabling the WTRU 102 to communicate via multiple RATs, such as NR and IEEE 802.11, for example.
The processor 118 of the WTRU 102 may be coupled to, and may receive user input data from, the speaker/microphone 124, the keypad 126, and/or the display/touchpad 128 (e.g., a liquid crystal display (LCD) display unit or organic light-emitting diode (OLED) display unit). The processor 118 may also output user data to the speaker/microphone 124, the keypad 126, and/or the display/touchpad 128. In addition, the processor 118 may access information from, and store data in, any type of suitable memory, such as the non-removable memory 130 and/or the removable memory 132. The non-removable memory 130 may include random-access memory (RAM), read-only memory (ROM), a hard disk, or any other type of memory storage device. The removable memory 132 may include a subscriber identity module (SIM) card, a memory stick, a secure digital (SD) memory card, and the like. In other embodiments, the processor 118 may access information from, and store data in, memory that is not physically located on the WTRU 102, such as on a server or a home computer (not shown).
The processor 118 may receive power from the power source 134, and may be configured to distribute and/or control the power to the other components in the WTRU 102. The power source 134 may be any suitable device for powering the WTRU 102. For example, the power source 134 may include one or more dry cell batteries (e.g., nickel-cadmium (NiCd), nickel-zinc (NiZn), nickel metal hydride (NiMH), lithium-ion (Li-ion), etc.), solar cells, fuel cells, and the like.
The processor 118 may also be coupled to the GPS chipset 136, which may be configured to provide location information (e.g., longitude and latitude) regarding the current location of the WTRU 102. In addition to, or in lieu of, the information from the GPS chipset 136, the WTRU 102 may receive location information over the air interface 116 from a base station (e.g., base stations 114a, 114b) and/or determine its location based on the timing of the signals being received from two or more nearby base stations. It will be appreciated that the WTRU 102 may acquire location information by way of any suitable location-determination method while remaining consistent with an embodiment.
The processor 118 may further be coupled to other peripherals 138, which may include one or more software and/or hardware modules that provide additional features, functionality and/or wired or wireless connectivity. For example, the peripherals 138 may include an accelerometer, an e-compass, a satellite transceiver, a digital camera (for photographs and/or video), a universal serial bus (USB) port, a vibration device, a television transceiver, a hands free headset, a Bluetooth® module, a frequency modulated (FM) radio unit, a digital music player, a media player, a video game player module, an Internet browser, a Virtual Reality and/or Augmented Reality (VR/AR) device, an activity tracker, and the like. The peripherals 138 may include one or more sensors; the sensors may be one or more of a gyroscope, an accelerometer, a hall effect sensor, a magnetometer, an orientation sensor, a proximity sensor, a temperature sensor, a time sensor, a geolocation sensor, an altimeter, a light sensor, a touch sensor, a barometer, a gesture sensor, a biometric sensor, and/or a humidity sensor.
The WTRU 102 may include a full duplex radio for which transmission and reception of some or all of the signals (e.g., associated with particular subframes for both the UL (e.g., for transmission) and the downlink (e.g., for reception)) may be concurrent and/or simultaneous. The full duplex radio may include an interference management unit to reduce and/or substantially eliminate self-interference via either hardware (e.g., a choke) or signal processing via a processor (e.g., a separate processor (not shown) or via processor 118). In an embodiment, the WTRU 102 may include a half-duplex radio for which transmission and reception of some or all of the signals (e.g., associated with particular subframes for either the UL (e.g., for transmission) or the downlink (e.g., for reception)) are not concurrent.
Although the WTRU is described in
In view of
The emulation devices may be designed to implement one or more tests of other devices in a lab environment and/or in an operator network environment. For example, the one or more emulation devices may perform the one or more, or all, functions while being fully or partially implemented and/or deployed as part of a wired and/or wireless communication network in order to test other devices within the communication network. The one or more emulation devices may perform the one or more, or all, functions while being temporarily implemented/deployed as part of a wired and/or wireless communication network. The emulation device may be directly coupled to another device for purposes of testing and/or may perform testing using over-the-air wireless communications.
The one or more emulation devices may perform the one or more, including all, functions while not being implemented/deployed as part of a wired and/or wireless communication network. For example, the emulation devices may be utilized in a testing scenario in a testing laboratory and/or a non-deployed (e.g., testing) wired and/or wireless communication network in order to implement testing of one or more components. The one or more emulation devices may be test equipment. Direct RF coupling and/or wireless communications via RF circuitry (e.g., which may include one or more antennas) may be used by the emulation devices to transmit and/or receive data.
A deep neural network (DNN) is a complex function mapping some input domain to another domain, the output. A DNN is composed of several neural layers (typically in series) and each neural layer is composed of several perceptrons. A perceptron is a function that consists of a linear combination of the inputs and a non-linear function, for example a sigmoid function. Therefore, a DNN is composed of two elements: the architecture, which includes the number of perceptrons and the connections between them, and the parameters, which are the weights of the linear functions and, if required, the parameters of the non-linear functions. Trained by a machine learning (ML) algorithm on huge data sets, these models have recently proven useful for a wide range of applications and have led to significant improvements to the state-of-the-art in artificial intelligence (AI), computer vision, audio processing and several other domains. Due to their prevalence today, they are often referred to as an “AI/ML model”.
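As an illustration only, the following is a minimal sketch of such a model, assuming a small fully connected architecture with sigmoid non-linearities; the layer sizes and values are illustrative and are not part of the embodiments:

import numpy as np

def sigmoid(x):
    # Non-linear function applied by each perceptron
    return 1.0 / (1.0 + np.exp(-x))

def layer(x, weights, bias):
    # A neural layer: linear combination of the inputs, then the non-linearity
    return sigmoid(weights @ x + bias)

# Architecture: 4 inputs -> 8 hidden perceptrons -> 2 outputs
rng = np.random.default_rng(0)
parameters = [
    (rng.standard_normal((8, 4)), np.zeros(8)),  # weights/bias of layer 1
    (rng.standard_normal((2, 8)), np.zeros(2)),  # weights/bias of layer 2
]

def forward(x, parameters):
    for weights, bias in parameters:
        x = layer(x, weights, bias)
    return x

print(forward(np.ones(4), parameters))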
An AI/ML model basically consists of a set of weights that are learned during training for a specific architecture or configuration, where the architecture or configuration specifies what layers the model contains and how they are connected.
It is expected that many of the applications installed on mobile devices will rely on AI/ML algorithms. In order to maintain or even improve the user experience, the AI/ML model that the application relies on may first need to be downloaded or updated. This download or update process shall be fast and efficient and shall not hamper the application.
Mobile Network Operators and Content Providers face many challenges, for example:
Mobile applications in the 4G/5G/6G context will rely more and more on AI/ML models that will be downloaded onto end user devices over wireless networks. A recent study on AI/ML Model Transfer in 5GS in 3GPP SA1 (TR 22.874) proposes use cases and defines high level requirements on this subject. As an example, during a social event, e.g., a concert or a car race, thousands of people use an application on their smartphones that requires AI/ML models. These AI/ML models are specific to the event area, specific to the social event itself, and may evolve during the social event with some environment changes like light or sounds. The AI/ML models have to be downloaded first.
When the event starts, thousands of people launch the application, which triggers thousands of downloads of the same AI/ML models. As each AI/ML model is several hundred megabytes, this generates tremendous downlink traffic, which is very difficult for the base station and the AI/ML model server to handle in a very short period of time. As the available spectrum resources are limited, this will lead to limited throughput and possible congestion issues and delays, and, as a consequence, a poor QoE for the end user.
Table 1 illustrates potential AI/ML models available during the social event.
Using the first row of Table 1 as an example, Table 1 can be read as: when people, and therefore UEs, enter the concert hall (referenced as the first “scene”, scene_0) of the event, the UEs can collect some specific AI/ML models dedicated to the acoustics and other characteristics of this concert hall (E00_A is a model specialized for the audio of the concert hall, E00_V is a model specialized for the video of the concert hall, and E00_P is a model specialized for taking pictures in this concert hall).
If we consider an average model size of 64 KB, and if 5000 UEs ask for the same model (e.g., E00_A), and want to use it within a second, it requires a downlink bitrate of 64 KB*8*5000=2.56 Gb/s.
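For transparency, the arithmetic above can be checked as follows (a simple sketch using the same figures, with 64 KB taken as 64,000 bytes):

model_size_bytes = 64_000   # 64 KB average model size
num_ues = 5000              # UEs requesting the same model (e.g., E00_A)
window_s = 1.0              # desired delivery window in seconds

required_bitrate_bps = model_size_bytes * 8 * num_ues / window_s
print(required_bitrate_bps / 1e9, "Gb/s")  # 2.56 Gb/s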
This application provides methods for a UE to collect a manifest (e.g., a description text file) of different network communication paths to download, and further update, a particular AI/ML model adapted to the target UE capabilities. The network paths are centralized and controlled by an application server for delivering the best overall network efficiency with respect to the different kinds of UEs in place. The AI/ML model is assumed to be split into chunks. A chunk is a subset of an AI/ML model and consists of a set of weight values. As a result, the AI/ML Application server/manifest server will publish a set of different manifests describing different network communication paths and the related expected network limitations for downloading particular model chunks.
A network communication path may be a local link, for example, direct communication from a neighbor leading UE providing streaming capabilities to its vicinity, or a distant link, for example, communication between a UE and a remote AI/ML server located in the cloud, at the edge, or in the Core Network. For either a local link or a distant link, one-to-one unicast communication between a UE and an AI/ML server, or one-to-many communication, can be used. The one-to-many communication may use broadcast, groupcast, or multicast (e.g., multicast carousels streaming AI/ML model chunks according to various chunk frequencies). The communication pattern may be request/response or subscribe/notify. For regular updates, a callback or notification is required. In the rest of this application, we use the terminology “multicast” to refer to the one-to-many communication, which may be multicast, groupcast, or broadcast.
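As a non-limiting sketch, a manifest entry describing such network communication paths for one model chunk might look as follows; the field names, addresses, and URL are hypothetical and are only meant to illustrate the kind of information carried:

manifest = {
    "manifest_version": 1,
    "event_id": "E00",
    "models": [
        {
            "model_name": "E00_A",     # audio model for scene_0
            "model_size": 64_000,      # bytes
            "chunks": [
                {
                    "chunk_id": "m0",
                    "memory_loading_offset": 0,
                    "paths": [
                        {"type": "unicast",
                         "url": "https://aiml-server.example/E00_A/m0"},
                        {"type": "multicast_carousel",
                         "address": "239.0.0.1:5000",
                         "expected_frequency_availability": 0.5},
                        {"type": "d2d_multicast",
                         "group_id": "0x2A",
                         "radio_params": {"resource_pool": 1}},
                    ],
                },
                # ... further chunks (m1, m2, ...) follow the same structure
            ],
        }
    ],
}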
To include direct network paths as a source for neighbor download, a so-called leading UE capable of redistributing model chunks to its neighbors will register, and further update, its capabilities to the AI/ML server. In addition, it may adjust the chunk frequency distribution of a multicast carousel considering the number of requests, i.e., more requests would increase the chunk frequency.
Advantageously, this application discloses methods to maintain the quality of service and avoid network congestion caused by the simultaneous downloads of super-sized AI/ML models by a huge number of users. The proposed methods may save BSS (Base Station System) spectrum resources, reduce load on the AI/ML model server, reduce AI/ML model download time, and improve User Experience.
The “Event Area” is the area where the social event takes place, for example, a concert hall. This area is considered static during the event. It is assumed that some specific AI/ML models have been developed and trained for this specific event area. During the event the environment may change, for example, at some moments the light environment or sound environment may change.
The “Direct Server Area” is the area where devices are close enough to make a Device to Device (D2D) exchange. This area is largely dynamic and evolves over time due to the movement of people during the event.
UE1, UE2, UE3, UE4, and UE5 are under BSS coverage. UE7 is temporarily out of BSS coverage. UE7 has been under coverage at a certain moment and has been able to collect the manifest.
In
UEs (UE1, UE2, UE3) run an AI/ML event application requesting an event manifest for downloading all or part of an event model to run the AI/ML algorithm.
In the following, the procedure is described in detail.
The Manifest application server may also compute the UE1 neighbor map to find local AI/ML model servers of neighbor UEs that can be considered as an alternative direct source for UE1. The Manifest application server may add new D2D network resources to the manifest. For this step, the Manifest application server finds no local resources available for UE1, so the manifest file only comprises distant resources.
The Manifest application server may communicate with the core network to obtain additional UE information regarding the pending request.
At the end, the Manifest application server builds a specific manifest for UE1. The manifest for UE1 only exposes unicast and multicast resources, since UE1 acts as a UE source and has privileged access to legacy servers. The chunk information includes one unicast and one multicast server providing different chunks according to different expected download times.
The manifest for UE1 contains the list of models and model chunks with their characteristics, as illustrated in
In particular, UE2 may indicate the D2D network capability and specifically which kind of neighbor composition the UE expects. By default, the manifest application server lists all the UE sources and which chunks each different UE source serves. If UE2 indicates “neighbor list only”, the manifest application server will compute all the relevant neighbor lists received from the different UE sources and then add only the UE sources in the vicinity of, or close enough to, UE2.
Similar to step 6 but for UE2, the Manifest application server looks for all the network resources available from distant communication or from local D2D communication with respect to the UE2 neighbor map, for the different communication modes, i.e., unicast, multicast, and multicast carousel. Unlike in step 6, the Manifest application server finds UEs in the vicinity.
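A minimal sketch of such a vicinity filter, assuming a simple position-based neighbor map (the distance threshold and data layout are assumptions, not requirements of the embodiments):

from math import dist

def neighbor_sources(ue_position, ue_sources, d2d_range_m=50.0):
    # Keep only the UE sources close enough to the requesting UE for D2D
    return [src for src in ue_sources
            if dist(ue_position, src["position"]) <= d2d_range_m]

ue2_position = (10.0, 2.0)
ue_sources = [
    {"ue_id": "UE1", "position": (12.0, 3.0), "chunks": ["m0", "m1"]},
    {"ue_id": "UE5", "position": (400.0, 90.0), "chunks": ["m0"]},
]
print(neighbor_sources(ue2_position, ue_sources))  # only UE1 is within range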
The manifest for UE2 contains the list of models and model chunks with their characteristics, as illustrated in
When any UE receives the AI/ML Manifest, it computes and applies all or part of the manifest. The UE may connect to one or several network communication paths to download the needed model chunks. It then loads the model into memory and runs the application.
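A hedged sketch of this UE-side behavior is given below; download_chunk, build_model, and infer are placeholders for the UE's transport and AI framework functions, and the path preference order is an assumption:

def choose_path(chunk, prefer=("d2d_multicast", "multicast_carousel", "unicast")):
    # Pick the first advertised path type in an assumed order of preference
    by_type = {p["type"]: p for p in chunk["paths"]}
    for path_type in prefer:
        if path_type in by_type:
            return by_type[path_type]
    raise RuntimeError("no usable path for chunk " + chunk["chunk_id"])

def apply_manifest(model_entry, download_chunk, build_model, infer, sample):
    partial_model = None
    for chunk in model_entry["chunks"]:
        path = choose_path(chunk)
        data = download_chunk(path)                              # connect and download
        partial_model = build_model(partial_model, chunk, data)  # build at least a subset
        yield infer(partial_model, sample)                       # inference on the subset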
In the above, a D2D communication may comprise several one-to-one and one-to-many communications, i.e., multicast, also called groupcast or broadcast in 3GPP, through a PC5 interface. For a one-to-many communication, the UEs are usually configured with a set of parameters (multicast address, group IDs, and radio related parameters). In general, for 3GPP D2D over PC5, there is no explicit signaling protocol. Therefore, the source UE (e.g., UE1) finds the appropriate radio resource and sends the IP data to the IP multicast address with the Group ID as the Destination Layer 2 ID (e.g., using the ProSe Layer 2). A receiving UE (e.g., UE2), configured with the group context, listens to the appropriate radio resource and filters frames according to a Group ID (e.g., ProSe Layer 2) contained in the Destination Layer 2 ID.
Therefore, in another embodiment that includes a one-to-many D2D communication according to 3GPP, a manifest indicating a D2D multicast path may carry additional configuration parameters such as the group ID and the radio related parameters. An example of a multicast D2D chunk can be as follows.
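The following is a hypothetical illustration of such a chunk entry; the group ID, multicast address, and radio parameter values are placeholders and are not taken from 3GPP specifications:

d2d_multicast_chunk = {
    "chunk_id": "m1",
    "model_name": "E00_A",
    "memory_loading_offset": 8_192,   # bytes
    "path": {
        "type": "d2d_multicast",
        "ip_multicast_address": "224.0.1.10",
        "destination_layer2_group_id": "0x2A",   # e.g., ProSe Layer 2 Group ID
        "radio_params": {"resource_pool": 1},
    },
}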
In the above, the procedures for the UEs to download the manifest and the AI/ML model are described. It should be noted that the steps can be performed in the order described above; however, the order of the steps can also be adjusted. For example, step “4. Neighbor discovery” can be performed at any time, and ideally when UEs enter the event area. In the case where step “4. Neighbor discovery” is performed later, the manifest file may not contain the D2D communication path and is therefore limited to the unicast and multicast communication paths. A UE may update its neighbor map to the manifest server at any time, for example, after receiving a new neighbor discovery that triggers a new neighbor map, when the UE detects a new event condition, or when the UE requests a new model with a new profile from the server.
When the AI/ML model server receives an event conditions update, it may compute a fresh AI/ML model and a fresh manifest. According to the subscription, the UE will be notified via a manifest update that a new model is available.
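A minimal sketch of this subscribe/notify pattern, with assumed class and method names:

class ManifestServer:
    def __init__(self):
        self.subscribers = []          # notification callbacks registered by UEs

    def subscribe(self, notify):
        self.subscribers.append(notify)

    def on_event_conditions_update(self, new_manifest):
        # A fresh model and manifest have been computed; notify subscribed UEs
        for notify in self.subscribers:
            notify(new_manifest)

server = ManifestServer()
server.subscribe(lambda m: print("UE notified, manifest version", m["manifest_version"]))
server.on_event_conditions_update({"manifest_version": 2, "event_id": "E00"})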
The procedure as described in
The Manifest AP server as described in
Manifest composition
In the following, we describe the information contained in the manifest.
Model usage type
The network chunk information shall help the application fit the chunk into the machine learning framework. The chunk information part comprises a set or a list of individual chunk information entries, depending on the model chunk number defined above in the general information.
Note: these chunks may not all arrive in the correct order, i.e., “model entry”, “regular intermediate chunk”, and finally “final chunk”. To address this issue, the specific parameter “memory_loading_offset” is set as described below.
The description of the Manifest composition model may be inspired by the Media Presentation Description (MPD) as described in ISO/IEC 23009-1 “Dynamic adaptive streaming over HTTP (DASH)—Part 1: Media presentation description and segment formats” or in TS 26.247 “Transparent end-to-end Packet-switched Streaming Service (PSS); Progressive Download and Dynamic Adaptive Streaming over HTTP (3GP-DASH)”, both for unicast representation. For multicast representation, including the multicast carousel, it may be inspired by ETSI TS 103 769 “Digital Video Broadcasting (DVB); Adaptive media streaming over IP multicast”.
In particular, the Manifest composition above, composed of Manifest information, Manifest version, Event identifier, and Models list, defines and describes parameters that may follow the same type of format and presentation as in the references above, but adapted to AI/ML. For example, the terminology “Media” may be refined as “AI/ML model data”.
In the following, examples of network chunk information are provided.
Multicast Carousel download and memory loading offset parameter
The multicast carousel is managed by an entity, for example, the Mobile Network Operator (MNO), the Content Provider, or a third party. This entity fills the carousel with all the chunks, and the way the chunks are organized within the carousel may vary a lot and may depend on various strategies. Thus, the entity can decide to present chunk m0 more frequently than the other chunks. This is a logical decision since chunk m0 is the most important chunk: it is a sub-model that can be used as-is and will deliver inference results until the next chunks arrive and the full model is reconstructed.
When the UE subscribes to the multicast carousel, the chunk being transmitted at that time is not necessarily chunk m0. Thus, in
For the sake of efficiency, the UE shall not drop chunk m2. It can feed chunk m2 to the AI/ML framework in advance using the following parameters:
In an example, the API code can be expressed as: load_model(Model name, Model size, Memory loading offset, chunk_data).
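One possible interpretation of this call is sketched below, assuming the framework keeps a preallocated weight buffer for the model so that an out-of-order chunk (e.g., m2) can be stored at its memory loading offset before earlier chunks arrive; this is an assumption, not the API of a specific machine learning framework:

import numpy as np

class PartialModel:
    def __init__(self, model_name, model_size):
        self.model_name = model_name
        self.buffer = np.zeros(model_size, dtype=np.uint8)  # preallocated weights
        self.loaded = set()

    def load_model(self, chunk_id, memory_loading_offset, chunk_data):
        if chunk_id in self.loaded:
            return  # duplicate carousel chunk (e.g., m0 seen again): drop it
        end = memory_loading_offset + len(chunk_data)
        self.buffer[memory_loading_offset:end] = np.frombuffer(chunk_data, np.uint8)
        self.loaded.add(chunk_id)

model = PartialModel("E00_A", model_size=64_000)
model.load_model("m2", memory_loading_offset=16_000, chunk_data=b"\x01" * 8_000)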
In
After m0, the next chunks are m4, m5, m6, again m0 (which will be dropped since it is already loaded), and m1. When m1 is loaded, the complete model is operational and can deliver inference results with a higher score. Other chunk ordering strategies can be envisioned to make the download more efficient.
The original AI/ML model is split into chunks according to various methods, as illustrated in an example in
In the example of
Carousel creation and expected frequency availability parameter
In one embodiment, the model is first split into many chunks; the generated chunks are then placed in the carousel according to different strategies, as illustrated in
The parameter “Expected_frequency_availability” in the manifest specifies the number of times a chunk appears in a carousel loop. For example, a carousel may be defined by a duration, e.g., 3 minutes. In this time frame, chunk m0 can appear many times in proportion to the presence of the other chunks. If the parameter value of “Expected_frequency_availability” is 0.50 (50%), it means that half of the time the carousel transmits chunk m0, and the rest of the time is shared among chunks m1 to m7. The way the chunks are distributed in the carousel is not described here and depends on the mobile network operator, the content provider, or the third party.
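One possible (non-normative) strategy for filling a carousel loop according to this parameter is sketched below; the slot count and the round-robin sharing of the remaining chunks are assumptions:

from itertools import cycle

def build_carousel(chunks, priority_chunk="m0",
                   expected_frequency_availability=0.5, slots=16):
    # priority_chunk fills the given fraction of slots; the others share the rest
    others = cycle(c for c in chunks if c != priority_chunk)
    carousel, budget = [], 0.0
    for _ in range(slots):
        budget += expected_frequency_availability
        if budget >= 1.0:
            carousel.append(priority_chunk)
            budget -= 1.0
        else:
            carousel.append(next(others))
    return carousel

print(build_carousel(["m%d" % i for i in range(8)]))
# m0 occupies about half of the 16 slots; m1..m7 fill the remaining slots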
An AI framework provides APIs to perform some related AI tasks. At least two AI framework APIs can be used: load_model( ) and update_model( ).
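A hedged usage sketch of these two APIs is given below; the signatures follow the example call above and are assumptions rather than those of a particular framework:

def run_event_application(framework, model_entry, chunk_stream):
    # Feed carousel chunks to the AI framework as they arrive (in any order)
    model = None
    for chunk in chunk_stream:
        if model is None:
            # First chunk received, not necessarily m0 (see the carousel discussion)
            model = framework.load_model(model_entry["model_name"],
                                         model_entry["model_size"],
                                         chunk["memory_loading_offset"],
                                         chunk["data"])
        else:
            # Subsequent chunks refine the already loaded (partial) model
            framework.update_model(model,
                                   chunk["memory_loading_offset"],
                                   chunk["data"])
    return model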
In the following, a manifest example is provided. In this example, the concert has started, the artistic director has planned a scene change, and a new manifest is transmitted to the UEs that have subscribed to this option. For this new scene, three new models are proposed for audio, video and picture. They are all available in the incremental format. Some models are available in unicast and/or in multicast and/or in D2D.
Manifest information: manifest dedicated to UEs present in the event area for the scene E00
Various numeric values are used in the present application. The specific values are provided for example purposes and the aspects described are not limited to these specific values.
Although features and elements are described above in particular combinations, one of ordinary skill in the art will appreciate that each feature or element can be used alone or in any combination with the other features and elements. In addition, the methods described herein may be implemented in a computer program, software, or firmware incorporated in a computer readable medium for execution by a computer or processor. Examples of non-transitory computer-readable storage media include, but are not limited to, a read only memory (ROM), random access memory (RAM), a register, cache memory, semiconductor memory devices, magnetic media such as internal hard disks and removable disks, magneto-optical media, and optical media such as CD-ROM disks, and digital versatile disks (DVDs). A processor in association with software may be used to implement a video encoder, a video decoder or both, a radio frequency transceiver for use in a UE, WTRU, terminal, base station, RNC, or any host computer.
Moreover, in the embodiments described above, processing platforms, computing systems, controllers, and other devices containing processors are noted. These devices may contain at least one Central Processing Unit (“CPU”) and memory. In accordance with the practices of persons skilled in the art of computer programming, reference to acts and symbolic representations of operations or instructions may be performed by the various CPUs and memories. Such acts and operations or instructions may be referred to as being “executed,” “computer executed” or “CPU executed”.
One of ordinary skill in the art will appreciate that the acts and symbolically represented operations or instructions include the manipulation of electrical signals by the CPU. An electrical system represents data bits that can cause a resulting transformation or reduction of the electrical signals and the maintenance of data bits at memory locations in a memory system to thereby reconfigure or otherwise alter the CPU's operation, as well as other processing of signals. The memory locations where data bits are maintained are physical locations that have particular electrical, magnetic, or optical properties corresponding to or representative of the data bits. It should be understood that the exemplary embodiments are not limited to the above-mentioned platforms or CPUs and that other platforms and CPUs may support the provided methods.
The data bits may also be maintained on a computer readable medium including magnetic disks, optical disks, and any other volatile (e.g., Random Access Memory (“RAM”)) or non-volatile (e.g., Read-Only Memory (“ROM”)) mass storage system readable by the CPU. The computer readable medium may include cooperating or interconnected computer readable medium, which exist exclusively on the processing system or are distributed among multiple interconnected processing systems that may be local or remote to the processing system. It is understood that the representative embodiments are not limited to the above-mentioned memories and that other platforms and memories may support the described methods.
In an illustrative embodiment, any of the operations, processes, etc. described herein may be implemented as computer-readable instructions stored on a computer-readable medium. The computer-readable instructions may be executed by a processor of a mobile unit, a network element, and/or any other computing device.
The use of hardware or software is generally (but not always, in that in certain contexts the choice between hardware and software may become significant) a design choice representing cost vs. efficiency tradeoffs. There may be various vehicles by which processes and/or systems and/or other technologies described herein may be effected (e.g., hardware, software, and/or firmware), and the preferred vehicle may vary with the context in which the processes and/or systems and/or other technologies are deployed. For example, if an implementer determines that speed and accuracy are paramount, the implementer may opt for a mainly hardware and/or firmware vehicle. If flexibility is paramount, the implementer may opt for a mainly software implementation. Alternatively, the implementer may opt for some combination of hardware, software, and/or firmware.
The foregoing detailed description has set forth various embodiments of the devices and/or processes via the use of block diagrams, flowcharts, and/or examples. Insofar as such block diagrams, flowcharts, and/or examples contain one or more functions and/or operations, it will be understood by those within the art that each function and/or operation within such block diagrams, flowcharts, or examples may be implemented, individually and/or collectively, by a wide range of hardware, software, firmware, or virtually any combination thereof. Suitable processors include, by way of example, a GPU (Graphics Processing Unit), a general purpose processor, a special purpose processor, a conventional processor, a digital signal processor (DSP), a plurality of microprocessors, one or more microprocessors in association with a DSP core, a controller, a microcontroller, Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), Field Programmable Gate Arrays (FPGAs) circuits, any other type of integrated circuit (IC), and/or a state machine.
Although features and elements are provided above in particular combinations, one of ordinary skill in the art will appreciate that each feature or element can be used alone or in any combination with the other features and elements. The present disclosure is not to be limited in terms of the particular embodiments described in this application, which are intended as illustrations of various aspects. Many modifications and variations may be made without departing from its spirit and scope, as will be apparent to those skilled in the art. No element, act, or instruction used in the description of the present application should be construed as critical or essential to the invention unless explicitly provided as such. Functionally equivalent methods and apparatuses within the scope of the disclosure, in addition to those enumerated herein, will be apparent to those skilled in the art from the foregoing descriptions. Such modifications and variations are intended to fall within the scope of the appended claims. The present disclosure is to be limited only by the terms of the appended claims, along with the full scope of equivalents to which such claims are entitled. It is to be understood that this disclosure is not limited to particular methods or systems.
It is also to be understood that the terminology used herein is for the purpose of describing particular embodiments only, and is not intended to be limiting.
In certain representative embodiments, several portions of the subject matter described herein may be implemented via Application Specific Integrated Circuits (ASICs), Field Programmable Gate Arrays (FPGAs), digital signal processors (DSPs), and/or other integrated formats. However, those skilled in the art will recognize that some aspects of the embodiments disclosed herein, in whole or in part, may be equivalently implemented in integrated circuits, as one or more computer programs running on one or more computers (e.g., as one or more programs running on one or more computer systems), as one or more programs running on one or more processors (e.g., as one or more programs running on one or more microprocessors), as firmware, or as virtually any combination thereof, and that designing the circuitry and/or writing the code for the software and/or firmware would be well within the skill of one of skill in the art in light of this disclosure. In addition, those skilled in the art will appreciate that the mechanisms of the subject matter described herein may be distributed as a program product in a variety of forms, and that an illustrative embodiment of the subject matter described herein applies regardless of the particular type of signal bearing medium used to actually carry out the distribution. Examples of a signal bearing medium include, but are not limited to, the following: a recordable type medium such as a floppy disk, a hard disk drive, a CD, a DVD, a digital tape, a computer memory, etc., and a transmission type medium such as a digital and/or an analog communication medium (e.g., a fiber optic cable, a waveguide, a wired communications link, a wireless communication link, etc.).
The herein described subject matter sometimes illustrates different components contained within, or connected with, different other components. It is to be understood that such depicted architectures are merely examples, and that in fact many other architectures may be implemented which achieve the same functionality. In a conceptual sense, any arrangement of components to achieve the same functionality is effectively “associated” such that the desired functionality may be achieved. Hence, any two components herein combined to achieve a particular functionality may be seen as “associated with” each other such that the desired functionality is achieved, irrespective of architectures or intermediate components. Likewise, any two components so associated may also be viewed as being “operably connected”, or “operably coupled”, to each other to achieve the desired functionality, and any two components capable of being so associated may also be viewed as being “operably couplable” to each other to achieve the desired functionality. Specific examples of operably couplable include but are not limited to physically mateable and/or physically interacting components and/or wirelessly interactable and/or wirelessly interacting components and/or logically interacting and/or logically interactable components.
With respect to the use of substantially any plural and/or singular terms herein, those having skill in the art can translate from the plural to the singular and/or from the singular to the plural as is appropriate to the context and/or application. The various singular/plural permutations may be expressly set forth herein for sake of clarity.
It will be understood by those within the art that, in general, terms used herein, and especially in the appended claims (e.g., bodies of the appended claims) are generally intended as “open” terms (e.g., the term “including” should be interpreted as “including but not limited to,” the term “having” should be interpreted as “having at least,” the term “includes” should be interpreted as “includes but is not limited to,” etc.). It will be further understood by those within the art that if a specific number of an introduced claim recitation is intended, such an intent will be explicitly recited in the claim, and in the absence of such recitation no such intent is present. For example, where only one item is intended, the term “single” or similar language may be used. As an aid to understanding, the following appended claims and/or the descriptions herein may contain usage of the introductory phrases “at least one” and “one or more” to introduce claim recitations. However, the use of such phrases should not be construed to imply that the introduction of a claim recitation by the indefinite articles “a” or “an” limits any particular claim containing such introduced claim recitation to embodiments containing only one such recitation, even when the same claim includes the introductory phrases “one or more” or “at least one” and indefinite articles such as “a” or “an” (e.g., “a” and/or “an” should be interpreted to mean “at least one” or “one or more”). The same holds true for the use of definite articles used to introduce claim recitations. In addition, even if a specific number of an introduced claim recitation is explicitly recited, those skilled in the art will recognize that such recitation should be interpreted to mean at least the recited number (e.g., the bare recitation of “two recitations,” without other modifiers, means at least two recitations, or two or more recitations). Furthermore, in those instances where a convention analogous to “at least one of A, B, and C, etc.” is used, in general such a construction is intended in the sense one having skill in the art would understand the convention (e.g., “a system having at least one of A, B, and C” would include but not be limited to systems that have A alone, B alone, C alone, A and B together, A and C together, B and C together, and/or A, B, and C together, etc.). In those instances where a convention analogous to “at least one of A, B, or C, etc.” is used, in general such a construction is intended in the sense one having skill in the art would understand the convention (e.g., “a system having at least one of A, B, or C” would include but not be limited to systems that have A alone, B alone, C alone, A and B together, A and C together, B and C together, and/or A, B, and C together, etc.). It will be further understood by those within the art that virtually any disjunctive word and/or phrase presenting two or more alternative terms, whether in the description, claims, or drawings, should be understood to contemplate the possibilities of including one of the terms, either of the terms, or both terms.
For example, the phrase “A or B” will be understood to include the possibilities of “A” or “B” or “A and B.” Further, the terms “any of” followed by a listing of a plurality of items and/or a plurality of categories of items, as used herein, are intended to include “any of,” “any combination of,” “any multiple of,” and/or “any combination of multiples of” the items and/or the categories of items, individually or in conjunction with other items and/or other categories of items. Moreover, as used herein, the term “set” or “group” is intended to include any number of items, including zero. Additionally, as used herein, the term “number” is intended to include any number, including zero.
In addition, where features or aspects of the disclosure are described in terms of Markush groups, those skilled in the art will recognize that the disclosure is also thereby described in terms of any individual member or subgroup of members of the Markush group.
As will be understood by one skilled in the art, for any and all purposes, such as in terms of providing a written description, all ranges disclosed herein also encompass any and all possible subranges and combinations of subranges thereof. Any listed range can be easily recognized as sufficiently describing and enabling the same range being broken down into at least equal halves, thirds, quarters, fifths, tenths, etc. As a non-limiting example, each range discussed herein may be readily broken down into a lower third, middle third and upper third, etc. As will also be understood by one skilled in the art all language such as “up to,” “at least,” “greater than,” “less than,” and the like includes the number recited and refers to ranges which can be subsequently broken down into subranges as discussed above. Finally, as will be understood by one skilled in the art, a range includes each individual member. Thus, for example, a group having 1-3 cells refers to groups having 1, 2, or 3 cells. Similarly, a group having 1-5 cells refers to groups having 1, 2, 3, 4, or 5 cells, and so forth.
Moreover, the claims should not be read as limited to the provided order or elements unless stated to that effect. In addition, use of the terms “means for” in any claim is intended to invoke 35 U.S.C. § 112, ¶6 or means-plus-function claim format, and any claim without the terms “means for” is not so intended.
It is contemplated that the systems may be implemented in software on microprocessors/general purpose computers (not shown). In certain embodiments, one or more of the functions of the various components may be implemented in software that controls a general-purpose computer.
In addition, although the invention is illustrated and described herein with reference to specific embodiments, the invention is not intended to be limited to the details shown. Rather, various modifications may be made in the details within the scope and range of equivalents of the claims and without departing from the invention.