This application relates to the field of communication technologies, and in particular, to a task processing method and an apparatus.
With development of communication systems, artificial intelligence (AI) will be increasingly applied in future communication systems.
Currently, to enable a related device to execute a specific task, the device usually receives program code and executes the task by running that code.
This application provides a task processing method and an apparatus, to effectively improve AI task management efficiency.
According to a first aspect, an embodiment of this application provides a task processing method. The method includes the following steps.
A first node determines a first task, where the first node includes a first core network device and/or a first access network device, and the first task is a task related to artificial intelligence (AI). The first node sends a first message, where the first message includes identification information of a second task and/or configuration information of the second task, the second task is a subtask of the first task, the second task is executed by a second node, and the second node is one of determined nodes configured to execute the first task.
In this embodiment of this application, the first core network device and/or the first access network device manages and controls an AI task. In this way, related management/control of the AI task is standardized, thereby effectively improving AI task management efficiency.
According to the method provided in this embodiment of this application, a related processing procedure of the AI task is standardized. Compared with a method for managing and controlling a task by using program code, a case in which program code needs to be updated in real time is effectively avoided, thereby effectively improving AI task management efficiency.
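As an illustrative sketch only (not part of the embodiments, and with hypothetical field names), the first message described above can be modeled as a small data structure whose "and/or" requirement means that at least one of the second task's identification information or configuration information must be present:

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical field names for illustration; the embodiment only requires that
# the first message carry identification information and/or configuration
# information of the second task, and optionally the first task's identifier.
@dataclass
class FirstMessage:
    second_task_id: Optional[str] = None       # identification information of the second task
    second_task_config: Optional[dict] = None  # configuration information of the second task
    first_task_id: Optional[str] = None        # lets the second node link the subtask to its parent

    def is_valid(self) -> bool:
        # "and/or" semantics: at least one of the two second-task fields is present.
        return self.second_task_id is not None or self.second_task_config is not None

msg = FirstMessage(second_task_id="task-1.2", first_task_id="task-1")
print(msg.is_valid())  # True
```

Carrying `first_task_id` alongside the subtask fields is what allows the second node to learn that the second task was obtained by decomposing the first task.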
In at least one implementation, the first message further includes identification information of the first task.
In this embodiment of this application, the identification information of the first task is included, so that the second node can learn that the second task is obtained by decomposing the first task.
In at least one implementation, a type of the first task includes any one or more of the following: an inference task, a training task, a learning task, a computing task, or a sensing task.
In at least one implementation, that a first node determines a first task includes the following step: the first node determines any one or more of life cycle information of the first task, resource information of the first task, allocation information of the first task, and decomposition information of the first task.
In at least one implementation, the life cycle information of the first task includes any one or more of the following: addition of the first task, modification of the first task, querying of the first task, reporting of the first task, migration of an execution node of the first task, and migration of a management node of the first task; the resource information of the first task includes any one or more of the following: computing power of the first task, an algorithm of the first task, data of the first task, or a connection of the first task; the allocation information of the first task includes any one or more of the following: allocation of an execution node of the first task, allocation of an execution type of the execution node of the first task, and resource allocation of the execution node of the first task; and/or the decomposition information of the first task includes any one or more of the following: a quantity of tasks obtained by decomposing the first task and computing power decomposition of the first task.
In this embodiment of this application, the first core network device and/or the first access network device can more flexibly allocate a task by decomposing and/or allocating the first task, so that the second node can execute a task that matches a capability of the second node, thereby further improving AI task management efficiency, and further improving task execution efficiency of the second node.
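For instance, one hypothetical policy for the computing power decomposition described above is to split a task's workload in proportion to each candidate node's reported computing power, so that each second node receives a share matching its capability. The abstract work units and relative capacities below are assumptions of this sketch, not definitions from the embodiments:

```python
def decompose_by_compute(total_units, node_capacities):
    """Split a task's workload across nodes in proportion to capacity.

    total_units: abstract amount of work in the first task (assumed unit).
    node_capacities: dict mapping node id -> relative computing power.
    Returns a dict mapping node id -> units assigned; shares sum to total_units.
    """
    total_capacity = sum(node_capacities.values())
    shares = {}
    assigned = 0
    nodes = list(node_capacities)
    for node in nodes[:-1]:
        units = total_units * node_capacities[node] // total_capacity
        shares[node] = units
        assigned += units
    # Give the remainder to the last node so no work units are lost to rounding.
    shares[nodes[-1]] = total_units - assigned
    return shares

print(decompose_by_compute(100, {"ue": 1, "gnb": 3}))  # {'ue': 25, 'gnb': 75}
```

A weaker node (here a terminal device with capacity 1) thus receives a smaller subtask than a stronger node (a base station with capacity 3), which is the capability-matching behavior the paragraph above describes.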
In at least one implementation, that a first node determines a first task includes the following step: the first node determines at least two first tasks, where the at least two first tasks are executed by one or more second nodes.
That is, when receiving two or more tasks, the first node may alternatively not decompose the tasks, but directly allocate the tasks to corresponding second nodes.
In at least one implementation, the second task is the first task.
In at least one implementation, that a first node determines a first task includes the following step: the first node determines at least two second tasks of the first task, where the at least two second tasks correspond to at least two second nodes, and execution types of the at least two second nodes are coordinated execution, or execution types of the at least two second nodes are single-point execution.
In at least one implementation, the method further includes any one or more of the following: the first node receives capability information from the second node, where the capability information indicates a capability of the second node to execute a task; the first node sends a resource configuration request to the second node, where the resource configuration request is used to request to perform resource configuration on the second task; the first node receives a second message, where the second message indicates an execution result of the second task; or the first node receives at least two second messages, and combines execution results of the first task based on the at least two second messages, where the second message indicates an execution result of the second task.
In at least one implementation, the second node includes one or more of the following: a second core network device, a second access network device, or a terminal device; the second access network device includes one or more of the following: a base station, a central unit (CU), a distributed unit (DU), a central unit-control plane (CU-CP), and a central unit-user plane (CU-UP); and the second core network device includes one or more of the following: an access and mobility management function (AMF), a session management function (SMF), a user plane function (UPF), a network exposure function (NEF), a network repository function (NRF), a policy control function (PCF), unified data management (UDM), an application function (AF), and an authentication server function (AUSF).
In at least one implementation, a protocol layer for interaction between the first core network device and the terminal device is located above a non-access stratum (NAS), or is located in the NAS; a protocol layer for interaction between the first access network device and the terminal device is located above a radio resource control (RRC) layer, or is located in the RRC layer; or a protocol layer for interaction between the first access network device and the terminal device is located above a service data adaptation protocol (SDAP) layer, or is located in the SDAP layer.
For example, when the protocol layer for interaction between the first core network device and the terminal device is located above the NAS, logic of various protocol layers is clear. When the protocol layer for interaction between the first core network device and the terminal device is located in the NAS, it can be ensured that a quantity of layers of the protocol stack remains unchanged, and therefore signaling overheads remain unchanged. Other protocol layers are not described in detail herein.
In at least one implementation, the terminal device interacts with the DU by using the central unit CU, and a protocol layer for interaction between the terminal device and the CU is located above the RRC layer, or a protocol layer for interaction between the terminal device and the CU is located in the RRC layer; or a protocol layer for interaction between the terminal device and the DU is located in a physical (PHY) layer and a media access control (MAC) layer.
In at least one implementation, a protocol layer for interaction between the CU and the DU is located above an F1 interface application protocol (F1AP) layer, is located in the F1AP layer, or is located above representational state transfer (RST); a protocol layer for interaction between the CU-CP and the CU-UP is located above an E1 interface application protocol (E1AP) layer, is located in the E1AP layer, or is located above RST; a protocol layer for interaction between the first core network device and the first access network device is located above an Ng interface application protocol (NgAP) layer, is located in the NgAP layer, is located above a general packet radio service (GPRS) tunneling protocol-user plane (GTP-U) layer, is located in the GTP-U layer, or is located above RST; a protocol layer for interaction between the first access network device and the second access network device is located above an Xn interface application protocol (XnAP) layer, is located in the XnAP layer, or is located above RST; or a protocol layer for interaction between the first core network device and the second core network device is located above RST.
It may be understood that the RST shown in this embodiment of this application may be understood as RST that is based on a service-based architecture (SBA).
In at least one implementation, that the first node sends a first message includes the following step:
the first node sends the first message to the second node, where the second node is one of nodes that are determined by the first node and that are configured to execute the first task; and the first node includes the first core network device, and the second node includes the second core network device or the terminal device, or the first node includes the first access network device, and the second node includes the second access network device and the terminal device. Alternatively, the first node sends the first message to the second node by using a forwarding node, where the second node is one of nodes that are determined by the forwarding node and that are configured to execute the first task, the first node includes the first core network device, the forwarding node includes the first access network device, and the second node includes the second access network device and/or the terminal device.
In this embodiment of this application, the forwarding node may be configured to implement a step performed by the first node, and may also be configured to implement a step performed by the second node. For example, the forwarding node may have both a task execution function and a task managing and controlling function. For another example, the forwarding node may be configured to forward a message, or may be configured to perform secondary decomposition and/or allocation on a task, or the like.
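A minimal sketch of the two forwarding-node behaviors just described, assuming a hypothetical message shape and a caller-supplied secondary-decomposition function (the embodiments prescribe neither):

```python
def relay_first_message(msg, executor_nodes, secondary_split=None):
    """Forward a first message, optionally decomposing the task again.

    msg: the first message (any shape; hypothetical for this sketch).
    executor_nodes: nodes the forwarding node determined for the first task.
    secondary_split: optional callable (msg, n) -> list of n sub-messages,
        modeling secondary decomposition/allocation at the forwarding node.
    Returns a mapping of executor node -> message to deliver to it.
    """
    if secondary_split is None:
        # Pure forwarding: every determined executor receives the message as-is.
        return {node: msg for node in executor_nodes}
    # Secondary decomposition: one tailored sub-message per executor.
    parts = secondary_split(msg, len(executor_nodes))
    return dict(zip(executor_nodes, parts))

# Example: split a hypothetical workload field evenly across two executors.
def halve(msg, n):
    return [{**msg, "units": msg["units"] // n} for _ in range(n)]

out = relay_first_message({"task": "t1", "units": 10}, ["gnb-1", "ue-1"], halve)
print(out["ue-1"]["units"])  # 5
```

Passing no `secondary_split` models the pure message-forwarding role; passing one models the forwarding node re-decomposing the task before delivery.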
According to a second aspect, an embodiment of this application provides a task processing method. The method includes the following steps.
A second node receives a first message from a first node, where the first message includes identification information of a second task and/or configuration information of the second task, the second task is a subtask of the first task, the first task is a task related to artificial intelligence (AI), and the first node includes a first core network device and/or a first access network device; and the second node executes the second task.
In at least one implementation, that a second node receives a first message includes the following step:
the second node receives the first message sent by the first node, where the second node is one of nodes that are determined by the first node and that are configured to execute the first task; and the first node includes the first core network device, and the second node includes a second core network device or a terminal device, or the first node includes the first access network device, and the second node includes a second access network device and a terminal device. Alternatively, the second node receives, by using a forwarding node, the first message sent by the first node, where the second node is one of nodes that are determined by the forwarding node and that are configured to execute the first task, the first node includes the first core network device, the forwarding node includes the first access network device, and the second node includes a second access network device and/or a terminal device.
In at least one implementation, the method further includes any one or more of the following: the second node sends capability information, where the capability information indicates a capability of the second node to execute a task; the second node receives a resource configuration request, where the resource configuration request is used to request to perform resource configuration on the second task; or the second node sends a second message, where the second message indicates an execution result of the second task.
In at least one implementation, the second node includes one or more of the following: the second core network device, the second access network device, or the terminal device; the second access network device includes one or more of the following: a base station, a central unit CU, a distributed unit DU, a central unit-control plane CU-CP, and a central unit-user plane CU-UP; and the second core network device includes one or more of the following: an access and mobility management function AMF, a session management function SMF, a user plane function UPF, a network exposure function NEF, a network repository function NRF, a policy control function PCF, unified data management UDM, an application function AF, and an authentication server function AUSF.
In at least one implementation, a protocol layer for interaction between the first core network device and the terminal device is located above a non-access stratum NAS, or is located in the NAS; a protocol layer for interaction between the first access network device and the terminal device is located above a radio resource control RRC layer, or is located in the RRC layer; or a protocol layer for interaction between the first access network device and the terminal device is located above a service data adaptation protocol SDAP layer, or is located in the SDAP layer.
In at least one implementation, the terminal device interacts with the DU by using the central unit CU, and a protocol layer for interaction between the terminal device and the CU is located above the RRC layer, or a protocol layer for interaction between the terminal device and the CU is located in the RRC layer; or a protocol layer for interaction between the terminal device and the DU is located in a physical PHY layer and a media access control MAC layer.
In at least one implementation, a protocol layer for interaction between the CU and the DU is located above an F1 interface application protocol F1AP layer, is located in the F1AP layer, or is located above representational state transfer RST; a protocol layer for interaction between the CU-CP and the CU-UP is located above an E1 interface application protocol E1AP layer, is located in the E1AP layer, or is located above RST; a protocol layer for interaction between the first core network device and the first access network device is located above an Ng interface application protocol NgAP layer, is located in the NgAP layer, is located above a general packet radio service GPRS tunneling protocol-user plane GTP-U layer, is located in the GTP-U layer, or is located above RST; a protocol layer for interaction between the first access network device and the second access network device is located above an Xn interface application protocol XnAP layer, is located in the XnAP layer, or is located above RST; or a protocol layer for interaction between the first core network device and the second core network device is located above RST.
It may be understood that, for related descriptions of the second aspect, refer to the method embodiment shown in the first aspect or the following.
According to a third aspect, an embodiment of this application provides a communication apparatus. The apparatus includes units configured to perform the method in any one of the first aspect or the possible implementations of the first aspect.
According to a fourth aspect, an embodiment of this application provides a communication apparatus. The apparatus includes units configured to perform the method in any one of the second aspect or the possible implementations of the second aspect.
According to a fifth aspect, an embodiment of this application provides a communication apparatus. The apparatus includes: a processing unit, configured to determine a first task, where the communication apparatus includes a first core network device and/or a first access network device, and the first task is a task related to artificial intelligence (AI); and a transceiver unit, configured to send a first message, where the first message includes identification information of a second task and/or configuration information of the second task, the second task is a subtask of the first task, the second task is executed by a second node, and the second node is one of nodes that are determined to execute the first task.
It may be understood that, in this embodiment of this application, that the transceiver unit is configured to send a first message may be further understood as the following: The processing unit is configured to control the transceiver unit to send the first message, or the processing unit is configured to send the first message by using the transceiver unit. Descriptions of the processing unit and the transceiver unit are also applicable below.
In at least one implementation, the first message further includes identification information of the first task.
In at least one implementation, a type of the first task includes any one or more of the following: an inference task, a training task, a learning task, a computing task, or a sensing task.
In at least one implementation, the processing unit is specifically configured to determine any one or more of life cycle information of the first task, resource information of the first task, allocation information of the first task, or decomposition information of the first task.
In at least one implementation, the life cycle information of the first task includes any one or more of the following: addition of the first task, modification of the first task, querying of the first task, reporting of the first task, migration of an execution node of the first task, and migration of a management node of the first task; the resource information of the first task includes any one or more of the following: computing power of the first task, an algorithm of the first task, data of the first task, or a connection of the first task; the allocation information of the first task includes any one or more of the following: allocation of an execution node of the first task, allocation of an execution type of the execution node of the first task, and resource allocation of the execution node of the first task; and/or the decomposition information of the first task includes any one or more of the following: a quantity of tasks obtained by decomposing the first task and computing power decomposition of the first task.
In at least one implementation, the processing unit is specifically configured to determine at least two first tasks, where the at least two first tasks are executed by one or more second nodes.
In at least one implementation, the second task is the first task.
In at least one implementation, the processing unit is specifically configured to determine at least two second tasks of the first task, where the at least two second tasks correspond to at least two second nodes, and execution types of the at least two second nodes are coordinated execution, or execution types of the at least two second nodes are single-point execution.
In at least one implementation, the transceiver unit is further configured to receive capability information from the second node, where the capability information indicates a capability of the second node to execute a task; the transceiver unit is further configured to send a resource configuration request to the second node, where the resource configuration request is used to request to perform resource configuration on the second task; the transceiver unit is further configured to receive a second message, where the second message indicates an execution result of the second task; or the transceiver unit is further configured to receive at least two second messages, and combine execution results of the first task based on the at least two second messages, where the second message indicates an execution result of the second task.
In this embodiment of this application, that the transceiver unit is configured to receive capability information may also be understood as the following: The processing unit controls the transceiver unit to receive the capability information, or the processing unit receives the capability information by using the transceiver unit. Descriptions of the processing unit and the transceiver unit are also applicable below.
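For illustration, combining the execution results carried in at least two second messages could look like the following sketch. The message shape (a dict with a `result` field) and the reduction (summation) are assumptions of this sketch, since the appropriate merge depends on the task type:

```python
def combine_results(second_messages, reducer=sum):
    """Merge per-subtask execution results into a first-task result.

    second_messages: iterable of dicts, each carrying a 'result' field
        (a hypothetical shape chosen for this sketch).
    reducer: how partial results are merged; summation suits a partitioned
        computing task, while a training task might average model updates.
    """
    return reducer(m["result"] for m in second_messages)

print(combine_results([{"result": 2}, {"result": 3}]))  # 5
```

The `reducer` parameter is this sketch's way of noting that "combining execution results of the first task" is task-dependent; the embodiments do not fix a particular combination rule.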
In at least one implementation, the second node includes one or more of the following: a second core network device, a second access network device, or a terminal device; the second access network device includes one or more of the following: a base station, a central unit (CU), a distributed unit (DU), a central unit-control plane (CU-CP), and a central unit-user plane (CU-UP); and the second core network device includes one or more of the following: an access and mobility management function (AMF), a session management function (SMF), a user plane function (UPF), a network exposure function (NEF), a network repository function (NRF), a policy control function (PCF), unified data management (UDM), an application function (AF), and an authentication server function (AUSF).
In at least one implementation, a protocol layer for interaction between the first core network device and the terminal device is located above a non-access stratum (NAS), or is located in the NAS; a protocol layer for interaction between the first access network device and the terminal device is located above a radio resource control (RRC) layer, or is located in the RRC layer; or a protocol layer for interaction between the first access network device and the terminal device is located above a service data adaptation protocol (SDAP) layer, or is located in the SDAP layer.
In at least one implementation, the terminal device interacts with the DU by using the central unit CU, and a protocol layer for interaction between the terminal device and the CU is located above the RRC layer, or a protocol layer for interaction between the terminal device and the CU is located in the RRC layer; or a protocol layer for interaction between the terminal device and the DU is located in a physical (PHY) layer and a media access control (MAC) layer.
In at least one implementation, a protocol layer for interaction between the CU and the DU is located above an F1 interface application protocol (F1AP) layer, is located in the F1AP layer, or is located above representational state transfer (RST); a protocol layer for interaction between the CU-CP and the CU-UP is located above an E1 interface application protocol (E1AP) layer, is located in the E1AP layer, or is located above RST; a protocol layer for interaction between the first core network device and the first access network device is located above an Ng interface application protocol (NgAP) layer, is located in the NgAP layer, is located above a general packet radio service (GPRS) tunneling protocol-user plane (GTP-U) layer, is located in the GTP-U layer, or is located above RST; a protocol layer for interaction between the first access network device and the second access network device is located above an Xn interface application protocol (XnAP) layer, is located in the XnAP layer, or is located above RST; or a protocol layer for interaction between the first core network device and the second core network device is located above RST.
In at least one implementation, the transceiver unit is specifically configured to send the first message to the second node, where the second node is one of nodes that are determined by the communication apparatus and that are configured to execute the first task; and the communication apparatus includes the first core network device, and the second node includes the second core network device or the terminal device, or the communication apparatus includes the first access network device, and the second node includes the second access network device and the terminal device. Alternatively, the transceiver unit is further configured to send the first message to the second node by using a forwarding node, where the second node is one of nodes that are determined by the forwarding node and that are configured to execute the first task, the communication apparatus includes the first core network device, the forwarding node includes the first access network device, and the second node includes the second access network device and/or the terminal device.
For a specific description of the communication apparatus in the fifth aspect, refer to the apparatus embodiment shown in the first aspect or the following.
According to a sixth aspect, an embodiment of this application provides a communication apparatus. The apparatus includes: a transceiver unit, configured to receive a first message from a first node, where the first message includes identification information of a second task and/or configuration information of the second task, the second task is a subtask of a first task, the first task is a task related to artificial intelligence (AI), and the first node includes a first core network device and/or a first access network device; and a processing unit, configured to execute the second task.
In at least one implementation, the transceiver unit is specifically configured to receive the first message sent by the first node, where the communication apparatus is one of nodes that are determined by the first node and that are configured to execute the first task; and the first node includes the first core network device, and the communication apparatus includes a second core network device or a terminal device, or the first node includes the first access network device, and the communication apparatus includes a second access network device and a terminal device. Alternatively, the transceiver unit is specifically configured to receive, by using a forwarding node, the first message sent by the first node, where the communication apparatus is one of nodes that are determined by the forwarding node and that are configured to execute the first task, the first node includes the first core network device, the forwarding node includes the first access network device, and the communication apparatus includes a second access network device and/or a terminal device.
In at least one implementation, the transceiver unit is specifically configured to send capability information, where the capability information indicates a capability of the communication apparatus to execute a task; the transceiver unit is specifically configured to receive a resource configuration request, where the resource configuration request is used to request to perform resource configuration on the second task; or the transceiver unit is specifically configured to send a second message, where the second message indicates an execution result of the second task.
In at least one implementation, the communication apparatus includes one or more of the following: the second core network device, the second access network device, or the terminal device; the second access network device includes one or more of the following: a base station, a central unit CU, a distributed unit DU, a central unit-control plane CU-CP, and a central unit-user plane CU-UP; and the second core network device includes one or more of the following: an access and mobility management function AMF, a session management function SMF, a user plane function UPF, a network exposure function NEF, a network repository function NRF, a policy control function PCF, unified data management UDM, an application function AF, and an authentication server function AUSF.
In at least one implementation, a protocol layer for interaction between the first core network device and the terminal device is located above a non-access stratum NAS, or is located in the NAS; a protocol layer for interaction between the first access network device and the terminal device is located above a radio resource control RRC layer, or is located in the RRC layer; or a protocol layer for interaction between the first access network device and the terminal device is located above a service data adaptation protocol SDAP layer, or is located in the SDAP layer.
In at least one implementation, the terminal device interacts with the DU by using the central unit CU, and a protocol layer for interaction between the terminal device and the CU is located above the RRC layer, or a protocol layer for interaction between the terminal device and the CU is located in the RRC layer; or a protocol layer for interaction between the terminal device and the DU is located in a physical PHY layer and a media access control MAC layer.
In at least one implementation, a protocol layer for interaction between the CU and the DU is located above an F1 interface application protocol F1AP layer, is located in the F1AP layer, or is located above representational state transfer RST; a protocol layer for interaction between the CU-CP and the CU-UP is located above an E1 interface application protocol E1AP layer, is located in the E1AP layer, or is located above RST; a protocol layer for interaction between the first core network device and the first access network device is located above an Ng interface application protocol NgAP layer, is located in the NgAP layer, is located above a general packet radio service GPRS tunneling protocol-user plane GTP-U layer, is located in the GTP-U layer, or is located above RST; a protocol layer for interaction between the first access network device and the second access network device is located above an Xn interface application protocol XnAP layer, is located in the XnAP layer, or is located above RST; or a protocol layer for interaction between the first core network device and the second core network device is located above RST.
It may be understood that for specific descriptions of the sixth aspect, refer to the apparatus embodiment shown in the first aspect, the second aspect, or the following.
According to a seventh aspect, an embodiment of this application provides a communication apparatus. The communication apparatus includes a processor, configured to perform the method in any one of the first aspect or the possible implementations of the first aspect. Alternatively, the processor is configured to execute a program stored in a memory. When the program is executed, the method in any one of the first aspect or the possible implementations of the first aspect is performed.
In at least one implementation, the memory is located outside the communication apparatus.
In at least one implementation, the memory is located inside the communication apparatus.
In this embodiment of this application, the processor and the memory may alternatively be integrated into one device. In other words, the processor and the memory may alternatively be integrated together.
In at least one implementation, the communication apparatus further includes a transceiver. The transceiver is configured to receive a signal or send a signal. For example, the transceiver may be configured to send a first message and the like.
According to an eighth aspect, an embodiment of this application provides a communication apparatus. The communication apparatus includes a processor, configured to perform the method in any one of the second aspect or the possible implementations of the second aspect. Alternatively, the processor is configured to execute a program stored in a memory. When the program is executed, the method in any one of the second aspect or the possible implementations of the second aspect is performed.
In at least one implementation, the memory is located outside the communication apparatus.
In at least one implementation, the memory is located inside the communication apparatus.
In this embodiment of this application, the processor and the memory may alternatively be integrated into one device. In other words, the processor and the memory may alternatively be integrated together.
In at least one implementation, the communication apparatus further includes a transceiver. The transceiver is configured to receive a signal or send a signal. For example, the transceiver may be configured to receive a first message and the like.
According to a ninth aspect, an embodiment of this application provides a communication apparatus. The communication apparatus includes a logic circuit and an interface, the logic circuit is coupled to the interface, the logic circuit is configured to determine a first task, and the interface is configured to output a first message.
For example, the interface is configured to input a second message and the like.
It may be understood that for descriptions of the first task, the first message, and the like, refer to the descriptions of the first aspect or the second aspect, or refer to the following embodiments. Details are not described herein.
According to a tenth aspect, an embodiment of this application provides a communication apparatus. The communication apparatus includes a logic circuit and an interface, the logic circuit is coupled to the interface, the interface is configured to input a first message, and the logic circuit is configured to execute a second task.
It may be understood that for descriptions of the first message, the first task, the second task, and the like, refer to the descriptions of the first aspect or the second aspect, or refer to the following embodiments. Details are not described herein.
According to an eleventh aspect, an embodiment of this application provides a computer-readable storage medium. The computer-readable storage medium is configured to store a computer program. When the computer program runs on a computer, the method according to any one of the first aspect or the possible implementations of the first aspect is performed.
According to a twelfth aspect, an embodiment of this application provides a computer-readable storage medium. The computer-readable storage medium is configured to store a computer program. When the computer program runs on a computer, the method according to any one of the second aspect or the possible implementations of the second aspect is performed.
According to a thirteenth aspect, an embodiment of this application provides a computer program product. The computer program product includes a computer program or computer code. When the computer program or the computer code runs on a computer, the method according to any one of the first aspect or the possible implementations of the first aspect is performed.
According to a fourteenth aspect, an embodiment of this application provides a computer program product. The computer program product includes a computer program or computer code. When the computer program or the computer code runs on a computer, the method according to any one of the second aspect or the possible implementations of the second aspect is performed.
According to a fifteenth aspect, an embodiment of this application provides a computer program. When the computer program runs on a computer, the method according to any one of the first aspect or the possible implementations of the first aspect is performed.
According to a sixteenth aspect, an embodiment of this application provides a computer program. When the computer program runs on a computer, the method according to any one of the second aspect or the possible implementations of the second aspect is performed.
According to a seventeenth aspect, an embodiment of this application provides a wireless communication system. The wireless communication system includes a first node and a second node. The first node is configured to perform the method shown in any one of the first aspect or the possible implementations of the first aspect. The second node is configured to perform the method shown in any one of the second aspect or the possible implementations of the second aspect.
To make the objectives, technical solutions, and advantages of this application clearer, this application is further described with reference to the accompanying drawings.
Terms “first”, “second”, and the like in the specification, claims, and accompanying drawings of this application are merely used to distinguish between different objects, and are not used to describe a specific order. In addition, terms such as “include” and “have” and any other variants thereof are intended to cover a non-exclusive inclusion. For example, processes, methods, systems, products, devices, or the like that include a series of steps or units are not limited to listed steps or units, but instead, optionally further include steps or units that are not listed, or optionally further include other steps or units inherent to these processes, methods, products, or devices.
“Embodiments” mentioned herein mean that specific features, structures, or characteristics described in combination with the embodiments may be included in at least one embodiment of this application. This phrase, where it appears in various locations in the specification, does not necessarily refer to the same embodiment, nor does it describe an independent or alternative embodiment that is mutually exclusive with other embodiments. A person skilled in the art understands, both explicitly and implicitly, that the embodiments described herein may be combined with other embodiments.
In this application, “at least one (item)” means one or more, “a plurality of” means two or more, “at least two (items)” means two or more, and “and/or” is used to describe an association relationship between associated objects, which indicates that three relationships may exist. For example, “A and/or B” may indicate that only A exists, only B exists, or both A and B exist, where A and B may be singular or plural. The character “/” generally indicates an “or” relationship between the associated objects. “At least one of the following” or a similar expression thereof means any combination of these items. For example, at least one of a, b, or c may represent: a, b, c, “a and b”, “a and c”, “b and c”, or “a and b and c”.
A method provided in this application may be applied to various communication systems, for example, an internet of things (IoT) system, a narrowband internet of things (NB-IoT) system, a long term evolution (LTE) system, a 5th-generation (5G) communication system, and a new communication system (for example, 6G) emerging in future communication development. In addition, the method provided in this application may be further applied to a wireless local area network (WLAN) system, for example, a wireless fidelity (Wi-Fi) system.
The technical solutions provided in this application may be further applied to machine type communication (MTC), a long-term evolution machine type communication (LTE-M) technology, a device-to-device (D2D) network, a machine-to-machine (M2M) network, an internet of things (IoT) network, an industrial internet, or another network. The IoT network may include, for example, an internet of vehicles. Communication manners in an internet of vehicles system are collectively referred to as vehicle-to-everything (V2X, where X may represent any thing). For example, the V2X may include vehicle-to-vehicle (V2V) communication, vehicle-to-infrastructure (V2I) communication, vehicle-to-pedestrian (V2P) communication, or vehicle-to-network (V2N) communication. For example, in
For example, the following describes a network architecture in this application by using a 5G communication system as an example.
For example, the data network (DN) may also be referred to as a packet data network (PDN), and is usually a network located outside an operator network, for example, a third-party network. For example, the operator network may access a plurality of data networks, and a plurality of services may be deployed in the data network, to provide a data service and/or a voice service, and the like for a terminal device. A specific representation form of the third-party network may be specifically determined based on an actual application scenario. This is not limited in this application.
For example, the terminal device part includes a terminal device, and the terminal device may also be referred to as user equipment (UE), a terminal, a user terminal, or the like. The terminal device is a device that has a wireless transceiver function. For example, the terminal device may communicate with one or more core network (CN) devices (which may be referred to as core devices) through an access network device (which may be referred to as an access device) in a radio access network (RAN). In a possible implementation, the terminal device may be deployed on land (indoors or outdoors, in a handheld manner, a wearable manner, or an in-vehicle manner), or may be deployed on a water surface, for example, on a ship. In another possible implementation, the terminal device may be a handheld device, an in-vehicle device, a wearable device, a sensor, a terminal in the internet of things, a terminal in the internet of vehicles, an uncrewed aerial vehicle, a terminal device in any form in a 5th-generation (5G) network or a future network, or the like that has a wireless communication function. This is not limited in this application.
It may be understood that the terminal device shown in this application may not only include a vehicle (for example, an entire vehicle) in the internet of vehicles, but also include an in-vehicle device or an in-vehicle terminal (including a telematics box (T-box) or a host in an internet of vehicles system) in the internet of vehicles. A specific form of the terminal device applied to the internet of vehicles is not limited in this application. For ease of description, the following describes the method in this application by using an example in which the terminal device is UE.
For example, the operator network part may include an access and mobility management function (AMF), a session management function (SMF), a user plane function (UPF), a radio access network (RAN), and the like. In the foregoing operator network, the parts other than the radio access network may also be referred to as the core network (CN) part. The access network device may be deployed in the radio access network, to provide a wireless communication service for the terminal device. The access network device may also be referred to as an access device, a RAN device, or the like. For example, the core network may further include a network exposure function (NEF), a network repository function (NRF), a policy control function (PCF), unified data management (UDM), an application function (AF), an authentication server function (AUSF), and the like. It may be understood that other network elements in the core network are not described in detail in embodiments of this application. It may be understood that, with evolution of standard technologies, the AMF, the SMF, or the UPF shown in this application may evolve further. For example, the AMF may evolve such that an access function and a mobility management function are separately deployed, or the AMF, the SMF, and the UPF may have other names. Therefore, the AMF, the SMF, and the UPF shown below should not be understood as a limitation on embodiments of this application.
For example, the access network device may be a next-generation NodeB (gNB), a next-generation evolved NodeB (ng-eNB), an access network device in future 6G communication, or the like. The access network device may be any device that has a wireless transceiver function, and includes but is not limited to the foregoing base stations (including a base station deployed on a satellite). The access network device may alternatively be an apparatus that has a base station function in a sixth generation communication system. Optionally, the access network device may be an access node, a wireless relay node, a wireless backhaul node, or the like in a wireless local area network (WLAN) system. Optionally, the access network device may be a radio controller in a cloud radio access network (CRAN) scenario. Optionally, the access network device may be a wearable device, an in-vehicle device, or the like. Optionally, the access network device may alternatively be a small cell, a transmission and reception point (TRP) (which may also be referred to as a transmission point), or the like. It may be understood that the access network device may alternatively be a base station, a satellite, or the like in a future evolved public land mobile network (PLMN). The access network device may alternatively be a device that bears a base station function in a non-terrestrial communication system, D2D, V2X, or M2M. A specific type of the access network device is not limited in this application. In systems of different radio access technologies, names of devices that have an access network device function may be different.
In some deployments, as shown in
In some other deployments, the CU may alternatively be divided into a CU-CP and a CU-UP. The CU-CP is responsible for the control plane function, and mainly includes RRC and the PDCP corresponding to the control plane (which may also be referred to as PDCP-C). The PDCP-C is mainly responsible for data encryption and decryption, integrity protection, data transmission, and the like on the control plane. The CU-UP is responsible for the user plane function, and mainly includes SDAP and the PDCP corresponding to the user plane (which may also be referred to as PDCP-U). The different deployment manners of the base station shown in this application are merely examples. During specific implementation, the base station may alternatively be deployed in another manner. Therefore, a deployment manner of the base station shown below should not be understood as a limitation on embodiments of this application. With evolution of standard technologies, this application is also applicable when a base station has different deployment manners. The gNB shown in this embodiment of this application may alternatively be replaced with xNB, where x represents different representations of base stations in different standard networks. For example, the xNB is an eNB in a 4G network, and the xNB is a gNB in a 5G network.
It may be understood that, for ease of distinguishing between different deployment manners of the base station, the base station described below may be understood as a device, for example, a gNB or an eNB, that has a base station function and whose function is not split. Therefore, the base station, the CU, the DU, and the like shown below may be understood as different devices.
For example, in some deployments, when the gNB is divided into a CU and a DU, the CU represents the gNB in connecting to a core network through an NG interface, and the CU also represents the gNB in connecting to another gNB through an Xn interface. It may be understood that the foregoing types of base stations, interfaces, or protocol stacks are merely examples. During specific implementation, there may be another type of base station, another interface, another protocol stack, or the like. This is not limited in this application.
It should be noted that a protocol stack for interaction between two network elements shown in this application is shown based on a fifth generation communication system. With evolution of the communication system, the protocol stack for interaction between the network elements may change, and/or the network elements shown in this application may also change (for example, function splitting and/or combination are performed). Therefore, all protocol stacks or interfaces that have functions similar to those shown in
It may be understood that the method provided in this application is applicable to both a homogeneous network scenario and a heterogeneous network scenario. In addition, there is no limitation on a transmission point; for example, transmission may be coordinated multipoint transmission between macro base stations, between micro base stations, or between a macro base station and a micro base station. For another example, the method provided in this application is applicable to a frequency division duplexing (FDD) system and/or a time division duplexing (TDD) system. For another example, the method provided in this application is applicable to a low-frequency scenario (for example, lower than 6 GHz, also referred to as sub-6 GHz), a high-frequency scenario (for example, higher than 6 GHz), a terahertz scenario, an optical communication scenario, or the like.
A network data analytics function (NWDAF) in a 5G network may be used to support artificial intelligence (AI): it mainly collects and analyzes data at an application layer, and provides services and interface invocation for an external system. In addition, an operation and maintenance (OAM) network element in the 5G network has a similar management data analytics function (MDAF), and can support collection and analysis of management data and provide services and interface invocation for an external system. However, in the foregoing AI-related research projects, AI-related procedures and functions are not standardized. This application provides a task processing method and a related product, so that AI task management efficiency can be effectively improved by standardizing an AI task.
Implementation 1: The first node includes a first core network device.
For example, the first node may be deployed in the first core network device, the first node may be included in the first core network device, or the first node may include the first core network device. The relationship between the first core network device and the first node is not limited in this embodiment of this application.
It is assumed that both a network element (which may also be briefly referred to as a CN network element) in a core network and a network element (which may also be briefly referred to as a RAN network element) in an access network may be used as a second node. In this case, both the network element in the core network and the network element in the access network may be centrally managed and/or controlled by the first core network device. For example, the first core network device may be a core network device in the operator network part shown in
For example, the first core network device may be one of an AMF, an SMF, or a UPF. For another example, the first core network device may alternatively be an added core network element, for example, a TMF. For another example, the first core network device may be an AMF that has a specific type of feature. For example, the first core network device may be a network element that has an access and mobility management function in a 6G network. For another example, the first core network device may be a network element that has an access and mobility management function and that can process an AI task. It may be understood that, for descriptions of the UPF, the SMF, or the TMF, refer to descriptions of the AMF. Details are not described herein. For another example, the first core network device may be a specific AMF, a specific UPF, a specific SMF, a specific TMF, or the like. It should be noted that the first core network device may further include one or more of an NEF, an NRF, a PCF, UDM, an AF and an AUSF. Details are not described herein.
It may be understood that the first core network device may also be referred to as a control core network (C-CN) or a core network task anchor (CN TA). For ease of differentiation, the following uses an example in which the first core network device is represented by CN TA and the second core network device is represented by CN to describe a method, an implementation, or the like in embodiments of this application.
Correspondingly, the second node may include any one or more of the following: a second core network device, a second access network device, and UE. The second access network device includes a base station (for example, a gNB), a CU, a DU, a CU-CP, and a CU-UP. For example, the difference between the first core network device and the second core network device is that the first core network device can perform a step performed by the first node, or can implement a related function that can be implemented by the first node. To be specific, the first core network device is configured to perform related management, control, or the like on a task, and the second core network device is configured to execute a specific task based on the management or control of the first core network device. For example, if the first core network device is a TMF, the second core network device is an AMF, an SMF, a UPF, or the like. For another example, if the first core network device is an AMF, the second core network device may be an SMF, a UPF, or the like. For another example, when the first core network device is a specific AMF, the second core network device may be another AMF.
It should be noted that when the first node includes the first core network device, both the first access network device and the second access network device shown in this embodiment of this application may be centrally managed and controlled by the first core network device. Alternatively, it may be understood that when the first node includes the first core network device, the first access network device and the second access network device may not be specifically distinguished in this embodiment of this application.
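As a rough illustration of Implementation 1, the sketch below models a CN TA that centrally manages second nodes of different kinds (CN network elements, RAN network elements, and UEs). The class and field names here are illustrative assumptions for this sketch only, not terms defined in this application.

```python
from dataclasses import dataclass, field

@dataclass
class SecondNode:
    """A node that executes tasks under the first node's management."""
    node_id: str
    node_type: str  # illustrative labels: "CN", "RAN", or "UE"

@dataclass
class CnTaskAnchor:
    """First node of Implementation 1: a first core network device (CN TA)."""
    managed_nodes: list = field(default_factory=list)

    def register(self, node: SecondNode) -> None:
        # Under Implementation 1, both CN and RAN network elements (and UEs)
        # may be centrally managed and/or controlled by the CN TA.
        self.managed_nodes.append(node)

    def nodes_of_type(self, node_type: str) -> list:
        return [n for n in self.managed_nodes if n.node_type == node_type]

cn_ta = CnTaskAnchor()
cn_ta.register(SecondNode("amf-1", "CN"))
cn_ta.register(SecondNode("gnb-7", "RAN"))
cn_ta.register(SecondNode("ue-42", "UE"))
```

Under this model, a query such as `cn_ta.nodes_of_type("RAN")` returns the access network devices the CN TA can select as execution nodes.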
Implementation 2: The first node includes a first access network device.
For example, the first node may be deployed in the first access network device, the first node may be included in the first access network device, or the first node may include the first access network device. The relationship between the first access network device and the first node is not limited in this embodiment of this application.
It is assumed that both a CN network element and a RAN network element may be used as a second node. In this case, a CN computing power node, a RAN computing power node, and the like may all be centrally managed and/or controlled by the first access network device. For example, the first access network device may be the access network device in the operator network part shown in
For example, the first access network device may be one of a gNB (only an example), a CU, a DU, a CU-UP, or a CU-CP. For another example, the first access network device may be a network element that has a base station function and that has a feature of a specific type. For example, the first access network device may be a network element that can process an AI task and that has a base station function. For another example, the first access network device may be a specific base station, a specific CU, a specific CU-CP, or the like. It may be understood that the first access network device may also be referred to as a radio access network task anchor (RAN TA). For ease of differentiation, the following uses an example in which the first access network device is represented by RAN TA and the second access network device is represented by RAN to describe a method, an implementation, or the like in embodiments of this application.
Correspondingly, the second node may include any one or more of the following: a second core network device, a second access network device, and UE. The second access network device includes a base station (for example, a gNB), a CU, a DU, a CU-CP, and a CU-UP. It may be understood that the examples of the second access network device shown herein may not include the first access network device, to be specific, the second access network device may be another access network device other than the first access network device. The second core network device may include any one or more of an AMF, an SMF, a UPF, an NEF, an NRF, a PCF, UDM, an AF and an AUSF. It may be understood that, for related descriptions of the first access network device and the second access network device, refer to the descriptions of the first core network device and the second core network device. Details are not described herein. It may be understood that, for example, when the first access network device is a CU, the second node cannot be a CU but may be a DU, a CU-CP, a CU-UP, or the like. For another example, when the first access network device is a CU of a specific type, the second node may not be a CU of this specific type. For another example, when the first access network device is a gNB, the second access network device may be an eNB or the like. Examples are not described herein one by one.
It should be noted that when the first node includes the first access network device, both the first core network device and the second core network device shown in this embodiment of this application may be centrally managed and controlled by the first access network device. Alternatively, it may be understood that when the first node includes the first access network device, the first core network device and the second core network device may not be specifically distinguished in this embodiment of this application.
Implementation 3: The first node includes a first core network device and a first access network device.
That the first node includes the first core network device and the first access network device may be understood in two manners: 1. The first node is divided into two parts, for example, a first part and a second part, based on different functions of the first node. For example, the first part of the first node is deployed on the first core network device, and the second part of the first node is deployed on the first access network device. For a manner of splitting based on the functions of the first node, refer to the foregoing description of dividing the base station into a CU, a DU, and the like. 2. Both the first core network device and the first access network device are first nodes. In other words, both the first core network device and the first access network device may be configured to implement a function and/or a step performed by the first node shown in embodiments of this application.
Because both the first core network device and the first access network device may be first nodes, logically, both the first core network device and the first access network device can implement a step, a function, or the like performed by the first node shown in embodiments of this application. Therefore, the core network corresponding to the CN TA and the access network corresponding to the RAN TA may be considered as different domains. In this case, core network devices other than the CN TA are centrally managed by the CN TA, and access network devices other than the RAN TA are centrally managed by the RAN TA. In this way, core network domain autonomy and access network domain autonomy, namely, domain-based autonomy, can be implemented. Optionally, the UE may be managed by both the first core network device and the first access network device, by only the first core network device, or by only the first access network device. Optionally, because one CN may usually manage one or more RANs simultaneously, in this embodiment of this application, during domain-based autonomy, one CN TA may also simultaneously manage one or more RAN TAs. For example, the CN TA may separately allocate a task to the one or more RAN TAs.
To further understand the first node and the second node shown in embodiments of this application, for example, the first node and the second node may be understood as follows: The first node may be configured to manage and control a task (for example, manage and control a life cycle of a task, manage and control a resource of a task, decompose a task, or allocate a task), and the second node may be configured to execute a task. For example, the first node may be configured to manage and control a task resource and the like of the second node, to perform life cycle management and control on each task. For example, the second node may be configured to execute an inference task, an AI training task, or the like, to obtain an execution result. It may be understood that, for steps or functions that can be implemented by the first node and the second node, further refer to a method shown in the following.
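The division of labor described above — the first node manages and controls a task (life cycle, resources, decomposition, allocation) and the second node executes it — can be sketched as follows. The class and method names, and the dictionary-based task representation, are assumptions made for illustration only.

```python
# Illustrative sketch of the first-node / second-node division of labor:
# the first node decomposes and allocates a task; the second node executes it.
class FirstNode:
    def decompose(self, first_task: dict, n: int) -> list:
        # Decompose the first task into n second tasks (subtasks), each
        # carrying its own identifier and the parent task's identifier.
        return [{"task_id": f"{first_task['task_id']}-{i}",
                 "parent": first_task["task_id"],
                 "type": first_task["type"]} for i in range(n)]

class SecondNode:
    def execute(self, second_task: dict) -> dict:
        # Placeholder for executing, e.g., an inference or training subtask
        # and producing an execution result.
        return {"task_id": second_task["task_id"], "status": "done"}

first = FirstNode()
subtasks = first.decompose({"task_id": "t1", "type": "inference"}, 2)
results = [SecondNode().execute(t) for t in subtasks]
```

Carrying the parent identifier in each subtask mirrors the point above that including the identification information of the first task lets the second node learn which task was decomposed.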
Based on the first node and the second node shown above, as shown in
The first task is the AI-related task. In other words, the first node and the second node shown in this embodiment of this application may be configured to process the AI-related task. For example, the scope of AI may include any one or more of the following: language learning and processing, knowledge representation, intelligent search, inference, planning, machine learning, knowledge acquisition, combinatorial scheduling, sensing, pattern recognition, logic program design, soft computing, inexact and uncertain management, artificial life, neural networks, complex systems, genetic algorithms, human thinking manners, and the like. For example, the type of the first task shown in this embodiment of this application may include any one or more of the following: an inference task, a training task, a learning task, a computing task, or a sensing task. It may be understood that for descriptions of the first node, the first core network device, and the first access network device, refer to the foregoing descriptions of
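The task types listed above can be represented as a simple enumeration. This sketch is purely illustrative; the Python names are assumptions, not identifiers defined in this application.

```python
from enum import Enum, auto

# Illustrative enumeration of the first-task types listed above.
class TaskType(Enum):
    INFERENCE = auto()
    TRAINING = auto()
    LEARNING = auto()
    COMPUTING = auto()
    SENSING = auto()

# A first task may include any one or more of these types,
# so a set is a natural representation.
first_task_types = {TaskType.INFERENCE, TaskType.SENSING}
```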
For specific descriptions of determining the first task by the first node, there are the following three different descriptions in this embodiment of this application.
In a first possible implementation, that a first node determines a first task includes any one or more of the following: The first node determines life cycle information of the first task, the first node determines resource information of the first task, the first node determines allocation information of the first task, or the first node determines decomposition information of the first task. That is, the first node determines any one or more of the life cycle information, the resource information, the allocation information, or the decomposition information of the first task.
For example, the life cycle information of the first task includes any one or more of the following: addition of the first task, modification of the first task, querying of the first task, reporting of the first task, migration of an execution node of the first task, and migration of a management node of the first task. For example, the addition of the first task may be understood as adding a configuration parameter of the first task or the like, for example, adding a configuration parameter of a computing task, an inference task, a sensing task, or the like. The modification of the first task may be understood as modification of a configuration parameter of an existing first task, for example, modifying a configuration parameter of an existing computing task, inference task, sensing task, or the like. The querying of the first task may be understood as querying a status, a progress, or the like of the first task, for example, querying a status or progress of an existing computing task, inference task, or sensing task. The reporting of the first task may be understood as reporting an execution result of the first task. For example, the first node may report the execution result of the first task to a trigger node. The migration of the execution node of the first task may be understood as migrating a node that executes the first task to the second node. The migration of the management node of the first task may be understood as migrating the first node and/or migrating a forwarding node. For example, when the first access network device performs cell handover, the management node of the first task may be handed over from a source base station to a target base station. It should be noted that a life cycle of the first task may further include deletion of the first task, and details are not described herein.
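The life cycle operations described above (including the deletion noted at the end) can be sketched as an enumeration with a minimal dispatcher. All names are illustrative assumptions.

```python
from enum import Enum

# Illustrative sketch of the life cycle operations of the first task.
class LifecycleOp(Enum):
    ADD = "add"            # add configuration parameters of the first task
    MODIFY = "modify"      # modify configuration parameters of an existing task
    QUERY = "query"        # query the status or progress of the task
    REPORT = "report"      # report the execution result to the trigger node
    MIGRATE_EXEC = "migrate_execution_node"   # migrate the executing node
    MIGRATE_MGMT = "migrate_management_node"  # migrate the first/forwarding node
    DELETE = "delete"      # deletion is also part of the life cycle

def handle(op: LifecycleOp, task: dict) -> dict:
    # Minimal dispatcher: record the last life cycle operation applied.
    task = dict(task)
    task["last_op"] = op.value
    return task

t = handle(LifecycleOp.QUERY, {"task_id": "t1"})
```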
For example, that the first node determines resource information of the first task may also be understood as that the first node determines a resource configuration of the first task, or that the first node manages a resource of the first task. The resource information of the first task includes any one or more of the following: computing power of the first task, an algorithm of the first task, data of the first task, or a connection of the first task. Computing power generally refers to a computing capability, and the computing power of the first task may be the amount of computing power utilized for executing the first task. The algorithm is an algorithm model that needs to be used when the first task is executed, for example, a single-point AI model, a joint AI model, model reconfiguration, segmentation point reconfiguration of an AI model, or AI model compression. The data of the first task may include a data security configuration of the first task, a data compression manner configuration of the first task, a data collection manner configuration of the first task, and the like. The connection of the first task may include establishment of a dedicated bearer of the first task, a dedicated protocol stack of the first task, a quality of service (QoS) mechanism of the first task, and the like.
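The four resource dimensions described above (computing power, algorithm, data, and connection) can be sketched as a configuration structure. The field names and example values are assumptions for illustration only.

```python
from dataclasses import dataclass
from typing import Optional

# Illustrative sketch of the resource information of the first task.
@dataclass
class TaskResources:
    computing_power: float  # amount of compute utilized, in arbitrary units
    algorithm: str          # model to use, e.g. a single-point or joint AI model
    data_config: Optional[dict] = None        # security / compression / collection
    connection_config: Optional[dict] = None  # dedicated bearer, protocol stack, QoS

res = TaskResources(
    computing_power=10.0,
    algorithm="joint AI model",
    data_config={"compression": "lossless"},
    connection_config={"qos": "low-latency"},
)
```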
For example, the allocation information of the first task includes any one or more of the following: allocation of an execution node of the first task, allocation of an execution type of the execution node of the first task, and resource allocation of the execution node of the first task. The decomposition information of the first task includes any one or more of the following: a quantity of tasks obtained by decomposing the first task, computing power decomposition of the first task, and the like. A difference between allocation of the first task and decomposition of the first task lies in that the allocation of the first task is mainly allocation of a related resource of the first task, for example, allocation of the execution node of the first task, allocation of the execution type of the execution node of the first task, and allocation of a resource of the execution node of the first task. The decomposition of the first task is mainly decomposition of the first task itself, or decomposition of computing power of the first task. For example, the decomposition of the first task may include a quantity of tasks obtained by decomposing the first task and/or decomposition of the computing power of the first task. As shown in
In a second possible implementation, that a first node determines a first task includes the following step: The first node determines at least two second tasks of the first task, where the at least two second tasks correspond to at least two second nodes, and execution types of the at least two second nodes are coordinated execution, or execution types of the at least two second nodes are single-point execution.
For example, a quantity of second tasks is the same as a quantity of second nodes. For another example, a quantity of second tasks is greater than a quantity of second nodes. In other words, one second task or a plurality of second tasks may be allocated to one second node. For another example, a quantity of second tasks is less than a quantity of second nodes. In this case, the second task may be re-decomposed, and then the second node executes the re-decomposed second task.
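The case in which the quantity of second tasks exceeds the quantity of second nodes (one node receiving a plurality of tasks) can be sketched as follows. The round-robin policy here is a hypothetical choice for illustration only; the actual allocation policy is not restricted by this application.

```python
def allocate(tasks, nodes):
    """Allocate second tasks to second nodes; one node may receive several tasks.

    Hypothetical round-robin policy, for illustration only.
    """
    assignment = {node: [] for node in nodes}
    for i, task in enumerate(tasks):
        # Cycle through the nodes so every task is assigned to exactly one node.
        assignment[nodes[i % len(nodes)]].append(task)
    return assignment


# Three second tasks, two second nodes: the node "gNB" receives two tasks.
result = allocate(["task-1", "task-2", "task-3"], ["gNB", "UE"])
```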
For example, the single-point execution may also be referred to as single-side execution. For example, the second node independently executes the second task. From a perspective of a task, it may also be understood as that the second task is a single-side task. For example, the first node decomposes the first task into one or more second tasks, the one or more second tasks are respectively executed by corresponding second nodes, and each of the second nodes independently executes the second task. For example, a single-side task may be independently executed on a UE side, or may be independently executed on a gNB, a CU, a DU, a CU-CP, a CU-UP, or the like. An execution type of an adaptive modulation and coding (AMC) algorithm shown in
For example, the coordinated execution may be understood as that the second task needs to be executed by two or more second nodes in a coordinated manner. From a perspective of a task, it may also be understood as that the second task is a coordinated multipoint task (where the first task may also be referred to as a coordinated task). The coordinated execution includes coordinated two-point execution, coordinated three-point execution, and the like, which are not listed one by one herein. For example, a task that needs to be coordinately executed by a plurality of second nodes may include any one or more of coordinated computing (which may also be referred to as collaborative computing), coordinated inference, coordinated training, or coordinated sensing. The coordinated two-point in
For a network for AI (NET4AI) scenario (to be specific, a scenario in which an AI task is processed by using a network), a 5G network is used as an example. For different RAN architectures (for example, a gNB, a CU-DU separation architecture, and a CU-CP and CU-UP separation architecture), the second node may include any one or more of UE, a gNB, a CU, a DU, a CU-CP, or a CU-UP. Therefore, according to different execution types of the second node, a plurality of the following manners may exist:
It may be understood that the foregoing coordinated execution is shown by using coordinated two-point execution as an example, and descriptions of coordinated three-point execution or coordinated four-point execution are not listed one by one herein. The coordinated execution shown in embodiments of this application may also be understood as joint execution, collaborative execution, or the like. This is not limited in embodiments of this application.
It should be noted that network elements in a core network may also coordinately execute the second task, or network elements in a core network and network elements in an access network may also coordinately execute the second task. Examples are not described one by one herein.
In a third possible implementation, that a first node determines a first task includes the following steps: The first node receives a trigger message from a trigger node, where the trigger message is used to trigger the first node to process the first task; and the first node determines the first task based on the trigger message. For example, the trigger message may include indication information of a task allocated to the first node. The indication information may enable the first node to learn of a task (for example, including the first task) allocated by the trigger node to the first node. It may be understood that the trigger node may trigger one or more first tasks. A quantity of first tasks is not limited in this embodiment of this application. For example, the trigger node includes a network artificial intelligence management and orchestration (NAMO) network element (which may also be referred to as a NAMO device or the like) or an operation and maintenance (OAM) network element. For another example, the trigger node includes any one or more of UE, a gNB, a CU, a DU, a CU-CP, a CU-UP, a RAN TA, an AMF, a UPF, an SMF, or a CN TA. Certainly, the first node may alternatively independently determine that the first task needs to be processed. For another example, for the NET4AI scenario, the trigger node may be a third-party application, to meet a third-party requirement by using a task resource or the like in a network. Optionally, the first node may send capability information of the first node to the trigger node, where the capability information of the first node indicates a capability of the first node to process an AI task. For example, the capability information of the first node may be a capability of the first node, or may be a sum of capabilities of the first node and an execution node managed and controlled by the first node.
It may be understood that the capabilities shown herein may include resource capabilities related to an AI task, for example, a computing capability and an algorithm capability, which are not listed one by one herein. Optionally, the capability information of the first node may include a capability of each execution node managed by the first node, or may include a sum of capabilities of execution nodes. This is not limited in this embodiment of this application.
The first node may decompose the first task to obtain at least one second task, and then allocate the at least one second task to at least one second node. In other words, the first node may both decompose the first task and allocate at least one second task obtained through decomposition to an execution node. For example, the first node may decompose the first task based on any one or more of computing power, an algorithm, data, or a connection. Optionally, when the first node decomposes the first task to obtain one second task, it indicates that the second task is the first task. In other words, the first node may alternatively not decompose the first task, for example, directly allocate the first task to the second node and/or a forwarding node. Optionally, the first node may further separately allocate at least two first tasks to one second node or at least two second nodes. That is, when receiving two or more tasks, the first node may alternatively not decompose the tasks, but directly allocate the tasks to corresponding second nodes. Optionally, the first node may decompose the first task to obtain two or more second tasks. The first message includes the identification information of the second task, so that the second node can learn which task is to be executed by the second node. Optionally, the first message may further include identification information of the first task, and the identification information of the first task is included, so that the second node can learn that the second task is obtained by decomposing the first task. The first message includes the configuration information of the second task, so that the second node can execute the second task based on the configuration information of the second task.
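The contents of the first message described above can be sketched as a simple structure. The field names here are illustrative assumptions; the identification information of the first task is optional, consistent with the description above.

```python
def build_first_message(second_task_id, second_task_config, first_task_id=None):
    """Illustrative contents of the first message (hypothetical field names).

    Including the identification of the first task lets the second node learn
    that the second task was obtained by decomposing the first task.
    """
    message = {
        "second_task_id": second_task_id,          # which task the second node is to execute
        "second_task_config": second_task_config,  # configuration used to execute the second task
    }
    if first_task_id is not None:
        message["first_task_id"] = first_task_id   # optional: parent task identification
    return message


msg = build_first_message("task-A-1", {"computing_power": 1.0}, first_task_id="task-A")
```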
For example, the configuration information of the second task may include any one or more of the following: configuration of any one or more of computing power, an algorithm, data, or a connection of the second task; allocation of an execution node; resource allocation of the execution node; a decomposition status of the first task; or the like. It may be understood that the execution node of the second task shown herein is the second node. Optionally, the first message may include a task allocation status of a corresponding second node, or include a task allocation status of each second node. In other words, the first node may send the first message in a multicast (or broadcast) manner, or may send the first message in a unicast manner.
In a possible implementation, the first node may send the first message to the second node. Correspondingly, the second node receives the first message.
As shown in
For a figure at a left side of
It should be noted that, the first node may determine, based on some constraint conditions, the second node that executes the first task. For example, the first node may determine the second node based on conditions such as a network element type and/or an area. It may be understood that the descriptions of the constraint conditions are also applicable to the first node and the forwarding node, and the forwarding node and the second node that are shown below.
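The constraint-based determination of the second node described above can be sketched as a simple filter. The node attributes and the two constraints (network element type and area) are illustrative assumptions; other constraint conditions are equally possible.

```python
def select_execution_nodes(candidates, allowed_types=None, area=None):
    """Filter candidate second nodes by illustrative constraint conditions.

    Both constraints are optional; a constraint of None is not applied.
    """
    selected = []
    for node in candidates:
        if allowed_types is not None and node["type"] not in allowed_types:
            continue  # fails the network element type constraint
        if area is not None and node["area"] != area:
            continue  # fails the area constraint
        selected.append(node["name"])
    return selected


nodes = [
    {"name": "gNB-1", "type": "gNB", "area": "cell-1"},
    {"name": "UE-1", "type": "UE", "area": "cell-1"},
    {"name": "gNB-2", "type": "gNB", "area": "cell-2"},
]
chosen = select_execution_nodes(nodes, allowed_types={"gNB"}, area="cell-1")
```

As noted above, the same kind of filtering applies equally between the first node and the forwarding node, and between the forwarding node and the second node.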
In another possible implementation, the first node may send the first message to the second node by using the forwarding node. Correspondingly, the second node receives the first message.
In this embodiment of this application, the forwarding node may be configured to implement a step performed by the first node, and may also be configured to implement a step performed by the second node. For another example, the forwarding node may have both a task execution function and a task management and control function. For another example, the forwarding node may be configured to forward a message, or may be configured to perform secondary decomposition and/or allocation on a task, or the like. For example, the forwarding node receives the first message, and obtains the second task that needs to be executed by the forwarding node. However, because the forwarding node cannot complete the second task, the forwarding node may perform secondary decomposition on the second task to obtain a subtask of the second task, to allocate the subtask of the second task to a corresponding second node. That is, the forwarding node may re-encapsulate the first message received by the forwarding node, to obtain the encapsulated first message, and send the encapsulated first message to the second node. For example, the encapsulated first message may include one or more of identification information of a subtask of the second task, identification information of the second task, or configuration information of the subtask of the second task. It may be understood that the reason why the forwarding node cannot complete the second task may be that the forwarding node cannot complete the second task on time, that computing power of the forwarding node is insufficient to complete the second task, or the like. The reason why the forwarding node cannot complete the second task is not limited in this embodiment of this application. For example, the first node may be a CN TA, the forwarding node may be a RAN TA, and the second node may include a gNB, a CU, a DU, a CU-CP, a CU-UP, UE, or the like.
For another example, the first node may be a DU, the forwarding node may be a CU, and the second node may include UE. For another example, the first node may be a CN TA, the forwarding node may be a RAN, and the second node may be UE, a gNB, a CU, a DU, or the like. Herein, the second node is merely an example. For another example, the first node is a CN TA, and the forwarding node may also be a CN TA (a CN TA different from the first node). For another example, the first node is a RAN TA, and the forwarding node may also be a RAN TA (a RAN TA different from the first node).
It should be noted that, if a message sent by the first node to the forwarding node and a message sent by the forwarding node to the second node are referred to as different messages, that the first node sends the first message to the second node by using the forwarding node may also be understood as the following: The first node sends the first message to the forwarding node, where the first message includes the identification information and/or the configuration information of the second task. Because the forwarding node cannot complete the second task on time, the forwarding node may decompose the second task to obtain a subtask of the second task. For example, the forwarding node sends a third message to the second node, where the third message includes identification information and/or configuration information of the subtask of the second task. Certainly, the forwarding node may alternatively not decompose the second task, but directly allocate the second task to the second node.
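The two forwarding-node behaviors just described (forwarding the second task as-is, or performing secondary decomposition and carrying the subtasks in a third message) can be sketched as follows. The message and field names are illustrative assumptions; the two-way split into subtasks is arbitrary.

```python
def forward(first_message, can_complete_on_time):
    """Sketch of forwarding-node behavior on receiving the first message.

    If the forwarding node can complete the second task on time, it directly
    allocates the second task; otherwise it performs secondary decomposition
    and builds a third message carrying the subtasks of the second task.
    """
    second_task_id = first_message["second_task_id"]
    if can_complete_on_time:
        # Directly allocate the second task to the second node, unchanged.
        return {"second_task_id": second_task_id}
    # Secondary decomposition: derive subtasks of the second task
    # (two subtasks here, purely for illustration).
    subtasks = [f"{second_task_id}-sub{i}" for i in (1, 2)]
    return {"second_task_id": second_task_id, "subtask_ids": subtasks}


third_message = forward({"second_task_id": "task-A-1"}, can_complete_on_time=False)
```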
As shown in
For example, the second node may execute the second task based on the configuration information of the second task. If the configuration information of the second task includes computing power of the second task, the second node may execute the second task based on a value of the computing power of the second task. For another example, if the configuration information of the second task includes an algorithm of the second task, the second node may execute the second task based on the algorithm of the second task. Details are not listed one by one herein.
In a possible implementation, the method shown in
The second node sends capability information to the first node. Correspondingly, the first node receives the capability information from the second node, where the capability information indicates a capability of the second node to execute a task. That a first node determines a first task includes the following: The first node determines the first task based on the capability information. In this embodiment of this application, capability negotiation may be performed between the first node and the second node, that is, the first node may allocate the second task based on the capability of the second node, thereby improving efficiency of allocating the second task by the first node. Optionally, the forwarding node may alternatively send capability information to the first node, so that the first node determines the first task, or allocates a task, a resource, or the like to the forwarding node, based on the capability information of the forwarding node. For example, the capability information sent by the forwarding node to the first node may indicate a capability of the forwarding node, or may indicate a sum of capabilities of the forwarding node and an execution node managed and controlled by the forwarding node. It may be understood that the capabilities shown herein may include resource capabilities related to an AI task, for example, a computing capability and an algorithm capability, which are not listed one by one herein. Optionally, the capability information of the forwarding node may include a capability of each execution node managed and controlled by the forwarding node, or may include a sum of capabilities of execution nodes. This is not limited in this embodiment of this application.
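The two forms of capability reporting described above (per-node capabilities versus a sum over the reporting node and the execution nodes it manages) can be sketched as follows. Representing a capability as a single number is an illustrative simplification; in practice the capability information may cover computing, algorithm, and other resource capabilities.

```python
def aggregate_capability(own_capability, managed_capabilities, per_node=False):
    """Capability information reported upward by a node (e.g. a forwarding node).

    Either the per-node breakdown is reported, or the sum over the reporting
    node and the execution nodes it manages and controls.
    """
    if per_node:
        # Report each managed execution node's capability separately.
        return {"own": own_capability, "managed": managed_capabilities}
    # Report the sum of capabilities.
    return own_capability + sum(managed_capabilities)


total = aggregate_capability(4.0, [1.0, 2.5])
breakdown = aggregate_capability(4.0, [1.0, 2.5], per_node=True)
```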
In a possible implementation, the method shown in
The first node sends a resource configuration request to the second node, where the resource configuration request is used to request to configure a resource for the second node. Correspondingly, after receiving the resource configuration request, the second node may accept the resource configuration request, or may reject the resource configuration request. For example, after the first node determines a life cycle of the first task, the first node may send the resource configuration request to the second node, where the resource configuration request is used to configure computing power, an algorithm, data, a connection, or the like for the second node. Optionally, the first node may alternatively send the resource configuration request to the forwarding node.
In a possible implementation, the method shown in
The second node sends a second message to the first node, where the second message indicates an execution result of the second task. Correspondingly, the first node receives the second message. For example, the execution result may include any one or more of an inference result, a computation result, a sensing result, a training result, or a learning result. Optionally, the second message may be an execution result of executing the second task by one second node. Optionally, the second message may be execution results of respectively executing the second task by a plurality of second nodes. To be specific, the execution results of the second task may be reported to the first node by second nodes respectively corresponding to the execution results, or may be reported to the first node by a specified second node. The specified second node may be specified by the first node, specified by the trigger node, specified in a protocol, or the like. This is not limited in this embodiment of this application. Alternatively, the execution results of the second task may be reported to another specified node, and that node may apply the execution results, or may collectively report the execution results to the first node. Optionally, the second node may alternatively send the second message to the forwarding node. Correspondingly, the forwarding node receives the second message. Then, the forwarding node sends the second message to the first node. For example, the forwarding node may combine execution results of all second nodes, and then send the combined execution result to the first node. The first node combines execution results reported by all forwarding nodes, to obtain the execution result of the first task. It may be understood that descriptions of the forwarding node and the second message are also applicable below.
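The combination step described above can be sketched as follows, assuming each second message carries a task identifier and a partial result (hypothetical field names). Simple ordered concatenation stands in for whatever task-specific combination (inference, computation, sensing, training) actually applies.

```python
def combine_results(second_messages):
    """Sketch of a node combining execution results of second tasks into the
    execution result of the first task.

    Results are ordered by the second task identifier so the combined result
    is deterministic regardless of reporting order.
    """
    combined = []
    for msg in sorted(second_messages, key=lambda m: m["second_task_id"]):
        combined.append(msg["result"])
    return combined


# Two second messages arriving out of order still combine deterministically.
first_task_result = combine_results([
    {"second_task_id": "task-A-2", "result": "partial-2"},
    {"second_task_id": "task-A-1", "result": "partial-1"},
])
```

The same sketch applies whether the combination is done by the first node directly or by a forwarding node before reporting upward.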
It should be noted that, the second node may alternatively not send the second message to the first node. For example, the second node may locally apply the execution result.
In a possible implementation, the method shown in
The first node receives at least two second messages, and combines execution results of the first task based on the at least two second messages, where the second message indicates the execution result of the second task. In this embodiment of this application, the first node may combine the execution results corresponding to all the second nodes, to obtain the execution result of the first task. It may be understood that combination of execution results of the first task shown in this embodiment of this application corresponds to decomposition of the first task. For example, as shown in
For example, as shown in (1) in
It should be noted that the first node may alternatively not decompose or allocate the first task. For example, the first node also has a computing function. If a computing capability of the first node can ensure that the task is completed within a specified QoS target, the first node may perform computing by itself, and does not need to deliver a task (for example, the first task) to the second node for execution. It should be noted that both the first node and the second node shown in this embodiment of this application may be nodes configured to implement steps or functions of a task anchor. For example, both the first node and the second node may be CN TAs or RAN TAs. Alternatively, the first node is a CN TA, and the second node is a RAN TA; or the first node is a RAN TA, and the second node is a CN TA.
It may be understood that the method shown in
As shown in
To further describe a difference between a task anchor (for example, the RAN TA and/or the CN TA) and a task execution point (for example, a RAN and/or a CN) in embodiments of this application, the following provides descriptions from a perspective of functions.
In this embodiment of this application, the RAN TA and/or the CN TA may manage a related resource (for example, computing power, an algorithm, data, a connection, sensing, and scheduling) at a granularity of a computing task, an AI task, a sensing task, or the like, to implement life cycle management and control (for example, by using management signaling and/or control signaling) on a related task. In addition, an AI for network optimization (AI4NET) function may be further implemented, and/or a network for AI (NET4AI) function may be further implemented; and these functions may be exposed externally.
For example, as shown in
For example, the RAN and/or the CN may implement any one or more of the following functions:
In this embodiment of this application, the first core network device and/or the first access network device manages and controls an AI task. In this way, related management/control of the AI task is standardized, thereby effectively improving AI task management efficiency. In addition, the first core network device and/or the first access network device may more flexibly allocate a task by decomposing and/or allocating the first task, so that the second node can execute a task that matches a capability of the second node. This further improves AI task management efficiency, and further improves task execution efficiency of the second node.
According to the method provided in this embodiment of this application, a related processing procedure of the AI task is standardized. Compared with a method for managing and controlling a task by using program code, a case in which program code needs to be updated in real time is effectively avoided, thereby effectively improving AI task management efficiency.
The following describes the task processing method provided in this embodiment of this application by using an example in which the first node includes a CN TA and a RAN TA. For example, both the CN TA and the RAN TA may implement the method shown in
The following describes in detail an interface and a protocol stack between different network elements in embodiments of this application.
Optionally, as shown in
Optionally, as shown in
It may be understood that the TASK protocol layer shown in
Optionally, as shown in
Optionally, as shown in
Optionally, as shown in
Optionally, as shown in
It may be understood that, in the embodiments shown in
Optionally, a protocol layer for interaction between the CU and the UE may be located above the RRC layer, or may be located in the RRC layer. For example, refer to
Optionally, as shown in (1) in
Optionally, as shown in (1) in
Optionally, as shown in
Optionally, as shown in (1) in
It may be understood that for descriptions of a protocol layer for interaction between gNBs, a protocol layer for interaction between a CU and a DU, and a protocol layer for interaction between a gNB and an AMF, refer to related descriptions in
It may be understood that, for a protocol layer for interaction between a first core network device and UE, refer to Case 1. Details are not described herein.
Optionally, as shown in
Optionally, as shown in
Optionally, as shown in
It should be noted that the descriptions of interaction between the first core network device and the UE and interaction between the first access network device and the UE in this embodiment of this application are also applicable to interaction between a second core network device and the UE and interaction between a second access network device and the UE. The descriptions of interaction between the gNB and the AMF are also applicable to interaction between other network elements with a similar protocol stack, and the descriptions of interaction between the gNB and the UE are also applicable to interaction between other network elements with a similar protocol stack. Details are not listed one by one in this embodiment of this application. The protocol stacks between the network elements shown above are merely examples. Any implementation obtained by combining two or more of the implementations shown above also falls within the protection scope of this application.
Optionally, the RAN TA may itself trigger processing of the first task. In this case, the RAN TA may not generate a trigger message, but directly perform step 1402. Optionally, the RAN TA may receive a trigger message sent by a trigger node. For a description of the trigger node, refer to the foregoing description of step 301. Details are not described herein.
Optionally, the RAN TA decomposes the first task to obtain a second task, where an execution node of the second task may be the RAN TA. Optionally, the execution node of the second task may alternatively be another access network device, for example, a gNB, a CU, a DU, a CU-CP, a CU-UP, or a RAN TA. Optionally, the execution node of the second task may be UE. Optionally, the execution node of the second task may be a CN TA or the like. It may be understood that for descriptions of the execution node, refer to the foregoing descriptions of the second node and/or the forwarding node. Details are not described herein. It may be understood that the trigger node in step 1401 may be the same as or different from the second node in step 1402. This is not limited in this embodiment of this application.
Optionally, execution nodes may respectively report respective execution results of the execution nodes to the RAN TA. Optionally, the execution nodes may further report the execution results together to another node, and then that node reports the execution results together to the RAN TA. For example, that node may be indicated to the execution nodes by using the first message. For another example, the trigger node specifies that node. For another example, that node may be preconfigured, for example, preconfigured in a protocol, preconfigured by an OAM, or the like. Optionally, the RAN TA may send a combined execution result to the trigger node, or the RAN TA may directly apply the execution result locally, or the like. This is not limited in this embodiment of this application. It may be understood that after obtaining the execution result, the execution node may alternatively not report the execution result, but directly apply the execution result locally. This is not limited in this embodiment of this application.
In this embodiment of this application, the RAN TA decomposes and/or allocates the AI task, and the like, so that an interface between different network elements in an access network can be effectively used.
It may be understood that, during domain-based autonomy, one CN TA may alternatively simultaneously manage one or more RAN TAs. For example, one CN TA may separately allocate tasks to the one or more RAN TAs. Optionally, the one or more RAN TAs may perform secondary decomposition on the task, and the like.
It may be understood that for specific descriptions of
In this embodiment of this application, the CN TA decomposes and/or allocates the AI task, and the like. Because the CN TA may manage and control one or more access networks, the CN TA may manage and control a relatively large quantity of execution nodes and/or forwarding nodes.
The foregoing mainly describes the task processing method by using an example in which the first node includes the first core network device and/or the first access network device. The following describes the method provided in embodiments of this application by using an example in which the first node includes a NAMO and/or an OAM.
The NAMO shown in embodiments of this application may be understood as a central AI service flow orchestrator, and may be configured to orchestrate and decompose a service, and the like. The NAMO may alternatively be understood as a task dispatcher (which may also be referred to as a pod manager, a task allocator, or the like). The second node may be understood as a task execution node (which may also be referred to as a pod, a task executor, or the like). That is, the dispatcher delivers tasks to the second nodes, and each second node executes its task, to implement distributed parallel computing. However, in this embodiment of this application, when allocating a task, the NAMO may send a first message to a CN TA and/or a RAN TA, and then the CN TA and/or the RAN TA serves as a second-level management node to manage network elements and UE in a domain of the CN TA and/or the RAN TA. In other words, the network elements and UE in each domain of the CN TA and/or the RAN TA are invisible to the NAMO. That is, the NAMO may be understood as the first node shown above, the RAN TA and/or the CN TA may be understood as the forwarding node shown above, and the network element in the domain corresponding to the RAN TA (which may also be understood as a network element other than the RAN TA in the RAN) and/or the network element in the domain corresponding to the CN TA (which may also be understood as a network element other than the CN TA in the CN) may be understood as the second node shown above. In addition, the NAMO may be understood as a first-level task anchor, and the RAN TA and/or the CN TA may be understood as a second-level task anchor or a first-level task execution point.
If a relationship between a pod and a pod manager is used for description, the NAMO serves as a first-level pod manager to control the RAN TA and/or the CN TA that serves as a pod. The RAN TA and/or the CN TA serves as a second-level pod manager to control another network element or UE in the RAN/CN that serves as a pod, performs secondary decomposition on a task allocated by the NAMO, and delivers the decomposed task to the pod nodes managed by the RAN TA and/or the CN TA. Each pod node executes the task delivered by the RAN TA and/or the CN TA, and reports the execution result to the second-level pod manager for combination. It may be understood that, for specific descriptions of the CN TA and/or the RAN TA, refer to the foregoing descriptions. Details are not described herein. For example, the RAN TA may be a gNB or a CU in a 5G system, a network element that is newly defined in a 6G system and that has a base station function, or the like. For another example, the CN TA may be an AMF in a 5G system, a newly defined network element in a 6G system, or the like. According to embodiments of this application, the RAN TA and/or the CN TA may more flexibly and dynamically select an execution node based on a service target and QoS. It may be understood that the descriptions of the NAMO herein are also applicable to the OAM. Details are not described in the following.
For example, as shown in
The following uses an example shown in
As shown in
For example, the xNB1 decomposes the job to obtain one or more subtasks (represented by, for example, task). As shown in
It may be understood that IDs of the two tasks obtained through decomposition by the xNB1 may be the same or may be different. When the IDs of the two tasks are the same, the xNB1 may distinguish between the two tasks based on IDs of task execution nodes.
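The disambiguation described above amounts to keying subtask bookkeeping on the pair of task ID and execution-node ID rather than on the task ID alone. A minimal sketch, with assumed names (`record_subtask`, `xNB2`, `UE1`):

```python
# Illustrative: when IDs of two decomposed tasks are the same, the pair
# (task_id, node_id) still distinguishes them. All names are hypothetical.
subtask_table = {}

def record_subtask(task_id, node_id, config):
    # The composite key stays unique even when task_id repeats.
    subtask_table[(task_id, node_id)] = config

record_subtask("task1", "xNB2", {"role": "compute"})
record_subtask("task1", "UE1", {"role": "sense"})  # same task ID, different node

print(subtask_table[("task1", "UE1")])
```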
It may be understood that step 1507 shown in
It may be understood that for specific descriptions of
It may be understood that
In this embodiment of this application, the first task may be any one of a collaborative computing task, a collaborative inference task, a collaborative training task, a collaborative sensing task, or the like. A type of the first task is not limited in this embodiment of this application. For example, sensing, as a new application of 5G and 6G, gives a base station more functions and value. For example, the base station may sense information such as an environment, a channel, and a building around the base station through terahertz imaging. Alternatively, the base station may have a function similar to a radar, and can track information (such as a location, an angle, a distance, and a moving speed) of a moving object/target in real time through radar wave reflection. However, because a sensing capability of a single base station is limited, sometimes sensing information of a larger area needs to be obtained. In this case, a task needs to be decomposed to more base stations for sensing. In addition, because a geographical location of the base station is fixed, if a large quantity of UEs can also participate in the task for joint sensing, sensing information can be enriched and sensing accuracy can be improved. For the collaborative sensing task, the xNB1 may allocate the job to more xNBs and more UEs. For the collaborative training task, when receiving execution results of the xNB2 and the UE, the xNB1 may update a model based on the two execution results, to report a final result to the NAMO. For different tasks, steps in which the xNB1 combines execution results may be different, and are not listed one by one in this embodiment of this application.
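For the collaborative training case, one plausible combination step is parameter averaging over the reported execution results (a federated-averaging-style update). The sketch below is an assumption for illustration only: the source does not specify the combination algorithm, and the parameter values and function name are hypothetical.

```python
# Hedged sketch: one way the xNB1 might combine training results reported
# by the xNB2 and the UE before reporting a final model to the NAMO.
# Equal-weight averaging is an assumption, not the method from the source.
def combine_training_results(results):
    """results: list of parameter vectors, one per execution node."""
    n = len(results)
    dim = len(results[0])
    return [sum(r[i] for r in results) / n for i in range(dim)]

xnb2_params = [0.2, 0.4, 0.6]  # execution result reported by the xNB2
ue_params = [0.4, 0.6, 0.8]    # execution result reported by the UE

updated_model = combine_training_results([xnb2_params, ue_params])
print(updated_model)  # averaged parameters, reported upward as the final result
```

For a collaborative sensing task the same hook could instead fuse per-node sensing reports (for example, merging coverage maps), which is why the paragraph above notes that the combination step differs per task type.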
Compared with a case in which the NAMO manages and controls all network elements (which may also be understood as that the NAMO manages all nodes, which belongs to a flat management manner), in this embodiment of this application, the NAMO manages first-level pods (which belongs to a vertical management manner), and these first-level pods serve as pod managers and manage pod nodes (task nodes shown in
From a method procedure shown in
In this embodiment of this application, a multi-level management manner (as shown in
Usually, an OAM can directly manage the CN and/or the RAN (that is, a network element in the CN and/or a network element in the RAN) through an interface-north (Itf-N) or an interface-south (Itf-S). Therefore, in this embodiment of this application, the interface may alternatively be reused, so that the OAM can implement the steps or functions performed by the first node shown above. To be specific, the OAM shown in this embodiment of this application may implement a task management and orchestration function similar to those of the NAMO, and use network elements and interfaces in 3GPP to form a standardized network, to support life cycle management and execution of a task, and the like. In other words, in this embodiment of this application, the OAM needs to manage, schedule, and allocate resources such as computing power, an algorithm, data, a connection, and sensing, to ensure smooth execution of a task (for example, meeting a task QoS target). In addition, the OAM includes a management interface, where the management interface may support task management and/or configuration, and the like.
As shown in
For example, as shown in
In this embodiment of this application, the management plane OAM network element can implement the steps or functions performed by the first node shown above, and can further schedule a plurality of resources (such as computing power, an algorithm, data, a connection, and sensing), orchestrate a service stream, decompose a task, allocate a task, and combine execution results, thereby ensuring smooth execution of an AI task.
A difference between
It should be noted that the first node shown above may alternatively be a UE, a UE that has a specific type of capability, or the like, and the second node may be a UE, an access network device, or a core network device. For a description in which the first node is the UE, refer to the foregoing description. Details are not described one by one herein.
A communication apparatus provided in an embodiment of this application is described below.
In this application, the communication apparatus is divided into function modules based on the foregoing method embodiments. For example, each function module may be obtained through division based on a corresponding function, or two or more functions may be integrated into one processing module. The integrated module may be implemented in a form of hardware, or may be implemented in a form of a software functional module. It should be noted that, in this application, module division is an example, and is merely logical function division. During actual implementation, another division manner may be used. The following describes in detail communication apparatuses in embodiments of this application with reference to
In some embodiments of this application, the communication apparatus may be the first node, a chip in the first node, or the like shown above. To be specific, the communication apparatus may be configured to perform steps, functions, or the like performed by the first node in the foregoing method embodiments.
The processing unit 1701 is configured to determine a first task, where the communication apparatus includes a first core network device and/or a first access network device, and the first task is a task related to artificial intelligence (AI).
The transceiver unit 1702 is configured to send a first message, where the first message includes identification information of a second task and/or configuration information of the second task, the second task is a subtask of the first task, the second task is executed by a second node, and the second node is one of determined nodes configured to execute the first task.
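The content of the first message described above can be sketched as a simple data structure. The field names below are assumptions for illustration: the source defines only which information elements the message carries (identification and/or configuration information of the second task, and optionally identification information of the first task), not any concrete encoding.

```python
# Minimal sketch (assumed field names) of the first message sent by the
# first node. Any field may be absent, matching the "and/or" in the claims.
from dataclasses import dataclass
from typing import Optional

@dataclass
class FirstMessage:
    second_task_id: Optional[str] = None        # identification information of the second task
    second_task_config: Optional[dict] = None   # configuration information of the second task
    first_task_id: Optional[str] = None         # optional parent (first) task ID

msg = FirstMessage(
    second_task_id="task1-sub1",
    second_task_config={"qos": "low-latency"},
    first_task_id="task1",  # lets the second node link the subtask to its parent
)
print(msg.second_task_id, msg.first_task_id)
```

Carrying `first_task_id` alongside the subtask fields reflects the point made earlier: the second node can learn that the second task was obtained by decomposing the first task.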
In this embodiment of this application, for descriptions of the first task, the first message, the first node, the second node, and the like, refer to the descriptions in the foregoing method embodiments. Details are not described herein.
It may be understood that specific descriptions of the transceiver unit and the processing unit described in this embodiment of this application are merely examples. For specific functions, steps, or the like of the transceiver unit and the processing unit, refer to the foregoing method embodiments. Details are not described herein. For example, the transceiver unit 1702 may be further configured to perform the receiving step and/or the sending step shown in
The transceiver unit 1702 is configured to receive a first message from a first node, where the first message includes identification information of a second task and/or configuration information of the second task, the second task is a subtask of the first task, the first task is a task related to artificial intelligence (AI), and the first node includes a first core network device and/or a first access network device.
The processing unit 1701 is configured to execute the second task.
In this embodiment of this application, for descriptions of the first task, the first message, the first node, the second node, and the like, refer to the descriptions in the foregoing method embodiments. Details are not described herein.
It may be understood that specific descriptions of the transceiver unit and the processing unit described in this embodiment of this application are merely examples. For specific functions, steps, or the like of the transceiver unit and the processing unit, refer to the foregoing method embodiments. Details are not described herein. For example, the processing unit 1701 may be further configured to perform the execution step shown in
For example, the transceiver unit 1702 is configured to receive a first message from a first node.
The processing unit 1701 is configured to decompose and/or allocate a second task based on the first message. It may be understood that the steps performed by the forwarding node are not listed one by one herein.
The foregoing describes the nodes in embodiments of this application, and the following describes possible product forms of the nodes. It should be understood that any product that has the functions of the nodes described in
In a possible implementation, in the communication apparatus shown in
As shown in
For example, when the communication apparatus is configured to implement the steps, the methods, or the functions performed by the first node, the processor 1820 is configured to determine a first task, where the communication apparatus includes a first core network device and/or a first access network device, and the first task is a task related to artificial intelligence (AI). The transceiver 1810 is configured to send a first message, where the first message includes identification information of a second task and/or configuration information of the second task, the second task is a subtask of the first task, the second task is executed by a second node, and the second node is one of determined nodes configured to execute the first task.
For example, when the communication apparatus is configured to implement the steps, the methods, or the functions performed by the second node, the transceiver 1810 is configured to receive a first message from a first node, where the first message includes identification information of a second task and/or configuration information of the second task, the second task is a subtask of the first task, the first task is a task related to artificial intelligence (AI), and the first node includes a first core network device and/or a first access network device.
For example, when the communication apparatus is configured to implement the steps, the methods, or the functions performed by the forwarding node, the transceiver 1810 is configured to receive a first message, and the processor 1820 is configured to decompose and/or allocate a second task, and the like.
It may be understood that for specific descriptions of the processor and the transceiver, refer to the descriptions of the processing unit and the transceiver unit shown in
In various implementations of the communication apparatus shown in
Optionally, the communication apparatus 180 may further include one or more memories 1830, configured to store program instructions and/or data. The memory 1830 is coupled to the processor 1820. The coupling in this embodiment of this application may be an indirect coupling or a communication connection between apparatuses, units, or modules in an electrical form, a mechanical form, or another form, and is used for information exchange between the apparatuses, the units, or the modules. The processor 1820 may cooperate with the memory 1830 to perform an operation. The processor 1820 may execute the program instructions stored in the memory 1830. Optionally, at least one of the one or more memories may be included in the processor.
A specific connection medium between the transceiver 1810, the processor 1820, and the memory 1830 is not limited in this embodiment of this application. In this embodiment of this application, the memory 1830, the processor 1820, and the transceiver 1810 are connected by using a bus 1840 in
In this embodiment of this application, the processor may be a general-purpose processor, a digital signal processor, an application-specific integrated circuit, a field programmable gate array or another programmable logic device, a discrete gate or a transistor logic device, a discrete hardware component, or the like. The processor can implement or execute the methods, the steps, and the logical block diagrams disclosed in embodiments of this application. The general-purpose processor may be a microprocessor or any conventional processor or the like. The steps of the methods disclosed in combination with embodiments of this application may be directly implemented by a hardware processor, or may be implemented by using a combination of hardware and software modules in the processor, or the like.
In this embodiment of this application, the memory may include but is not limited to a nonvolatile memory, for example, a hard disk drive (HDD) or a solid-state drive (SSD), a random access memory (RAM), an erasable programmable read-only memory (EPROM), a read-only memory (ROM), or a compact disc read-only memory (CD-ROM). The memory is any storage medium that can be used to carry or store program code in a form of instructions or a data structure and that can be read and/or written by a computer (for example, the communication apparatus shown in this application). However, this application is not limited thereto. The memory in this embodiment of this application may alternatively be a circuit or any other apparatus that can implement a storage function, and is configured to store the program instructions and/or the data.
The processor 1820 is mainly configured to: process a communication protocol and communication data, control an entire communication apparatus, execute a software program, and process data of the software program. The memory 1830 is mainly configured to store the software program and the data. The transceiver 1810 may include a control circuit and an antenna. The control circuit is mainly configured to: perform a conversion between a baseband signal and a radio frequency signal, and process the radio frequency signal. The antenna is mainly configured to send and receive a radio frequency signal in a form of an electromagnetic wave. An input/output apparatus, for example, a touchscreen, a display, or a keyboard, is mainly configured to: receive data input by a user and output data to the user.
For example, after the communication apparatus is powered on, the processor 1820 may read the software program in the memory 1830, interpret and execute instructions of the software program, and process the data of the software program. When the data needs to be sent wirelessly, the processor 1820 performs baseband processing on the to-be-sent data, and outputs a baseband signal to a radio frequency circuit. The radio frequency circuit performs radio frequency processing on the baseband signal, and then sends out the radio frequency signal through the antenna in a form of an electromagnetic wave. When data is sent to the communication apparatus, the radio frequency circuit receives a radio frequency signal through the antenna, converts the radio frequency signal into a baseband signal, and outputs the baseband signal to the processor 1820, and the processor 1820 converts the baseband signal into data, and processes the data.
In another implementation, the radio frequency circuit and the antenna may be disposed independent of the processor that performs baseband processing. For example, in a distributed scenario, the radio frequency circuit and the antenna may be remotely disposed independent of the communication apparatus.
It may be understood that the communication apparatus shown in this embodiment of this application may alternatively include more components than those shown in
In another possible implementation, in the communication apparatus shown in
In this embodiment of this application, the logic circuit and the interface may be further coupled to each other. A specific manner of connection between the logic circuit and the interface is not limited in this embodiment of this application.
For example, when the communication apparatus is configured to implement the methods, the functions, or the steps performed by the first node, the logic circuit 1901 is configured to determine a first task, and the interface 1902 is configured to output a first message.
For example, when the communication apparatus is configured to implement the methods, the functions, or the steps performed by the second node, the interface 1902 is configured to input a first message, and the logic circuit 1901 is configured to execute a second task.
For example, when the communication apparatus is configured to implement the methods, the functions, or the steps performed by the forwarding node, the interface 1902 is configured to input a first message, and the logic circuit 1901 is configured to decompose and/or allocate a second task.
It may be understood that the communication apparatus shown in embodiments of this application may implement the method provided in embodiments of this application in a form of hardware, or may implement the method provided in embodiments of this application in a form of software. This is not limited in embodiments of this application.
For specific implementations of the embodiments shown in
An embodiment of this application further provides a wireless communication system. The wireless communication system includes a second node and a first node. The second node and the first node may be configured to perform the method in any one of the foregoing embodiments. Optionally, the wireless communication system further includes a forwarding node.
In addition, this application further provides a computer program. The computer program is configured to implement the operations and/or the processing performed by the second node in the method provided in this application.
This application further provides a computer program. The computer program is configured to implement the operations and/or the processing performed by the first node in the method provided in this application.
This application further provides a computer-readable storage medium. The computer-readable storage medium stores computer code. When the computer code is run on a computer, the computer is enabled to perform the operations and/or the processing performed by the second node in the method provided in this application.
This application further provides a computer-readable storage medium. The computer-readable storage medium stores computer code. When the computer code is run on a computer, the computer is enabled to perform the operations and/or the processing performed by the first node in the method provided in this application.
This application further provides a computer program product. The computer program product includes computer code or a computer program. When the computer code or the computer program is run on a computer, the operations and/or the processing performed by the second node in the method provided in this application are performed.
This application further provides a computer program product. The computer program product includes computer code or a computer program. When the computer code or the computer program is run on a computer, the operations and/or the processing performed by the first node in the method provided in this application are performed.
In the several embodiments provided in this application, it should be understood that the disclosed system, apparatus, and method may be implemented in other manners. For example, the described apparatus embodiment is merely an example. For example, division into the units is merely logical function division and may be other division during actual implementation. For example, a plurality of units or components may be combined or integrated into another system, or some features may be ignored or not performed. In addition, the displayed or discussed mutual couplings or direct couplings or communication connections may be implemented through some interfaces, indirect couplings or communication connections between the apparatuses or units, or electrical connections, mechanical connections, or connections in other forms.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one position, or may be distributed on a plurality of network units. Some or all of the units may be selected based on actual design specifications to achieve the technical effects of the solutions provided in the embodiments in this application.
In addition, functional units in embodiments of this application may be integrated into one processing unit, each of the units may exist alone physically, or two or more units may be integrated into one unit. The integrated unit may be implemented in a form of hardware, or may be implemented in a form of a software functional unit.
When the integrated unit is implemented in the form of the software functional unit and sold or used as an independent product, the integrated unit may be stored in a computer-readable storage medium. Based on such an understanding, the technical solutions of this application essentially, or the part contributing to the conventional technologies, or all or some of the technical solutions may be implemented in a form of a software product. The software product is stored in a storage medium and includes a plurality of instructions for instructing a computer device (which may be a personal computer, a server, or a network device) to perform all or some of the steps of the methods described in embodiments of this application. The readable storage medium includes any medium that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
The foregoing descriptions are merely specific implementations of this application, but are not intended to limit the protection scope of this application. Any variation or replacement readily figured out by a person skilled in the art within the technical scope disclosed in this application shall fall within the protection scope of this application. Therefore, the protection scope of this application shall be subject to the protection scope of the claims.
Number | Date | Country | Kind |
---|---|---|---|
202111447706.9 | Nov 2021 | CN | national |
This application is a continuation of International Application No. PCT/CN2022/135052, filed on Nov. 29, 2022, which claims priority to Chinese Patent Application No. 202111447706.9, filed on Nov. 30, 2021. The disclosures of the aforementioned applications are hereby incorporated by reference in their entireties.
Number | Date | Country | |
---|---|---|---|
Parent | PCT/CN2022/135052 | Nov 2022 | WO |
Child | 18677299 | US |