This disclosure relates to the field of communications technologies, and in particular, to a scheduling latency determining method and apparatus.
The 3rd generation partnership project (3GPP) release 10 (R10) proposes a carrier aggregation (CA) technology and introduces the CA technology into the long term evolution advanced (LTE-A) cellular communications system. Carrier aggregation means that a network simultaneously schedules and sends downlink data to a single user equipment (UE) by using a plurality of component carriers (CCs), or grants the UE permission to simultaneously send uplink data to the network on a plurality of CCs. In contrast, in a conventional LTE system, a network can schedule UE to receive downlink data, or grant the UE permission to send uplink data, on only one CC. Therefore, with the introduction of CA, the uplink throughput and the downlink throughput of the UE increase approximately in proportion to the quantity of aggregated CCs.
Currently, 3GPP is working on standardization of 5th generation (5G) new radio (NR), and CA is further enhanced as an important feature of NR. In LTE-Advanced, the maximum bandwidth of a CC is 20 MHz, whereas in NR the maximum bandwidth of a CC may be up to 100 MHz.
However, when a plurality of CCs are simultaneously activated, the power consumption of UE greatly increases. Therefore, how to reduce the power consumption of the UE when a plurality of CCs are activated is a technical problem that urgently needs to be resolved in a future communications system.
This disclosure provides a scheduling latency determining method and apparatus, to reduce power consumption of UE.
To achieve the foregoing objective, the following technical solutions are used in embodiments of this disclosure.
According to one embodiment, this disclosure provides a scheduling latency determining method, including: sending, by a terminal apparatus, information about one or more component carrier (CC) groups to a communications apparatus, where the information about the one or more CC groups is used by the communications apparatus to determine N CC groups, each of the N CC groups includes one or more CCs, and N is a positive integer; receiving, by the terminal apparatus, a scheduling latency set sent by the communications apparatus; and determining, by the terminal apparatus based on the scheduling latency set, a scheduling latency of scheduling a data channel corresponding to a control channel by using the control channel.
This disclosure provides a scheduling latency determining method. A terminal apparatus sends information about one or more CC groups to a communications apparatus, so that the communications apparatus configures a scheduling latency set for the terminal apparatus based on the CC groups. Therefore, the terminal apparatus may determine, based on the scheduling latency set, a scheduling latency of scheduling a data channel corresponding to a control channel by using the control channel. In this way, when a CC is not scheduled, the terminal apparatus may disable a module required for processing the CC, for example, a radio frequency (RF) module and a baseband module, or the terminal apparatus may indicate that the module required for processing the CC is to enter a low power mode. In other words, when the terminal apparatus schedules data on one or more CCs, the terminal apparatus may disable the processing modules corresponding to the CC groups, among the N CC groups, other than the CC group or groups in which the one or more CCs are located, thereby reducing the power consumption of the terminal apparatus.
In one embodiment, the method provided in this disclosure further includes: receiving, by the terminal apparatus, one or more CCs configured by the communications apparatus; and classifying, by the terminal apparatus, the one or more CCs into one or more CC groups based on power consumption of the one or more CCs, where power consumption corresponding to one or more CCs included in any one of the one or more CC groups is less than power consumption corresponding to a plurality of CCs included in any two or more of the one or more CC groups. The terminal apparatus classifies one or more CCs into one or more CC groups based on power consumption corresponding to each CC. Because power consumption corresponding to one or more CCs included in any one of the one or more CC groups is less than power consumption corresponding to a plurality of CCs included in any two or more of the one or more CC groups, when scheduling a CC in any CC group, the communications apparatus may disable a module for processing a CC included in a remaining CC group, thereby reducing power consumption of the terminal apparatus.
In one embodiment, the scheduling latency set includes a first scheduling latency, and the determining, by the terminal apparatus based on the scheduling latency set, a scheduling latency of scheduling a data channel corresponding to a control channel by using the control channel includes: if the terminal apparatus determines that the control channel is sent on a first CC and the scheduled data channel corresponding to the control channel is sent on the first CC, determining, by the terminal apparatus, that a scheduling latency of scheduling, by using the control channel sent on the first CC, the data channel that is corresponding to the control channel and that is sent on the first CC is the first scheduling latency, where the first CC is a CC included in any one of the N CC groups.
In one embodiment, the scheduling latency set includes a second scheduling latency, and the determining, by the terminal apparatus based on the scheduling latency set, a scheduling latency of scheduling a data channel corresponding to a control channel by using the control channel includes: if the terminal apparatus determines that the control channel is sent on a first CC and the scheduled data channel corresponding to the control channel is sent on a second CC, and that the first CC and the second CC belong to a same CC group, determining, by the terminal apparatus, that a scheduling latency of scheduling, by using the control channel sent on the first CC, the data channel that is corresponding to the control channel and that is sent on the second CC is the second scheduling latency, where the second scheduling latency is greater than a first scheduling latency.
In one embodiment, the scheduling latency set includes a third scheduling latency, and the determining, by the terminal apparatus based on the scheduling latency set, a scheduling latency of scheduling a data channel corresponding to a control channel by using the control channel includes: if the terminal apparatus determines that the control channel is sent on a first CC and the scheduled data channel corresponding to the control channel is sent on a third CC, and that the first CC and the third CC belong to different CC groups, determining, by the terminal apparatus, that a scheduling latency of scheduling, by using the control channel sent on the first CC, the data channel that is corresponding to the control channel and that is sent on the third CC is the third scheduling latency, where the third scheduling latency is greater than a first scheduling latency.
In one embodiment, the third scheduling latency is obtained by using the first scheduling latency and a preset latency, or the third scheduling latency is obtained through configuration by using higher layer signaling.
In one embodiment, the method provided in this disclosure further includes: receiving, by the terminal apparatus, first signaling sent by the communications apparatus on the first CC, where the first signaling is used to instruct the terminal apparatus to dynamically activate or dynamically deactivate the second CC, and the first CC and the second CC belong to a same CC group.
In one embodiment, the method provided in this disclosure further includes: receiving, by the terminal apparatus, second signaling sent by the communications apparatus, where the second signaling includes a media access control (MAC) control element (CE), and the second signaling is used to instruct the terminal apparatus to activate or deactivate all CCs in any one of the N CC groups.
According to one embodiment, this disclosure provides a scheduling latency determining method, including: receiving, by a communications apparatus, information that is about one or more component carrier (CC) groups and that is sent by a terminal apparatus; determining, by the communications apparatus, N CC groups based on the information about the one or more CC groups, where each of the N CC groups includes one or more CCs, and N is a positive integer; and sending, by the communications apparatus, a scheduling latency set to the terminal apparatus, where the scheduling latency set is used to determine a scheduling latency of scheduling a data channel corresponding to a control channel by using the control channel.
In one embodiment, the scheduling latency set includes a first scheduling latency, the scheduling latency set is used by the terminal apparatus to determine that a scheduling latency of scheduling, by using the control channel sent on a first CC, the data channel that is corresponding to the control channel and that is sent on the first CC is the first scheduling latency, the first CC is a CC included in any one of the N CC groups, and the first scheduling latency is greater than or equal to 0.
In one embodiment, the scheduling latency set includes a second scheduling latency, the second scheduling latency is greater than the first scheduling latency, the scheduling latency set is used by the terminal apparatus to determine that a scheduling latency of scheduling, by using the control channel sent on a first CC, the data channel that is corresponding to the control channel and that is sent on a second CC is the second scheduling latency, and the first CC and the second CC belong to a same CC group.
In one embodiment, the scheduling latency set includes a third scheduling latency, the third scheduling latency is greater than the first scheduling latency, the scheduling latency set is used by the terminal apparatus to determine that a scheduling latency of scheduling, by using the control channel sent on a first CC, the data channel that is corresponding to the control channel and that is sent on a third CC is the third scheduling latency, and the first CC and the third CC belong to different CC groups.
In one embodiment, the method provided in this disclosure further includes: sending, by the communications apparatus, first signaling to the terminal apparatus on the first CC, where the first signaling is used to instruct the terminal apparatus to dynamically activate or dynamically deactivate the second CC, and the first CC and the second CC belong to a same CC group.
In one embodiment, the method provided in this disclosure further includes: sending, by the communications apparatus, second signaling to the terminal apparatus, where the second signaling includes a media access control (MAC) control element (CE), and the second signaling is used to instruct the terminal apparatus to activate or deactivate all CCs in any one of the N CC groups.
In one embodiment, power consumption corresponding to one or more CCs included in any one of the N CC groups is less than power consumption corresponding to a plurality of CCs included in any two or more of the N CC groups.
According to one embodiment, this disclosure provides a scheduling latency determining apparatus. The scheduling latency determining apparatus may implement the scheduling latency determining method described in any one of the embodiments described herein. For example, the scheduling latency determining apparatus may be a terminal device or a chip applied to the terminal device. The scheduling latency determining apparatus may implement the foregoing method by using software or hardware, or by hardware executing corresponding software.
In one embodiment, the scheduling latency determining apparatus includes: a sending unit, configured to send information about one or more component carrier (CC) groups to a communications apparatus, where the information about the one or more CC groups is used by the communications apparatus to determine N CC groups, each of the N CC groups includes one or more CCs, and N is a positive integer; a receiving unit, configured to receive a scheduling latency set sent by the communications apparatus; and a determining unit, configured to determine, based on the scheduling latency set, a scheduling latency of scheduling a data channel corresponding to a control channel by using the control channel.
In one embodiment, the receiving unit is further configured to receive one or more CCs configured by the communications apparatus; and the determining unit is further configured to classify the one or more CCs into one or more CC groups based on power consumption corresponding to the one or more CCs, where power consumption corresponding to one or more CCs included in any one of the one or more CC groups is less than power consumption corresponding to a plurality of CCs included in any two or more of the one or more CC groups.
In one embodiment, the scheduling latency set includes a first scheduling latency, and the determining unit is specifically configured to: if determining that the control channel is sent on a first CC and the scheduled data channel corresponding to the control channel is sent on the first CC, determine that a scheduling latency of scheduling, by using the control channel sent on the first CC, the data channel that is corresponding to the control channel and that is sent on the first CC is the first scheduling latency, where the first CC is a CC included in any one of the N CC groups.
In one embodiment, the scheduling latency set includes a second scheduling latency, and the determining unit is specifically configured to: if determining that the control channel is sent on a first CC and the scheduled data channel corresponding to the control channel is sent on a second CC, and that the first CC and the second CC belong to a same CC group, determine that a scheduling latency of scheduling, by using the control channel sent on the first CC, the data channel that is corresponding to the control channel and that is sent on the second CC is the second scheduling latency, where the second scheduling latency is greater than a first scheduling latency.
In one embodiment, the scheduling latency set includes a third scheduling latency, and the determining unit is specifically configured to: if determining that the control channel is sent on a first CC and the scheduled data channel corresponding to the control channel is sent on a third CC, and that the first CC and the third CC belong to different CC groups, determine that a scheduling latency of scheduling, by using the control channel sent on the first CC, the data channel that is corresponding to the control channel and that is sent on the third CC is the third scheduling latency, where the third scheduling latency is greater than a first scheduling latency.
In one embodiment, the third scheduling latency is obtained by using the first scheduling latency and a preset latency, or the third scheduling latency is obtained through configuration by using higher layer signaling.
In one embodiment, the receiving unit is further configured to receive first signaling sent by the communications apparatus on the first CC, where the first signaling is used to indicate dynamic activation or dynamic deactivation of the second CC, and the first CC and the second CC belong to a same CC group.
In one embodiment, the receiving unit is further configured to receive second signaling sent by the communications apparatus, where the second signaling includes a media access control (MAC) control element (CE), and the second signaling is used to indicate activation or deactivation of all CCs in any one of the N CC groups.
According to one embodiment, this disclosure provides a scheduling latency determining apparatus. The scheduling latency determining apparatus may implement the scheduling latency determining method described in any one of the embodiments described herein. For example, the scheduling latency determining apparatus may be a network device or a chip applied to the network device. The scheduling latency determining apparatus may implement the foregoing method by using software or hardware, or by hardware executing corresponding software.
In one embodiment, the scheduling latency determining apparatus includes: a receiving unit, configured to receive information that is about one or more component carrier (CC) groups and that is sent by a terminal apparatus; a determining unit, configured to determine N CC groups based on the information about the one or more CC groups, where each of the N CC groups includes one or more CCs, and N is a positive integer; and a sending unit, configured to send a scheduling latency set to the terminal apparatus, where the scheduling latency set is used to determine a scheduling latency of scheduling a data channel corresponding to a control channel by using the control channel.
In one embodiment, the scheduling latency set includes a first scheduling latency, the scheduling latency set is used by the terminal apparatus to determine that a scheduling latency of scheduling, by using the control channel sent on a first CC, the data channel that is corresponding to the control channel and that is sent on the first CC is the first scheduling latency, the first CC is a CC included in any one of the N CC groups, and the first scheduling latency is greater than or equal to 0.
In one embodiment, the scheduling latency set includes a second scheduling latency, the second scheduling latency is greater than the first scheduling latency, the scheduling latency set is used by the terminal apparatus to determine that a scheduling latency of scheduling, by using the control channel sent on a first CC, the data channel that is corresponding to the control channel and that is sent on a second CC is the second scheduling latency, and the first CC and the second CC belong to a same CC group.
In one embodiment, the scheduling latency set includes a third scheduling latency, the third scheduling latency is greater than the first scheduling latency, the scheduling latency set is used by the terminal apparatus to determine that a scheduling latency of scheduling, by using the control channel sent on a first CC, the data channel that is corresponding to the control channel and that is sent on a third CC is the third scheduling latency, and the first CC and the third CC belong to different CC groups.
In one embodiment, the sending unit is further configured to send first signaling to the terminal apparatus on the first CC, where the first signaling is used to instruct the terminal apparatus to dynamically activate or dynamically deactivate the second CC, and the first CC and the second CC belong to a same CC group.
In one embodiment, the sending unit is further configured to send second signaling to the terminal apparatus, where the second signaling includes a media access control (MAC) control element (CE), and the second signaling is used to instruct the terminal apparatus to activate or deactivate all CCs in any one of the N CC groups.
In one embodiment, power consumption corresponding to one or more CCs included in any one of the N CC groups is less than power consumption corresponding to a plurality of CCs included in any two or more of the N CC groups.
According to one embodiment, this disclosure provides a chip, and the chip includes a processor and an interface circuit. The interface circuit is coupled to the processor, the processor is configured to run a computer program or an instruction, to implement the method described in any one of the embodiments described herein, and the interface circuit is configured to communicate with a module other than the chip.
According to one embodiment, this disclosure provides a chip, and the chip includes a processor and an interface circuit. The interface circuit is coupled to the processor, the processor is configured to run a computer program or an instruction, to implement the method described in any one of the embodiments described herein, and the interface circuit is configured to communicate with a module other than the chip.
According to one embodiment, this disclosure provides a computer readable storage medium. The computer readable storage medium stores a computer program or an instruction, and when the computer program or the instruction is run, the method described in any one of the embodiments described herein is implemented.
According to one embodiment, this disclosure provides a computer readable storage medium. The computer readable storage medium stores a computer program or an instruction, and when the computer program or the instruction is run, the method described in any one of the embodiments described herein is implemented.
According to one embodiment, this disclosure provides a computer program product including an instruction. When the instruction is run, a terminal device is enabled to perform the method described in any one of the embodiments described herein.
According to one embodiment, this disclosure provides a computer program product including an instruction. When the instruction is run, a network device is enabled to perform the method described in any one of the embodiments described herein.
According to one embodiment, this disclosure provides a communications system. The communications system includes the terminal device described in any one of the embodiments described herein.
The terms “first”, “second”, and the like in the embodiments of this disclosure are merely intended to distinguish between different objects, and are not intended to limit a sequence thereof. For example, a first component carrier and a second component carrier are intended to distinguish between different component carriers, and are not intended to limit a sequence thereof.
The term “and/or” in the embodiments of this disclosure describes an association relationship between associated objects and indicates that three relationships may exist. For example, A and/or B may represent the following three cases: only A exists, both A and B exist, and only B exists. In addition, the character “/” in the embodiments of this disclosure generally indicates an “or” relationship between the associated objects.
It should be noted that the word “example” or “for example” in the embodiments of this disclosure is used to represent giving an example, an illustration, or a description. Any embodiment or design scheme described as an “example” or “for example” in the embodiments of this disclosure should not be construed as being preferred over or having more advantages than another embodiment or design scheme. Rather, use of the word “example”, “for example”, or the like is intended to present a related concept in a specific manner.
A network architecture and a service scenario described in the embodiments of this disclosure are intended to describe the technical solutions in the embodiments of this disclosure more clearly, and do not constitute a limitation on the technical solutions provided in the embodiments of this disclosure. A person of ordinary skill in the art may learn that with evolution of the network architecture and emergence of a new service scenario, the technical solutions provided in the embodiments of this disclosure are also applicable to similar technical problems.
The terminal device may alternatively be referred to as user equipment (UE), an access terminal, a subscriber unit, a subscriber station, a mobile station, a mobile console, a remote station, a remote terminal, a mobile device, a user terminal, a terminal, a wireless communication device, a user agent, or a user apparatus. The terminal device may be a station (STA) in a wireless local area network (WLAN), a cellular phone, a cordless phone, a session initiation protocol (SIP) phone, a wireless local loop (WLL) station, a personal digital assistant (PDA) device, a handheld device having a wireless communication function, a computing device, another processing device connected to a wireless modem, a vehicle-mounted device, a wearable device, or a terminal device in a next-generation communications system, for example, a terminal device in a fifth-generation (5G) communications network or in a future evolved public land mobile network (PLMN).
In an example, in the embodiments of this disclosure, the terminal device may alternatively be a wearable device. The wearable device, also referred to as a wearable intelligent device, is a collective term for devices, such as glasses, gloves, watches, clothing, and shoes, that are developed by applying wearable technology to the intelligent design of everyday wear. The wearable device is a portable device that is directly worn on the body or integrated into clothes or an accessory of a user. The wearable device is not only a hardware device, but also implements powerful functions through software support, data exchange, and cloud interaction. In a broad sense, wearable intelligent devices include full-featured, large-sized devices, such as smartwatches or smart glasses, that can implement all or some functions without relying on a smartphone, and devices, such as various smart bands for vital sign monitoring and smart jewelry, that focus on only one type of application function and need to be used together with another device such as a smartphone.
The network device may be a device configured to communicate with a terminal device. The network device may be an access point (AP) in a WLAN, a base transceiver station (BTS) in global system for mobile communication (GSM) or code division multiple access (CDMA), a NodeB (NB) in wideband code division multiple access (WCDMA), an evolved NodeB (eNB or eNodeB) in long term evolution (LTE), a relay station or an access point, a vehicle-mounted device, a wearable device, a network device in a future 5G network, a network device in a future evolved PLMN network, or the like.
In addition, in the embodiments of this disclosure, the network device serves a cell, and the terminal device communicates with the network device by using a transmission resource (for example, a frequency domain resource or a time-frequency resource) used in the cell. The cell may be a cell corresponding to the network device (for example, a base station). The cell may belong to a macro base station, or may belong to a base station corresponding to a small cell. The small cell herein may include a metro cell, a micro cell, a pico cell, a femto cell, and the like. These small cells are characterized by a small coverage area and low transmit power, and are suitable to provide a high-rate data transmission service.
A method and an apparatus provided in the embodiments of this disclosure may be applied to a terminal device. The terminal device includes a hardware layer, an operating system layer running above the hardware layer, and an application layer running above the operating system layer. The hardware layer includes hardware such as a central processing unit (CPU), a memory management unit (MMU), and a memory (also referred to as a main memory). The operating system may be any one or more computer operating systems that implement service processing by using a process, for example, a Linux operating system, a Unix operating system, an Android operating system, an iOS operating system, or a Windows operating system. The application layer includes applications such as a browser, an address book, word processing software, and instant messaging software. In addition, a specific structure of the body for performing the scheduling latency determining method is not particularly limited in the embodiments of this disclosure, provided that the body can communicate according to the scheduling latency determining method in the embodiments of this disclosure by running a program that records code of the scheduling latency determining method. For example, the scheduling latency determining method in the embodiments of this disclosure may be performed by a terminal device, or by a functional module in the terminal device that can invoke and execute the program.
In addition, aspects or features in the embodiments of this disclosure may be implemented as a method, an apparatus, or a product that uses standard programming and/or engineering technologies. The term “product” used in the embodiments of this disclosure covers a computer program that can be accessed from any computer readable component, carrier, or medium. For example, the computer readable medium may include but is not limited to: a magnetic storage component (for example, a hard disk, a floppy disk, or a magnetic tape), an optical disc (for example, a compact disc (CD) or a digital versatile disc (DVD)), a smart card, and a flash memory component (for example, an erasable programmable read-only memory (EPROM), a card, a stick, or a key drive). In addition, various storage media described in this specification may indicate one or more devices and/or other machine readable media that are configured to store information. The term “machine readable media” may include but is not limited to a radio channel and various other media that can store, contain, and/or carry an instruction and/or data.
A future access network may be implemented by using a cloud radio access network (C-RAN) architecture. Therefore, in a possible manner, the protocol stack architecture and functions of a conventional base station are divided into two parts: a centralized unit (CU) and a distributed unit (DU). The CU and the DU can be deployed relatively flexibly. For example, the CUs of a plurality of base stations may be integrated to form a relatively large functional entity.
Such protocol layer division is merely an example, and division may alternatively be performed at another protocol layer, for example, at the RLC layer: functions of the RLC layer and the protocol layers above it are configured for the CU, and functions of the protocol layers below the RLC layer are configured for the DU. Alternatively, division may be performed within a protocol layer. For example, some functions of the RLC layer and the functions of the protocol layers above the RLC layer are configured for the CU, and the remaining functions of the RLC layer and the functions of the protocol layers below the RLC layer are configured for the DU. In addition, division may alternatively be performed in another manner, for example, based on latency: a function whose processing time needs to meet a latency requirement is configured for the DU, and a function whose processing time does not need to meet the latency requirement is configured for the CU.
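As an illustrative sketch only, the latency-based division can be expressed as follows. The protocol-layer names and the latency flags below are assumptions chosen for illustration; they do not represent a split defined by this disclosure.

```python
# Sketch: assign a protocol-layer function to the DU if its processing time needs to
# meet the latency requirement, otherwise to the CU. Layer names and flags are assumed.
layer_has_latency_requirement = {
    "RRC": False,
    "PDCP": False,
    "RLC": False,
    "MAC": True,
    "PHY": True,
}

def split_cu_du(layers):
    cu = [layer for layer, tight in layers.items() if not tight]
    du = [layer for layer, tight in layers.items() if tight]
    return cu, du

cu_layers, du_layers = split_cu_du(layer_has_latency_requirement)
print("CU:", cu_layers)  # ['RRC', 'PDCP', 'RLC']
print("DU:", du_layers)  # ['MAC', 'PHY']
```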
In addition, the radio frequency apparatus may be disposed remotely from the DU rather than in the DU, may be integrated into the DU, or may be partially disposed remotely from the DU and partially integrated into the DU. This is not limited herein.
In the foregoing network architecture, signaling/data generated by the CU may be sent to a terminal device by using the DU, or signaling/data generated by the terminal device may be sent to the CU by using the DU. The DU may transparently transmit the signaling/data to the terminal device or the CU by directly encapsulating it at a protocol layer without parsing it. If transmission of such signaling/data between the DU and the terminal device is involved in the following embodiments, sending or receiving of the signaling/data by the DU includes this scenario. For example, signaling of the RRC or PDCP layer is finally processed as signaling/data of the physical layer (PHY) to be sent to the terminal device, or is converted from received signaling/data of the PHY layer. In this architecture, the signaling/data of the RRC or PDCP layer may be considered to be sent by the DU, or by the DU and the radio frequency apparatus.
In the foregoing embodiment, the CU is classified as a network device in a RAN. Alternatively, the CU may be classified as a network device in a CN. This is not limited herein.
S101. A terminal apparatus sends information about one or more component carrier (CC) groups to a communications apparatus.
Optionally, the information about the one or more CC groups is used by the communications apparatus to determine N CC groups.
For example, the terminal apparatus in this embodiment of this disclosure may be the terminal device described above.
Specifically, in this embodiment of this disclosure, before the terminal apparatus performs S101, the method further includes: the terminal apparatus receives one or more CCs configured by the communications apparatus, and the terminal apparatus classifies the one or more CCs into one or more CC groups according to a preset rule.
Specifically, in this embodiment of this disclosure, the communications apparatus may configure the one or more CCs for the terminal apparatus by using connected-mode radio resource control (RRC) signaling.
Specifically, the terminal apparatus may classify the one or more CCs into the one or more CC groups based on an implementation of the terminal apparatus.
Because the power consumption of the terminal apparatus differs when processing different CCs, the terminal apparatus may classify the one or more CCs into one or more CC groups based on the power consumption of the terminal apparatus for processing each CC. That is, the power consumption of the terminal apparatus corresponding to the one or more CCs in a same CC group meets a preset condition. For example, the power consumption for processing the CCs in a same CC group differs slightly, while the power consumption of the terminal apparatus for processing CCs in different CC groups differs greatly.
Specifically, if the terminal apparatus determines that power consumption of the terminal apparatus for processing any two or more of the one or more CCs meets the preset condition, the terminal apparatus determines that the any two or more CCs belong to a same CC group. The preset condition includes: the power consumption of the terminal apparatus corresponding to the any two or more CCs is the same, or a difference between the power consumption of the terminal apparatus corresponding to the any two or more CCs is less than or equal to a preset error.
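The following minimal sketch, in Python, illustrates the preset condition above. The per-CC power figures, the preset error, and the group identifiers are hypothetical values chosen for illustration, not values from this disclosure.

```python
# Sketch of classifying CCs into CC groups by per-CC power consumption.
PRESET_ERROR = 5.0  # assumed maximum allowed power-consumption difference within a group

# Assumed power consumption (arbitrary units) of the terminal for processing each CC.
cc_power = {"CC1": 10.0, "CC2": 11.0, "CC3": 12.0, "CC4": 40.0, "CC5": 42.0}

def group_ccs(power_by_cc, preset_error):
    """Place CCs whose power consumption differs by at most preset_error in one group."""
    groups = []  # each group is a list of CC identifiers
    for cc, power in sorted(power_by_cc.items(), key=lambda item: item[1]):
        for group in groups:
            # the preset condition must hold against every CC already in the group
            if all(abs(power - power_by_cc[member]) <= preset_error for member in group):
                group.append(cc)
                break
        else:
            groups.append([cc])
    return groups

cc_groups = group_ccs(cc_power, PRESET_ERROR)
# Information about the CC groups that would be reported to the communications apparatus:
report = {f"group{i + 1}": group for i, group in enumerate(cc_groups)}
print(report)  # {'group1': ['CC1', 'CC2', 'CC3'], 'group2': ['CC4', 'CC5']}
```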
The information about the one or more CC groups includes information about each CC group and information about one or more CCs included in each CC group.
Specifically, the information about each CC group is used to identify each CC group, and the information about the one or more CCs is used to identify the one or more CCs. The information about each CC group may be an identifier of each CC group, and the information about the one or more CCs includes an identifier of each of the one or more CCs.
It should be noted that CCs included in different CC groups correspond to different power consumption of the terminal apparatus. For example, if the terminal apparatus works only on CCs in a first CC group, the power consumption is relatively low; if the terminal apparatus works on both a CC in the first CC group and a CC in a second CC group, the power consumption of the terminal apparatus is relatively high.
S102. The communications apparatus receives the information that is about the one or more CC groups and that is sent by the terminal apparatus.
Specifically, the communications apparatus in this embodiment of this disclosure may be the network device described above.
S103. The communications apparatus determines the N CC groups based on the information about the one or more CC groups.
N is a positive integer, and each of the N CC groups includes one or more CCs.
In a possible implementation, in this embodiment of this disclosure, the communications apparatus may directly determine, as the N CC groups, the one or more CC groups indicated by the information fed back by the terminal apparatus.
For example, the information that is about the one or more CC groups and that is fed back by the terminal apparatus includes information about a first CC group, information about a second CC group, and information about a third CC group. The first CC group includes a CC 1, a CC 2, and a CC 3, the second CC group includes a CC 4, a CC 5, and a CC 6, and the third CC group includes a CC 7, a CC 8, and a CC 9. In this case, the communications apparatus may determine that the N CC groups include the first CC group, the second CC group, and the third CC group, where the first CC group includes the CC 1, the CC 2, and the CC 3, the second CC group includes the CC 4, the CC 5, and the CC 6, and the third CC group includes the CC 7, the CC 8, and the CC 9.
In another possible implementation, in this embodiment of this disclosure, the communications apparatus may regroup the CCs based on the information that is about the one or more CC groups and that is fed back by the terminal apparatus, to determine the N CC groups.
For example, the information that is about the one or more CC groups and that is fed back by the terminal apparatus includes information about a first CC group, information about a second CC group, and information about a third CC group. The first CC group includes a CC 1, a CC 2, and a CC 3, the second CC group includes a CC 4, a CC 5, and a CC 6, and the third CC group includes a CC 7, a CC 8, and a CC 9. In this case, the communications apparatus may determine that the N CC groups include the first CC group and the second CC group, where the first CC group includes the CC 1, the CC 2, the CC 3, the CC 4, and the CC 5, and the second CC group includes the CC 6, the CC 7, the CC 8, and the CC 9.
It should be noted that when the communications apparatus regroups the CCs to determine the N CC groups based on the information that is about the one or more CC groups and that is fed back by the terminal apparatus, the communications apparatus needs to send information about the determined N CC groups to the terminal apparatus.
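As a rough sketch of the two implementations above (the group identifiers and the regrouping rule below are illustrative assumptions, not the actual behavior of any communications apparatus):

```python
# Sketch: the communications apparatus either adopts the reported CC groups directly
# or regroups the CCs; in the latter case it must signal the resulting N groups back.
reported = {
    "group1": ["CC1", "CC2", "CC3"],
    "group2": ["CC4", "CC5", "CC6"],
    "group3": ["CC7", "CC8", "CC9"],
}

def determine_groups(reported_groups, regroup=False):
    if not regroup:
        # First implementation: use the reported grouping as the N CC groups.
        return dict(reported_groups), False
    # Second implementation (illustrative regrouping only): merge the CCs and
    # split them into two groups, as in the example above.
    all_ccs = [cc for group in reported_groups.values() for cc in group]
    n_groups = {"group1": all_ccs[:5], "group2": all_ccs[5:]}
    # The regrouped configuration must be sent back to the terminal apparatus.
    return n_groups, True

n_cc_groups, must_notify_terminal = determine_groups(reported, regroup=True)
print(n_cc_groups)           # {'group1': ['CC1'..'CC5'], 'group2': ['CC6'..'CC9']}
print(must_notify_terminal)  # True
```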
S104. The communications apparatus sends a scheduling latency set to the terminal apparatus.
The scheduling latency set is used to determine a scheduling latency of scheduling a data channel corresponding to a control channel by using the control channel.
S105. The terminal apparatus receives the scheduling latency set sent by the communications apparatus.
S106. The terminal apparatus determines, based on the scheduling latency set, the scheduling latency of scheduling the data channel corresponding to the control channel by using the control channel.
Optionally, the control channel in this embodiment of this disclosure may be a physical downlink control channel (PDCCH), and the data channel may be a physical downlink shared channel (PDSCH).
This disclosure provides a scheduling latency determining method. A terminal apparatus sends information about one or more CC groups to a communications apparatus, so that the communications apparatus determines a scheduling latency set for the terminal apparatus based on the CC groups and sends the scheduling latency set to the terminal apparatus. Therefore, the terminal apparatus may determine, based on the scheduling latency set, a scheduling latency of scheduling a data channel corresponding to a control channel by using the control channel. In this way, when a CC is not scheduled before the scheduling latency of the CC is reached, the terminal apparatus may disable a module required for processing the CC, for example, an RF module and a baseband module, or the terminal apparatus may indicate that the module required for processing the CC is to enter a low power mode. In other words, when the terminal apparatus schedules data on one or more CCs, the terminal apparatus may disable the processing modules corresponding to the CC groups, among the N CC groups, other than the CC group or groups in which the one or more CCs are located, thereby reducing the power consumption of the terminal apparatus.
To reduce power consumption of the terminal apparatus, the communications apparatus configures different scheduling latencies for different CCs based on the information that is about the one or more CC groups and that is fed back by the terminal apparatus. In this way, the terminal apparatus can determine the scheduling latency of scheduling the data channel corresponding to the control channel by using the control channel. The following separately describes different scheduling scenarios in which the terminal apparatus determines, based on the scheduling latency set, the scheduling latency of scheduling the data channel corresponding to the control channel by using the control channel.
Scenario 1: The communications apparatus sends the control channel and schedules the data channel corresponding to the control channel on one CC in any CC group.
Because the communications apparatus may schedule any CC in any CC group, the terminal apparatus determines, in a same manner, a scheduling latency corresponding to any scheduled CC. The following uses a first CC group in the N CC groups and a first CC in the first CC group as an example for description. The first CC group is any one of the N CC groups, the first CC is any CC in the first CC group, and the term “first” does not carry any other indicative meaning.
In a possible implementation, S106 includes the following block:
S1061. If the terminal apparatus determines that the control channel is sent on the first CC and the scheduled data channel corresponding to the control channel is sent on the first CC, the terminal apparatus determines that a scheduling latency of scheduling, by using the control channel sent on the first CC, the data channel that is corresponding to the control channel and that is sent on the first CC is the first scheduling latency, where the first CC is a CC included in any one of the N CC groups.
Specifically, a scheduling latency in this embodiment of this disclosure is a time difference between an end of a last symbol of the control channel and a start of a first symbol of the data channel corresponding to the control channel.
For example, the control channel is a PDCCH, and the data channel is a PDSCH. The scheduling latency may be understood as a time difference between an end of a last symbol of the PDCCH and a start of a first symbol of the PDSCH corresponding to the PDCCH.
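As a worked illustration of this definition, the scheduling latency can be computed from the position of the last PDCCH symbol and the first PDSCH symbol. The slot length, symbol indices, and K0 value below are assumptions for illustration only.

```python
# Sketch: scheduling latency as the gap from the end of the last PDCCH symbol to the
# start of the first PDSCH symbol. Numerology and symbol indices are assumed values.
SYMBOLS_PER_SLOT = 14

def scheduling_latency_symbols(pdcch_slot, pdcch_last_symbol, pdsch_slot, pdsch_first_symbol):
    """Return the gap, in OFDM symbols, between the PDCCH end and the PDSCH start."""
    pdcch_end = pdcch_slot * SYMBOLS_PER_SLOT + pdcch_last_symbol + 1  # end of last PDCCH symbol
    pdsch_start = pdsch_slot * SYMBOLS_PER_SLOT + pdsch_first_symbol   # start of first PDSCH symbol
    return pdsch_start - pdcch_end

# Example: PDCCH occupies symbols 0-1 of slot n, PDSCH starts at symbol 2 of slot n + K0.
K0 = 1  # assumed slot-level scheduling offset
print(scheduling_latency_symbols(pdcch_slot=0, pdcch_last_symbol=1,
                                 pdsch_slot=K0, pdsch_first_symbol=2))  # 14 symbols
```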
Scenario 2: The communications apparatus performs cross-CC scheduling in a same CC group. That is, the communications apparatus sends the control channel on one CC and schedules the data channel corresponding to the control channel on another CC in the same CC group.
For the N CC groups, when the communications apparatus sends the control channel on one CC and schedules the data channel corresponding to the control channel on another CC in any one of the N CC groups, the terminal apparatus determines, in a same manner, a scheduling latency of scheduling the data channel corresponding to the control channel by using the control channel sent on the one CC. Therefore, the following uses a first CC and a second CC included in a first CC group as an example.
In another possible implementation, S106 includes the following block:
S1062. If the terminal apparatus determines that the control channel is sent on the first CC and the scheduled data channel corresponding to the control channel is sent on the second CC, and that the first CC and the second CC belong to a same CC group, the terminal apparatus determines that a scheduling latency of scheduling, by using the control channel sent on the first CC, the data channel that is corresponding to the control channel and that is sent on the second CC is the second scheduling latency, where the second scheduling latency is greater than a first scheduling latency.
Specifically, in this embodiment of this disclosure, the second scheduling latency is obtained by the terminal apparatus based on the first scheduling latency and a first time amount.
For example, the terminal apparatus determines, as the second scheduling latency, a scheduling latency obtained by using the first scheduling latency and the first time amount.
Specifically, the first time amount is greater than or equal to a startup (warm up) time of a baseband module corresponding to the second CC.
For example, the baseband module includes an equalization module, a channel estimation module, a demodulation module, and a decoding module.
For example, if the first scheduling latency is T1, and the first time amount is t, the second scheduling latency is T1+t.
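A minimal sketch of this relationship follows, assuming illustrative values for the first scheduling latency and for the warm-up time of the baseband module of the second CC; neither value comes from this disclosure.

```python
# Sketch: second scheduling latency = first scheduling latency + first time amount,
# where the first time amount is at least the warm-up time of the baseband module
# (equalization, channel estimation, demodulation, decoding) of the second CC.
T1_SLOTS = 1          # assumed first scheduling latency (e.g. K0), in slots
WARM_UP_SLOTS = 2     # assumed baseband warm-up time of the second CC, in slots

first_time_amount = WARM_UP_SLOTS       # must be >= the warm-up time
second_latency = T1_SLOTS + first_time_amount
print(second_latency)  # 3 slots in this illustrative example
```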
Scenario 3: The communications apparatus schedules any quantity of CCs across CC groups, that is, the communications apparatus sends the control channel on one CC in one of the N CC groups and schedules the data channel corresponding to the control channel on one CC in another CC group.
In still another possible implementation, S106 includes the following block:
S1063. If the terminal apparatus determines that the control channel is sent on a first CC and the scheduled data channel corresponding to the control channel is sent on a third CC, and that the first CC and the third CC belong to different CC groups, the terminal apparatus determines that a scheduling latency of scheduling, by using the control channel sent on the first CC, the data channel that is corresponding to the control channel and that is sent on the third CC is the third scheduling latency, where the third scheduling latency is greater than a first scheduling latency.
The third scheduling latency is a preset latency or is obtained through configuration by using higher layer signaling. Specifically, the third scheduling latency is obtained by the terminal apparatus by adding the first scheduling latency to a time for the terminal apparatus to enable or retune an RF module required for processing the third CC. In an example, the third scheduling latency is obtained by the terminal apparatus by adding the first scheduling latency to a radio frequency (RF) retune time required for the third CC.
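Pulling the three scenarios together, the following sketch selects a scheduling latency based on where the control channel and the scheduled data channel are sent. The CC grouping, the warm-up time, the RF retune time, and the optional higher-layer-configured value are all illustrative assumptions.

```python
# Sketch of selecting the scheduling latency for Scenarios 1-3. Values are assumed.
K0_SLOTS = 1          # first scheduling latency (same CC)
WARM_UP_SLOTS = 2     # baseband warm-up time, used for cross-CC scheduling in one group
RF_RETUNE_SLOTS = 4   # time to enable/retune the RF module for a CC in another group

cc_groups = {"group1": ["CC1", "CC2", "CC3"], "group2": ["CC4", "CC5", "CC6"]}

def group_of(cc):
    return next(name for name, ccs in cc_groups.items() if cc in ccs)

def scheduling_latency(control_cc, data_cc, third_latency_from_rrc=None):
    if control_cc == data_cc:
        return K0_SLOTS                              # Scenario 1: same CC
    if group_of(control_cc) == group_of(data_cc):
        return K0_SLOTS + WARM_UP_SLOTS              # Scenario 2: same CC group
    if third_latency_from_rrc is not None:
        return third_latency_from_rrc                # Scenario 3: configured by higher layer signaling
    return K0_SLOTS + RF_RETUNE_SLOTS                # Scenario 3: first latency + RF retune time

print(scheduling_latency("CC1", "CC1"))  # 1
print(scheduling_latency("CC1", "CC2"))  # 3
print(scheduling_latency("CC1", "CC4"))  # 5
```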
Optionally, in this embodiment of this disclosure, power consumption of the terminal apparatus for processing any quantity of CCs in any one of the N CC groups is lower than power consumption of the terminal apparatus for processing any quantity of CCs in any two or more of the N CC groups.
In another possible embodiment of this disclosure, the method further includes the following blocks:
S107. The communications apparatus sends first signaling to the terminal apparatus.
Optionally, the first signaling is sent on the first CC, the first signaling is used to instruct the terminal apparatus to dynamically activate or dynamically deactivate the second CC, and the first CC and the second CC belong to a same CC group.
The dynamic activation in this embodiment of this disclosure means that a CC in a deactivated mode is quickly activated by using physical layer signaling. The dynamic deactivation means that a CC in an activated mode is quickly deactivated by using physical layer signaling.
S108. The terminal apparatus receives the first signaling, and the terminal apparatus dynamically activates or dynamically deactivates the second CC based on the first signaling.
In an example, after the second CC is quickly activated by using the first signaling, the second scheduling latency of scheduling the PDSCH on the second CC by using the PDCCH on the first CC may be the first scheduling latency configured by the communications apparatus, namely, K0.
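A brief sketch of this effect follows, with assumed values: once the second CC has been dynamically activated, cross-CC scheduling within the group no longer needs the warm-up component.

```python
# Sketch: before the second CC is dynamically activated, cross-CC scheduling in the
# same CC group uses the first scheduling latency plus the baseband warm-up time;
# after dynamic activation, the first scheduling latency (e.g. K0) may be used directly.
K0_SLOTS = 1        # assumed first scheduling latency
WARM_UP_SLOTS = 2   # assumed baseband warm-up time of the second CC

def same_group_latency(second_cc_activated: bool) -> int:
    return K0_SLOTS if second_cc_activated else K0_SLOTS + WARM_UP_SLOTS

print(same_group_latency(False))  # 3 slots before dynamic activation
print(same_group_latency(True))   # 1 slot after dynamic activation by the first signaling
```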
In still another possible embodiment of this disclosure, the method further includes the following blocks:
S109. The communications apparatus sends second signaling to the terminal apparatus.
The second signaling includes a medium access control (MAC) control element (CE), and the second signaling is used to activate or deactivate CCs in any two or more different CC groups of the N CC groups. In this embodiment of this disclosure, the communications apparatus configures, by using the MAC CE, the terminal apparatus to activate or deactivate the CCs in the any two or more different CC groups of the N CC groups. Because the MAC CE performs activation or deactivation in units of CC groups, the time required by the terminal apparatus to activate or deactivate a CC can be shortened.
S110. The terminal apparatus receives the second signaling, and the terminal apparatus activates or deactivates, based on the MAC CE in the second signaling, the CCs in the any two or more different CC groups of the N CC groups.
In an example, after the terminal apparatus activates the second CC group based on the second signaling, the third scheduling latency of scheduling the PDSCH on the third CC in the second CC group by using the PDCCH on the first CC in the first CC group may be the first scheduling latency configured by the communications apparatus, for example, K0. Therefore, the time for the terminal apparatus to enable or retune the RF module for processing the third CC is not required.
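A sketch of group-level activation by a MAC CE follows, with an assumed bitmap layout; the field format is an illustrative assumption and is not defined by this disclosure.

```python
# Sketch of a group-level MAC CE: an assumed bitmap in which bit i activates (1) or
# deactivates (0) every CC in the i-th CC group at once.
cc_groups = ["group1", "group2", "group3"]  # the N CC groups

def apply_group_mac_ce(bitmap: int) -> set:
    """Return the set of CC groups left activated after applying the bitmap."""
    return {group for i, group in enumerate(cc_groups) if bitmap & (1 << i)}

activated = apply_group_mac_ce(0b010)  # activate all CCs of group2, deactivate the others
print(activated)  # {'group2'}
# Once group2 is activated, a PDCCH on a CC in group1 may schedule a PDSCH on a CC in
# group2 with the first scheduling latency (e.g. K0), because the RF module for that CC
# no longer needs to be enabled or retuned.
```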
The foregoing mainly describes the solutions provided in the embodiments of this disclosure from a perspective of interaction between network elements. It can be understood that to implement the foregoing functions, the network elements, for example, the terminal apparatus and the communications apparatus, include a corresponding hardware structure and/or a corresponding software module that perform/performs the functions. A person of ordinary skill in the art should easily be aware that, in combination with the examples described in the embodiments disclosed in this specification, units and algorithm blocks may be implemented by hardware or a combination of hardware and computer software. Whether a function is performed by hardware or by hardware driven by computer software depends on particular applications and design constraints of the technical solutions. A person skilled in the art may use different methods to implement the described functions for each particular application, but it should not be considered that such an implementation goes beyond the scope of the embodiments of this disclosure.
In the embodiments of this disclosure, the terminal apparatus and the communications apparatus may be divided into function modules based on the foregoing method examples. For example, each function module may be obtained through division based on each function, or two or more functions may be integrated into one processing module. The integrated module may be implemented in a form of hardware, or may be implemented in a form of a software functional module. It should be noted that in the embodiments of this disclosure, division into modules is an example, and is merely a logical function division. In actual implementation, another division manner may be used. The following uses an example in which the function modules are obtained through division based on corresponding functions.
When an integrated unit is used, the terminal apparatus may include a sending unit 101, a receiving unit 102, and a determining unit 103.
The sending unit 101 is configured to support the terminal apparatus in performing S101 in the foregoing embodiments. The receiving unit 102 is configured to support the terminal apparatus in performing S105, S108, and S110 in the foregoing embodiments. The determining unit 103 is configured to support the terminal apparatus in performing S106, S1061, S1062, and S1063 in the foregoing embodiments. All related content of the blocks in the foregoing method embodiments may be cited in function descriptions of the corresponding function modules. Details are not described herein again.
Based on hardware implementation, the sending unit 101 in this embodiment of this disclosure may be a transmitter of the terminal device.
When an integrated unit is used, the terminal apparatus may include a storage module 111, a processing module 112, and a communications module 113.
The processing module 112 may be a processor or a controller. For example, the processor/controller may be a central processing unit, a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field programmable gate array (FPGA) or another programmable logical device, a transistor logical device, a hardware component, or any combination thereof. The processor/controller may implement or execute various example logical blocks, modules, and circuits described with reference to content disclosed in the present disclosure. Alternatively, the processor may be a combination of processors implementing a computing function, for example, a combination of one or more microprocessors, or a combination of the digital signal processor and a microprocessor. The communications module 113 may be a transceiver, a transceiver circuit, a communications interface, or the like. The storage module 111 may be a memory.
When the processing module 112 is a processor 120, the communications module 113 is a communications interface 130 or a transceiver, and the storage module 111 is a memory 140, the terminal apparatus in this embodiment of this disclosure may be the device described below.
The communications interface 130, the at least one processor 120, and the memory 140 are connected to each other through the bus 110. The bus 110 may be a PCI bus, an EISA bus, or the like. The bus may be classified into an address bus, a data bus, a control bus, and the like. For ease of representation, only one thick line is used to represent the bus.
When an integrated unit is used, the communications apparatus may include a receiving unit 201, a determining unit, and a sending unit.
Based on hardware implementation, the receiving unit 201 in this embodiment of this disclosure may be a receiver of the network device.
When an integrated unit is used, the communications apparatus may include a processing module 212 and a communications module 213.
Optionally, the communications apparatus may further include a storage module 211, configured to store program code and data of the communications apparatus.
The processing module 212 may be a processor or a controller. For example, the processor/controller may be a central processing unit, a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field programmable gate array (FPGA) or another programmable logical device, a transistor logical device, a hardware component, or any combination thereof. The processor/controller may implement or execute various example logical blocks, modules, and circuits described with reference to content disclosed in the present disclosure. Alternatively, the processor may be a combination of processors implementing a computing function, for example, a combination of one or more microprocessors, or a combination of the digital signal processor and a microprocessor. The communications module 213 may be a transceiver, a transceiver circuit, a communications interface, or the like. The storage module 211 may be a memory.
When the processing module 212 is a processor 220, the communications module 213 is a communications interface 230 or a transceiver, and the storage module 211 is a memory 210, the communications apparatus in this embodiment of this disclosure may be the device described below.
The communications interface 230, the at least one processor 220, and the memory 210 are interconnected through a bus 200. The bus 200 may be a PCI bus, an EISA bus, or the like. The bus may be classified into an address bus, a data bus, a control bus, and the like. For ease of representation, only one thick line is used to represent the bus.
The foregoing receiving unit (or a unit configured for receiving) is an interface circuit of the apparatus, configured to receive a signal from another apparatus. For example, when the apparatus is implemented through a chip, the receiving unit is an interface circuit that is of the chip and that is configured to receive a signal from another chip or apparatus. The foregoing sending unit (or a unit configured for sending) is an interface circuit of the apparatus, configured to send a signal to another apparatus. For example, when the apparatus is implemented through a chip, the sending unit is an interface circuit that is of the chip and that is configured to send a signal to another chip or apparatus.
Optionally, the chip 150 further includes a memory 1550. The memory 1550 may include a read-only memory and a random access memory, and provide an operation instruction and data for the processor 1510. A part of the memory 1550 may further include a non-volatile random access memory (NVRAM).
In some implementations, the memory 1550 stores the following elements: an executable module or a data structure, a subset thereof, or an extended set thereof.
In this embodiment of this disclosure, a corresponding operation is performed by invoking the operation instruction (the operation instruction may be stored in an operating system) stored in the memory 1550.
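To illustrate "performing a corresponding operation by invoking the operation instruction stored in the memory", the following sketch assumes an operation instruction can be represented as a function pointer kept in a table in memory and looked up by an identifier; all names and operations are hypothetical placeholders:

```c
/* Illustrative sketch: a table of "operation instructions" held in memory
 * (memory 1550 in the text) and invoked by the processor. Names are hypothetical. */
#include <stdio.h>

typedef void (*operation_instruction)(void);

static void op_configure_cc_groups(void) { puts("configure CC groups"); }
static void op_apply_latency_set(void)   { puts("apply scheduling latency set"); }

/* The executable module / data structure held in memory: an indexed table. */
static const operation_instruction op_table[] = {
    op_configure_cc_groups,
    op_apply_latency_set,
};

/* The processor performs a corresponding operation by invoking the stored instruction. */
static void invoke(unsigned op_id)
{
    if (op_id < sizeof op_table / sizeof op_table[0])
        op_table[op_id]();
}

int main(void)
{
    invoke(0);
    invoke(1);
    return 0;
}
```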
In a possible implementation, structures of chips used by the network device and the terminal device are similar, and different apparatuses may use different chips to implement respective functions.
The processor 1510 controls operations of the network device and the terminal device. The processor 1510 may alternatively be referred to as a central processing unit (CPU). The memory 1550 may include a read-only memory and a random access memory, and provide an instruction and data for the processor 1510. A part of the memory 1550 may further include a non-volatile random access memory (NVRAM). Specifically, during application, the processor 1510, the interface circuit 1530, and the memory 1550 are coupled together through a bus system 1520. The bus system 1520 may further include a power bus, a control bus, a status signal bus, and the like in addition to a data bus. However, for clear description, various types of buses in
The methods disclosed in the embodiments of this disclosure may be applied to the processor 1510, or may be implemented by the processor 1510. The processor 1510 may be an integrated circuit chip and has a signal processing capability. In an implementation process, blocks in the foregoing methods can be implemented by using a hardware integrated logical circuit in the processor 1510, or by using instructions in a form of software. The processor 1510 may be a general purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field programmable gate array (FPGA) or another programmable logic device, a discrete gate or a transistor logic device, or a discrete hardware component. The processor 1510 may implement or perform the methods, the blocks, and logical block diagrams that are disclosed in the embodiments of this disclosure. The general purpose processor may be a microprocessor, or the processor may be any conventional processor or the like. Blocks of the methods disclosed with reference to the embodiments of this disclosure may be directly executed and accomplished by a hardware decoding processor, or may be executed and accomplished by a combination of hardware and a software module in a decoding processor. The software module may be located in a mature storage medium in the art, such as a random access memory, a flash memory, a read-only memory, a programmable read-only memory, an electrically erasable programmable memory, or a register. The storage medium is located in the memory 1550, and the processor 1510 reads information in the memory 1550 and completes the blocks in the foregoing methods in combination with hardware of the processor.
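The paragraph above says a block of a method may be completed either by a hardware integrated logic circuit in the processor or by a software module that the processor reads from memory. A minimal sketch of that dispatch follows; the accelerator query, the fallback routine, and the block identifier are hypothetical and not part of this disclosure:

```c
/* Illustrative sketch: complete a method block either via a hardware logic
 * circuit or via a software module held in memory. Names are hypothetical. */
#include <stdbool.h>
#include <stdio.h>

/* Stand-in for a hardware integrated logic circuit inside the processor. */
static bool hw_accel_available(void)  { return false; }   /* assume no accelerator here */
static int  hw_accel_run(int block_id) { return block_id; }

/* Stand-in for a software module located in a storage medium and read by the processor. */
static int sw_module_run(int block_id)
{
    printf("software module executing block %d\n", block_id);
    return block_id;
}

static int complete_block(int block_id)
{
    if (hw_accel_available())
        return hw_accel_run(block_id);    /* hardware path */
    return sw_module_run(block_id);       /* processor reads the software module from memory */
}

int main(void)
{
    complete_block(106);   /* block identifier is a placeholder */
    return 0;
}
```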
Optionally, the interface circuit 1530 is configured to perform receiving and sending blocks of the communications apparatus and the terminal apparatus in the embodiments shown in
The processor 1510 is configured to perform processing blocks of the communications apparatus and the terminal apparatus in the embodiments shown in
In the foregoing embodiments, the instruction that is stored in the memory and that is to be executed by the processor may be implemented in a form of a computer program product. The computer program product may be written into the memory in advance, or may be downloaded and installed in the memory in the form of software.
The computer program product includes one or more computer instructions. When the computer program instructions are loaded and executed on the computer, the procedures or functions according to the embodiments of this disclosure are all or partially generated. The computer may be a general-purpose computer, a dedicated computer, a computer network, or other programmable apparatuses. The computer instructions may be stored in a computer readable storage medium or may be transmitted from a computer readable storage medium to another computer readable storage medium. For example, the computer instructions may be transmitted from a website, computer, server, or data center to another website, computer, server, or data center in a wired (for example, a coaxial cable, an optical fiber, or a digital subscriber line (DSL)) or wireless (for example, infrared, radio, or microwave) manner. The computer storage medium may be any usable medium accessible by a computer, or a data storage device, such as a server or a data center, integrating one or more usable media. The usable medium may be a magnetic medium (for example, a floppy disk, a hard disk, or a magnetic tape), an optical medium (for example, a DVD), a semiconductor medium (for example, a solid-state drive (SSD)), or the like.
According to an aspect, a computer storage medium is provided. The computer readable storage medium stores an instruction. When the instruction is run, a terminal apparatus is enabled to perform S106, S1061, S1062, S1063, S105, S108, and S110 in the embodiments, and/or another process performed by the terminal apparatus in the technology described in this specification.
According to another aspect, a computer storage medium is provided. The computer readable storage medium stores an instruction. When the instruction is run, a communications apparatus is enabled to perform S103, S102, S104, S107, and S109 in the embodiments, and/or another process performed by the communications apparatus in the technology described in this specification.
According to an aspect, a computer program product including an instruction is provided. The computer program product stores the instruction. When the instruction is run, a terminal apparatus is enabled to perform S106, S1061, S1062, S1063, S105, S108, and S110 in the embodiments, and/or another process performed by the terminal apparatus in the technology described in this specification.
According to another aspect, a computer program product including an instruction is provided. The computer program product stores the instruction. When the instruction is run, a communications apparatus is enabled to perform S103, S102, S104, S107, and S109 in the embodiments, and/or another process performed by the communications apparatus in the technology described in this specification.
According to an aspect, a chip is provided. The chip is applied to a terminal device, the chip includes at least one processor and an interface circuit, and the interface circuit is coupled to the at least one processor. The processor is configured to run a computer program or an instruction, to perform S106, S1061, S1062, S1063, S105, S108, and S110 in the embodiments, and/or another process performed by the terminal device in the technology described in this specification.
According to another aspect, a chip is provided. The chip is applied to a network device, the chip includes at least one processor and an interface circuit, and the interface circuit is coupled to the at least one processor. The processor is configured to run a computer program or an instruction, to perform S103, S102, S104, S107, and S109 in the embodiments, and/or another process performed by the network device in the technology described in this specification.
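Treating the two chip aspects above as one pattern, the following is a hedged sketch of a chip whose processor is coupled to an interface circuit and runs a program that walks through a sequence of steps; all structure names and step bodies are hypothetical placeholders and do not reproduce the disclosed procedures:

```c
/* Illustrative sketch: a chip containing at least one processor coupled to an
 * interface circuit; the processor runs a program made of steps. All names
 * and step bodies are hypothetical placeholders. */
#include <stdio.h>
#include <stddef.h>

typedef struct { const char *name; } interface_circuit;
typedef struct { interface_circuit *io; } processor;
typedef struct { processor cpu; interface_circuit io; } chip;

typedef void (*step_fn)(chip *c);

static void step_send_cc_group_info(chip *c) { printf("[%s] send CC-group info\n", c->io.name); }
static void step_apply_latency_set(chip *c)  { printf("[%s] apply latency set\n",  c->io.name); }

/* The processor executes each configured step in order. */
static void run_program(chip *c, const step_fn *steps, size_t n)
{
    for (size_t i = 0; i < n; i++)
        steps[i](c);
}

int main(void)
{
    chip terminal_chip = { .io = { "terminal-side interface circuit" } };
    terminal_chip.cpu.io = &terminal_chip.io;     /* couple the processor to the interface circuit */

    const step_fn program[] = { step_send_cc_group_info, step_apply_latency_set };
    run_program(&terminal_chip, program, sizeof program / sizeof program[0]);
    return 0;
}
```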
In addition, an embodiment of this disclosure further provides a communications system. The communications system includes the terminal apparatus shown in
A person of ordinary skill in the art may be aware that, in combination with the examples described in the embodiments disclosed in this specification, units and algorithm blocks may be implemented by electronic hardware or a combination of computer software and electronic hardware. Whether the functions are performed by hardware or software depends on particular applications and design constraint conditions of the technical solutions. A person skilled in the art may use different methods to implement the described functions for each particular application, but it should not be considered that the implementation goes beyond the scope of the embodiments of this disclosure.
It may be clearly understood by a person skilled in the art that, for convenient and brief description, for a detailed working process of the foregoing system, apparatus, and unit, refer to a corresponding process in the foregoing method embodiments. Details are not described herein again.
In the several embodiments provided in this disclosure, it should be understood that the disclosed system, apparatus, and method may be implemented in other manners. For example, the described apparatus embodiments are merely examples. For example, the division into units is merely a logical function division. In actual implementation, another division manner may be used. For example, a plurality of units or components may be combined or integrated into another system, or some features may be ignored or not performed. In addition, the displayed or discussed mutual couplings or direct couplings or communication connections may be implemented by using some interfaces. The indirect couplings or communication connections between the apparatuses or units may be implemented in electronic, mechanical, or other forms.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one position, or may be distributed on a plurality of network units. Some or all of the units may be selected based on actual requirements to achieve the objectives of the solutions of the embodiments.
In addition, function units in the embodiments of this disclosure may be integrated into one processing unit, or each of the units may exist alone physically, or two or more units are integrated into one unit.
When functions are implemented in the form of a software functional unit and sold or used as an independent product, the functions may be stored in a computer readable storage medium. Based on such an understanding, the technical solutions of the embodiments of this disclosure essentially, or the part contributing to the prior art, or some of the technical solutions may be implemented in a form of a software product. The software product is stored in a storage medium, and includes several instructions for instructing a computer device (which may be a personal computer, a server, or a network device) to perform all or some of the blocks of the methods described in the embodiments of this disclosure. The foregoing storage medium includes any medium that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
The foregoing descriptions are merely specific implementations of this disclosure, but are not intended to limit the protection scope of this disclosure. Any variation or replacement readily figured out by a person skilled in the art within the technical scope disclosed in this disclosure shall fall within the protection scope of this disclosure. Therefore, the protection scope of this disclosure shall be subject to the protection scope of the claims.
Number | Date | Country | Kind
---|---|---|---
201810150929.0 | Feb 2018 | CN | national
This application is a continuation of International Application No. PCT/CN2019/074952, filed on Feb. 13, 2019, which claims priority to Chinese Patent Application No. 201810150929.0, filed on Feb. 13, 2018. The disclosures of the aforementioned applications are hereby incorporated by reference in their entireties.
Number | Date | Country
---|---|---
Parent: PCT/CN2019/074952 | Feb 2019 | US
Child: 16991807 | | US