WIRELESS PROJECTION METHOD AND APPARATUS

Information

  • Patent Application
  • Publication Number: 20230336742
  • Date Filed: June 28, 2023
  • Date Published: October 19, 2023
Abstract
Embodiments of this application provide a wireless projection method and apparatus, and relate to the communication field, to reduce image tearing and freezing and image quality deterioration occurring in wireless projection, and improve user experience. The wireless projection method is applied to a Wi-Fi communication system, and includes: An electronic device transmits a scalable video coding slice of a first video frame to a display device in a first video transmission period; obtains channel state information that is fed back by the display device and that is for transmitting the first video frame; and selectively enables a retry mechanism based on the channel state information.
Description
TECHNICAL FIELD

This application relates to the communication field, and in particular, to a wireless projection method and apparatus.


BACKGROUND

Wireless projection is a technology in which screen information of a terminal such as a mobile phone or a tablet is coded and compressed, then wirelessly delivered to a large-screen device such as a television or a virtual reality (virtual reality, VR) device by using a Wi-Fi technology (a wireless local area network technology created by the Wi-Fi Alliance based on the Institute of Electrical and Electronics Engineers (institute of electrical and electronics engineers, IEEE) 802.11 standard), and then decoded, displayed, and output.


An existing wireless projection compression and transmission technology is mostly performed in units of whole video frames, which has disadvantages such as a high end-to-end delay and a weak anti-interference capability, and greatly limits application of the wireless projection compression and transmission technology in scenarios such as office, games, and VR. A low-delay wireless projection solution combining source coding and a wireless channel transmission technology can resolve the foregoing problems to some extent. In this solution, a terminal continuously divides each frame of video data into a plurality of data slices (slices) through source coding, and uses a scalable video coding technology on each slice to obtain a plurality of scalable bitstreams (layers, for example, one basic layer and a plurality of enhancement layers) with scalable quality (or resolution). The terminal performs Wi-Fi transmission on the scalable bitstream of the slice in a configured video transmission period (video service period, VSP). In this case, the terminal needs to transmit one slice in a fixed time period in the VSP. If the transmission is not completed, the terminal needs to discard the slice, so that normal transmission of subsequent slices is not affected. Because scalable video coding is performed on each slice, for each slice, when Wi-Fi transmission of the basic layer succeeds but transmission of the enhancement layer fails, a bitstream having poor quality can still be decoded, to achieve capabilities of low-delay transmission and adapting to a wireless channel transmission fluctuation.
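

As a purely illustrative sketch of the data structure described above (the function and field names are assumptions, and the byte-range split only stands in for real scalable video coding), a frame can be pictured as a list of slices, each coded into one basic layer and several enhancement layers that can be dropped independently of the basic layer:

```python
def split_slice_into_layers(slice_bitstream: bytes, num_enhancement_layers: int = 2) -> dict:
    # Toy partition: real scalable video coding splits by quality/resolution,
    # not by byte ranges; this only fixes the "one basic layer plus several
    # enhancement layers" structure referred to in the text.
    chunk = max(1, len(slice_bitstream) // (num_enhancement_layers + 1))
    layers = {"base": slice_bitstream[:chunk]}
    for i in range(1, num_enhancement_layers + 1):
        end = None if i == num_enhancement_layers else (i + 1) * chunk
        layers[f"enh{i}"] = slice_bitstream[i * chunk:end]
    return layers

# A frame divided into slices, each slice coded into base + enhancement layers.
frame = b"\x00" * 1200
slices = [frame[i:i + 300] for i in range(0, len(frame), 300)]
layered_slices = [split_slice_into_layers(s) for s in slices]
```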


However, wireless channel interference is usually burst interference. The foregoing solution resists a channel capacity fluctuation only through scalable coding, and therefore has a limited capability. Especially when interference in the Wi-Fi environment is strong, the basic layer often fails to be correctly transmitted within its time period. In this case, image tearing and freezing and image quality deterioration occur in wireless projection. This affects user experience.


SUMMARY

This application provides a wireless projection method and apparatus, to reduce image tearing and freezing and image quality deterioration occurring in wireless projection, and improve user experience.


According to a first aspect, a wireless projection method is provided. The wireless projection method is applied to a Wi-Fi communication system, and an electronic device is connected to a display device through wireless fidelity Wi-Fi. The wireless projection method includes: The electronic device transmits a scalable video coding slice of a first video frame to the display device in a first video transmission period; the electronic device obtains channel state information that is fed back by the display device and that is for transmitting any scalable video coding slice of the first video frame; and the electronic device selectively enables a retry mechanism based on the channel state information. The retry mechanism is that the electronic device transmits a first scalable video coding slice of a second video frame to the display device in a first time period in a second video transmission period, and transmits, to the display device in at least one another time period after the first time period, a portion that is of the first scalable video coding slice and that is discarded in the first time period. One another scalable video coding slice of the second video frame is transmitted in the at least one another time period. In this way, the electronic device can first transmit a scalable video coding slice of a video frame in a video transmission period, for example, transmit one scalable video coding slice in a time period in a video transmission period in an initial state. Certainly, if transmission of the scalable video coding slice is not completed in a corresponding time period, the scalable video coding slice is directly discarded. Then, the electronic device obtains the channel state information that is fed back by the display device and that is for transmitting any scalable video coding slice of the first video frame. For example, after each time period, the channel state information for transmitting the scalable video coding slice in the time period may be obtained. It may be understood that the channel state information may reflect a portion that is of the scalable video coding slice and that is discarded in the corresponding time period. For example, a channel state may be a total quantity of data packets of the scalable video coding slice transmitted in one time period and a quantity of data packets that are successfully transmitted. In this way, the electronic device may selectively enable the retry mechanism based on the channel state information. For example, if the electronic device determines, based on the channel state information, that channel quality is poor, the electronic device enables the retry mechanism; or when channel quality is good, the electronic device does not enable the retry mechanism; or when the retry mechanism is already enabled, the electronic device disables the retry mechanism when determining that channel quality is good. The retry mechanism is that the electronic device transmits one scalable video coding slice in a plurality of time periods in a subsequent video transmission period, for example, transmits a first scalable video coding slice of another video frame to a display device in a first time period in a subsequent video transmission period, and transmits, to the display device in at least one another time period after the first time period, a portion that is of the first scalable video coding slice and that is discarded in the first time period. 
One another scalable video coding slice of the second video frame is transmitted in the at least one another time period. In this way, when one scalable video coding slice has a discarded portion in a corresponding time period, the discarded portion may be transmitted in another time period. For example, when transmission of one or more of a basic layer and at least one enhancement layer of a scalable video coding slice of a slice is not completed in a corresponding time period, the basic layer and the at least one enhancement layer have an opportunity of being transmitted in a next time period. In this way, the display device may perform decoding with reference to the first scalable video coding slices transmitted in the two time periods, to ensure successful transmission of the scalable video coding slice as much as possible on the premise that a portion of a delay is sacrificed, reduce image tearing and freezing and image quality deterioration occurring in wireless projection, and improve user experience.
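

The scheduling idea of the retry mechanism can be sketched as follows in Python; the names, the per-period capacity model, and the assumption that exactly the tail of a slice is what gets discarded are illustrative simplifications, not the claimed method.

```python
from collections import deque

def schedule_time_periods(slices, capacity_per_period, retry_enabled):
    # `slices[k]` is the list of data units (packets) of scalable video
    # coding slice k; `capacity_per_period[k]` is how many units the channel
    # carries in time period k.  Returns the units transmitted per period.
    carried = []                # portion discarded in the previous time period
    transmitted = []
    for k, slice_units in enumerate(slices):
        queue = deque((carried + slice_units) if retry_enabled else slice_units)
        budget = capacity_per_period[k]
        sent_now = [queue.popleft() for _ in range(min(budget, len(queue)))]
        transmitted.append(sent_now)
        # Units that did not fit are discarded outright, or carried into the
        # next time period when the retry mechanism is enabled.
        carried = list(queue) if retry_enabled else []
    return transmitted

# Without retry, the tail of slice 1 is lost; with retry it is resent in
# period 2 ahead of slice 2 (sacrificing a portion of the delay budget).
s1 = [f"slice1-unit{i}" for i in range(6)]
s2 = [f"slice2-unit{i}" for i in range(6)]
print(schedule_time_periods([s1, s2], [4, 8], retry_enabled=True))
```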


In a possible implementation, the scalable video coding slice includes one basic layer and at least one enhancement layer; and that the electronic device transmits a first scalable video coding slice of a second video frame in a first time period in a second video transmission period includes that the electronic device transmits the basic layer and the at least one enhancement layer of the first scalable video coding slice of the second video frame to the display device in the first time period in the second video transmission period based on transmission priorities of the basic layer and the at least one enhancement layer of the first scalable video coding slice of the second video frame. Generally, scalable video coding is performed on each slice of the first video frame to obtain the scalable video coding slice. Therefore, for each scalable video coding slice, based on the transmission priorities, when the basic layer is successfully transmitted but the enhancement layer fails to be transmitted, a bitstream having poor quality can still be decoded, to achieve capabilities of low-delay transmission and adapting to a wireless channel transmission fluctuation.


In a possible implementation, the scalable video coding slice includes the one basic layer and the at least one enhancement layer, the at least one another time period includes a second time period in the second video transmission period, and the another scalable video coding slice includes a second scalable video coding slice of the second video frame.


That the electronic device transmits, to the display device in at least one another time period after the first time period, a portion that is of the first scalable video coding slice and that is discarded in the first time period includes that the electronic device transmits, to the display device in the second time period based on the transmission priorities of the basic layer and the at least one enhancement layer of the first scalable video coding slice and transmission priorities of a basic layer and at least one enhancement layer of the second scalable video coding slice, the portion that is of the first scalable video coding slice and that is discarded and the second scalable video coding slice. The transmission priority of the basic layer of the first scalable video coding slice is higher than the transmission priority of the basic layer of the second scalable video coding slice, the transmission priority of the basic layer of the second scalable video coding slice is higher than the transmission priority of the enhancement layer of the first scalable video coding slice, and the transmission priority of the enhancement layer of the first scalable video coding slice is higher than the transmission priority of the enhancement layer of the second scalable video coding slice. In this way, data transmitted in the second time period may be transmitted in a manner of preferentially transmitting the basic layer of the first scalable video coding slice, then transmitting the basic layer of the second scalable video coding slice, then transmitting the enhancement layer of the first scalable video coding slice, and finally transmitting the enhancement layer of the second scalable video coding slice. This preferentially ensures that the basic layer of each scalable video coding slice is transmitted successfully in sequence in the corresponding time period, and then ensures that the enhancement layer of each scalable video coding slice is transmitted successfully in sequence in the time period.
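

One way to realize this ordering is a simple sort key, as in the sketch below; the tuple representation of layers is an assumption made only for illustration.

```python
def second_period_order(first_slice_leftover, second_slice_units):
    # Each unit is a hypothetical (slice_index, layer) tuple, where layer 0
    # is the basic layer and layers >= 1 are enhancement layers.  The sort
    # key realizes the priority order stated above: basic layer of slice 1,
    # then basic layer of slice 2, then enhancement layers of slice 1, then
    # enhancement layers of slice 2.
    units = list(first_slice_leftover) + list(second_slice_units)
    return sorted(units, key=lambda u: (u[1] >= 1, u[0], u[1]))

# Discarded portion of slice 1 (basic layer and two enhancement layers)
# plus slice 2 (basic layer and one enhancement layer):
order = second_period_order([(1, 0), (1, 1), (1, 2)], [(2, 0), (2, 1)])
# -> [(1, 0), (2, 0), (1, 1), (1, 2), (2, 1)]
```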


In a possible implementation, the discarded portion of the first scalable video coding slice includes the basic layer and the at least one enhancement layer of the first scalable video coding slice; that the electronic device transmits, to the display device in at least one another time period after the first time period, a portion that is of the first scalable video coding slice and that is discarded in the first time period includes that the electronic device transmits the basic layer and the at least one enhancement layer of the first scalable video coding slice to the display device in the at least one another time period after the first time period. In this solution, if both the basic layer and the at least one enhancement layer of the first scalable video coding slice are discarded (that is, transmission is not completed) in the first time period of the second video transmission period, the basic layer and the at least one enhancement layer of the first scalable video coding slice may be retransmitted in the at least one another time period after the first time period.


In a possible implementation, a discarded portion of the first scalable video coding slice includes the at least one enhancement layer of the first scalable video coding slice; that the electronic device transmits, to the display device in at least one another time period after the first time period, a portion that is of the first scalable video coding slice and that is discarded in the first time period includes that the electronic device transmits the at least one enhancement layer of the first scalable video coding slice to the display device in the at least one another time period after the first time period. In this solution, if only the at least one enhancement layer of the first scalable video coding slice is discarded (transmission is not completed) in the first time period, only the at least one enhancement layer of the first scalable video coding slice may be retransmitted in the at least one another time period after the first time period.
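

The two implementations above differ only in which layers of the first scalable video coding slice were discarded; a hypothetical helper (names are assumptions) can derive the retransmission set directly from per-layer delivery status, yielding either case:

```python
def portion_to_retransmit(delivered):
    # `delivered` maps layer name -> whether that layer of the first scalable
    # video coding slice completed transmission in the first time period.
    return [layer for layer, ok in delivered.items() if not ok]

# Both the basic layer and the enhancement layer were discarded:
portion_to_retransmit({"base": False, "enh1": False})  # ['base', 'enh1']
# Only the enhancement layer was discarded:
portion_to_retransmit({"base": True, "enh1": False})   # ['enh1']
```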


In a possible implementation, the method further includes: after the at least one another time period, reconstructing a reference frame based on the successfully transmitted portion of the first scalable video coding slice; and generating first scalable video coding of a third video frame based on the reference frame. In this way, when the electronic device codes and transmits the third video frame, reference may be made to the portions of the first scalable video coding slice of the second video frame that are successfully transmitted in the first time period and in the at least one another time period after the first time period. The resulting inter-predictive frame (predictive frame, P frame for short) coding can reduce the amount of data to be coded, and helps ensure that the third video frame can be decoded by the display device based on the portion that is actually successfully transmitted.
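

The reference-frame step can be pictured with the following sketch (the names and the prefix rule are assumptions about a typical layered codec, not the application's implementation): the encoder keeps only the layers that both ends actually have and rebuilds its reference from them, so that P-frame coding of the third video frame predicts from a picture the display device can also reconstruct.

```python
def usable_reference_layers(coded_layers, delivered_layer_names):
    # `coded_layers` is the ordered list of (name, data) layers of the first
    # scalable video coding slice of the second video frame; only the prefix
    # of layers that actually reached the display device (over the first time
    # period plus the retry time periods) is used, because each enhancement
    # layer refines the layers below it.
    usable = []
    for name, data in coded_layers:
        if name not in delivered_layer_names:
            break
        usable.append((name, data))
    return usable  # a real encoder would decode these into the reference picture

# Basic layer and first enhancement layer arrived; the second did not:
usable_reference_layers(
    [("base", b"..."), ("enh1", b"..."), ("enh2", b"...")],
    {"base", "enh1"},
)  # -> [('base', b'...'), ('enh1', b'...')]
```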


According to a second aspect, a wireless projection method is provided. The wireless projection method is applied to a Wi-Fi communication system, and an electronic device is connected to a display device through wireless fidelity Wi-Fi. The wireless projection method includes: The display device receives a scalable video coding slice of a first video frame that is transmitted by the electronic device to the display device in a first video transmission period; and the display device feeds back, to the electronic device, channel state information for transmitting the first video frame. The electronic device selectively enables a retry mechanism based on the channel state information. The display device decodes the scalable video coding slice transmitted by the electronic device in the first video transmission period; the display device receives a first scalable video coding slice of a second video frame transmitted by the electronic device in a first time period in a second video transmission period in the retry mechanism; and the display device receives a portion that is of the first scalable video coding slice and that is discarded in the first time period and that is transmitted by the electronic device in at least one another time period after the first time period. One another scalable video coding slice of the second video frame is transmitted in the at least one another time period; and the display device decodes the first scalable video coding slice of the second video frame transmitted by the electronic device in the first time period and the portion that is of the first scalable video coding slice and that is discarded in the first time period and that is transmitted in the at least one another time period after the first time period.
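

On the display device side, decoding the first scalable video coding slice received in the first time period together with its retransmitted portion amounts to merging two partial receptions of the same slice; the sketch below (the packet layout is an assumption) shows the idea before the decoder is invoked.

```python
def merge_slice_transmissions(first_period_packets, retry_period_packets):
    # Packets are hypothetical (seq, payload) tuples.  The copy received in
    # the first time period and the retransmitted portion received in a later
    # time period are merged, duplicates dropped, and sequence order restored
    # before slice-level decoding.
    by_seq = {}
    for seq, payload in list(first_period_packets) + list(retry_period_packets):
        by_seq.setdefault(seq, payload)     # keep the first copy of each packet
    return [by_seq[seq] for seq in sorted(by_seq)]
```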


In a possible implementation, the scalable video coding slice includes one basic layer and at least one enhancement layer; and that the display device receives a first scalable video coding slice of a second video frame transmitted by the electronic device in a first time period in a second video transmission period includes: The display device receives the basic layer and the at least one enhancement layer of the first scalable video coding slice of the second video frame that are transmitted by the electronic device in the first time period in the second video transmission period based on transmission priorities of the basic layer and the at least one enhancement layer of the first scalable video coding slice.


In a possible implementation, the scalable video coding slice includes the one basic layer and the at least one enhancement layer; and that the display device receives a portion that is of the first scalable video coding slice and that is discarded in the first time period and that is transmitted by the electronic device in at least one another time period after the first time period includes: The display device receives the basic layer and the at least one enhancement layer that are of the first scalable video coding slice and that are transmitted by the electronic device in the at least one another time period after the first time period.


In a possible implementation, the scalable video coding slice includes the one basic layer and the at least one enhancement layer; and that the display device receives a portion that is of the first scalable video coding slice and that is discarded in the first time period and that is transmitted by the electronic device in at least one another time period after the first time period includes: The display device receives the at least one enhancement layer of the first scalable video coding slice transmitted by the electronic device in the at least one another time period after the first time period.


According to a third aspect, a wireless projection apparatus is provided, used in a Wi-Fi communication system. The wireless projection apparatus may be an electronic device, or may be a module or a chip in the electronic device, or the electronic device may be a chip or a system on chip. The wireless projection apparatus includes: A transmitter is configured to transmit a scalable video coding slice of a first video frame to a display device in a first video transmission period. A receiver is configured to obtain channel state information that is fed back by the display device and that is for transmitting the first video frame. A processor is configured to selectively enable a retry mechanism based on the channel state information obtained by the receiver. The retry mechanism is that the transmitter is further configured to: transmit a first scalable video coding slice of a second video frame to the display device in a first time period in a second video transmission period, and transmit, to the display device in at least one another time period after the first time period, a portion that is of the first scalable video coding slice and that is discarded in the first time period. One another scalable video coding slice of the second video frame is transmitted in the at least one another time period.


In a possible implementation, the scalable video coding slice includes one basic layer and at least one enhancement layer; and the transmitter is specifically configured to transmit the basic layer and the at least one enhancement layer of the first scalable video coding slice of the second video frame to the display device in the first time period in the second video transmission period based on transmission priorities of the basic layer and the at least one enhancement layer of the first scalable video coding slice of the second video frame.


In a possible implementation, the scalable video coding slice includes the one basic layer and the at least one enhancement layer, the at least one another time period includes a second time period in the second video transmission period, and the another scalable video coding slice includes a second scalable video coding slice of the second video frame. The transmitter is specifically configured to transmit, to the display device in the second time period based on the transmission priorities of the basic layer and the at least one enhancement layer of the first scalable video coding slice and transmission priorities of a basic layer and at least one enhancement layer of the second scalable video coding slice, the portion that is of the first scalable video coding slice and that is discarded and the second scalable video coding slice. The transmission priority of the basic layer of the first scalable video coding slice is higher than the transmission priority of the basic layer of the second scalable video coding slice, the transmission priority of the basic layer of the second scalable video coding slice is higher than the transmission priority of the enhancement layer of the first scalable video coding slice, and the transmission priority of the enhancement layer of the first scalable video coding slice is higher than the transmission priority of the enhancement layer of the second scalable video coding slice.


In a possible implementation, the discarded portion of the first scalable video coding slice includes the basic layer and the at least one enhancement layer of the first scalable video coding slice; and the transmitter is specifically configured to transmit the basic layer and the at least one enhancement layer of the first scalable video coding slice to the display device in the at least one another time period after the first time period.


In a possible implementation, the discarded portion of the first scalable video coding slice includes the at least one enhancement layer of the first scalable video coding slice; and the transmitter is specifically configured to transmit the at least one enhancement layer of the first scalable video coding slice to the display device in the at least one another time period after the first time period.


In a possible implementation, the processor is further configured to: after the at least one another time period, reconstruct a reference frame based on a successfully transmitted portion of the first scalable video coding slice; and generate first scalable video coding of a third video frame based on the reference frame.


According to a fourth aspect, a wireless projection apparatus is provided, used in a Wi-Fi communication system. The wireless projection apparatus may be a display device, or may be a module or a chip in the display device, or the display device may be a chip or a system on chip. The wireless projection apparatus includes: A receiver is configured to receive a scalable video coding slice of a first video frame that is transmitted by an electronic device to the display device in a first video transmission period. A transmitter is configured to feed back, to the electronic device, channel state information for transmitting the first video frame. The electronic device selectively enables a retry mechanism based on the channel state information. A processor is configured to decode the scalable video coding slice transmitted by the electronic device in the first video transmission period. The receiver is further configured to: receive a first scalable video coding slice of a second video frame transmitted by the electronic device in a first time period in a second video transmission period in the retry mechanism; and receive a portion that is of the first scalable video coding slice and that is discarded in the first time period and that is transmitted by the electronic device in at least one another time period after the first time period. One another scalable video coding slice of the second video frame is transmitted in the at least one another time period. The processor is further configured to decode the first scalable video coding slice of the second video frame transmitted by the electronic device in the first time period and the portion that is of the first scalable video coding slice and that is discarded in the first time period and that is transmitted in the at least one another time period after the first time period.


In a possible implementation, the scalable video coding slice includes one basic layer and at least one enhancement layer; and the receiver is specifically configured to receive the basic layer and the at least one enhancement layer of the first scalable video coding slice of the second video frame that are transmitted by the electronic device in the first time period in the second video transmission period based on transmission priorities of the basic layer and the at least one enhancement layer of the first scalable video coding slice.


In a possible implementation, the scalable video coding slice includes the one basic layer and the at least one enhancement layer; and the receiver is specifically configured to receive the basic layer and the at least one enhancement layer of the first scalable video coding slice that are transmitted by the electronic device in the at least one another time period after the first time period.


In a possible implementation, the scalable video coding slice includes the one basic layer and the at least one enhancement layer; and the receiver is specifically configured to receive the at least one enhancement layer of the first scalable video coding slice transmitted by the electronic device in the at least one another time period after the first time period.


According to a fifth aspect, a computer-readable storage medium is provided. The computer-readable storage medium stores a computer program. When the computer program is run on a computer, the computer is enabled to perform the method according to any one of the foregoing aspects.


According to a sixth aspect, a computer program product including instructions is provided. The computer program product includes computer program code. When the computer program code is run on a computer, the computer is enabled to perform the method according to any one of the foregoing aspects.


According to a seventh aspect, a communication system is provided. The communication system includes the wireless projection apparatus in the third aspect and the wireless projection apparatus in the fourth aspect. In an example, the wireless projection apparatus according to the third aspect may be an electronic device, for example, a mobile phone; and the wireless projection apparatus according to the fourth aspect may be a display device, for example, a radio television set.


For technical effects brought by any design manner of the second to the seventh aspects, refer to the technical effects brought by different design manners of the first aspect. Details are not described herein again.





BRIEF DESCRIPTION OF DRAWINGS


FIG. 1 is a schematic diagram of an architecture of a communication system according to an embodiment of this application;



FIG. 2 is a schematic diagram of a functional architecture of an electronic device according to an embodiment of this application;



FIG. 3 is a schematic diagram of a pipeline structure of a wireless projection method according to an embodiment of this application;



FIG. 4 is a schematic diagram of a structure of a wireless projection apparatus according to an embodiment of this application;



FIG. 5 is a schematic flowchart of a wireless projection method according to an embodiment of this application;



FIG. 6 is a schematic diagram of a pipeline structure of a wireless projection method according to another embodiment of this application;



FIG. 7 is a schematic diagram of a pipeline structure of a wireless projection method according to still another embodiment of this application;



FIG. 8 is a schematic diagram of a structure of a wireless projection apparatus according to another embodiment of this application; and



FIG. 9 is a schematic diagram of a structure of a wireless projection apparatus according to still another embodiment of this application.





DESCRIPTION OF EMBODIMENTS

The terms “first” and “second” mentioned below are merely intended for a purpose of description, and shall not be understood as an indication or implication of relative importance or implicit indication of the number of indicated technical features. Therefore, a feature limited by “first” or “second” may explicitly or implicitly include one or more features. In the description of embodiments of this application, unless otherwise specified, “a plurality of” means two or more than two.


Currently, a terminal can perform coding and compression on screen information of the terminal by using a wireless projection technology, then wirelessly deliver the screen information to a large-screen device such as a television or a VR device by using a Wi-Fi technology, and then decode, display, and output the screen information. A low-delay wireless projection solution combining source coding and a wireless channel transmission technology can reduce disadvantages such as a high end-to-end delay and a weak anti-interference capability to some extent. A main principle of the low-delay wireless projection solution combining the source coding and the wireless channel transmission technology is as follows: The terminal continuously divides each frame of video data into a plurality of slices based on the source coding, and uses a scalable video coding technology on each slice to obtain a plurality of layers with quality (or resolution) scaled. The terminal performs Wi-Fi transmission on a scalable bitstream of the slice in a configured VSP. In this case, the terminal needs to transmit one slice in a time period in the VSP. If the transmission is not completed, the terminal needs to discard the slice, so that normal transmission of subsequent slices is not affected. Because scalable video coding is performed on each slice, for each slice, when Wi-Fi transmission of a basic layer succeeds but transmission of an enhancement layer fails, a bitstream having poor quality can still be decoded, to achieve capabilities of low-delay transmission and adapting to a wireless channel transmission fluctuation. However, wireless channel interference is usually burst interference. The foregoing solution resists a channel capacity fluctuation only by using scalable coding, resulting in a limited capability. Especially, when interference is strong in Wi-Fi environments, the basic layer often fails to be correctly transmitted within a time period. In this case, image tearing and freezing and image quality deterioration occur in wireless projection. This affects user experience.


Embodiments of this application provide a wireless projection method. The wireless projection method may be applied to a Wi-Fi communication system. In the Wi-Fi communication system, an electronic device is connected to a display device through wireless fidelity Wi-Fi. The electronic device wirelessly delivers screen information of the electronic device to the display device, and then decodes, displays, and outputs the screen information. By using the wireless projection method provided in embodiments, the electronic device transmits a scalable video coding slice of a first video frame to the display device in a first video transmission period. One scalable video coding slice of the first video frame is correspondingly transmitted in one time period in the first video transmission period. For example, the electronic device performs scalable video coding on a plurality of data slices of the first video frame in sequence to generate a plurality of scalable video coding slices. For example, the electronic device codes a plurality of data slices of the first video frame into a plurality of the scalable video coding slices. The one scalable video coding slice may include a basic layer and at least one enhancement layer; and then the electronic device transmits the scalable video coding slice of the video frame on each time period in the video transmission period. The electronic device transmits one scalable video coding slice in a time period in a video transmission period in an initial state. Certainly, if transmission of the scalable video coding slice is not completed in a corresponding time period, the scalable video coding slice is directly discarded. Then, the electronic device obtains channel state information that is fed back by the display device and that is for transmitting any scalable video coding slice of the first video frame. For example, after each time period, the channel state information for transmitting the scalable video coding slice in the time period may be obtained. It may be understood that the channel state information may reflect a portion that is of the scalable video coding slice and that is discarded in the corresponding time period. For example, the channel state information may be a total quantity of data packets of the scalable video coding slice transmitted in one time period and a quantity of data packets that are successfully transmitted. In this way, the electronic device may selectively enable the retry mechanism based on the channel state information. For example, if the electronic device determines, based on the channel state information, that channel quality is poor, the electronic device enables the retry mechanism; or when channel quality is good, the electronic device does not enable the retry mechanism; or when the retry mechanism is already enabled, the electronic device disables the retry mechanism when determining that channel quality is good. The retry mechanism is that the electronic device transmits one scalable video coding slice in a plurality of time periods in a subsequent video transmission period, for example, transmits a first scalable video coding slice of another video frame to a display device in a first time period in a subsequent video transmission period, and transmits, to the display device in the at least one another time period after the first time period, a portion that is of the first scalable video coding slice and that is discarded in the first time period. 
One another scalable video coding slice of a second video frame is transmitted in the at least one another time period. In this way, when one scalable video coding slice has a discarded portion in a corresponding time period, the discarded portion may be transmitted in another time period. For example, when one or more of a basic layer and at least one enhancement layer of the scalable video coding slice of a slice is discarded (transmission is not completed) in a corresponding time period, the basic layer and the at least one enhancement layer have an opportunity of being transmitted in a next time period. In this way, the display device may perform decoding with reference to the first scalable video coding slices transmitted in the two time periods, to ensure successful transmission of the scalable video coding slice as much as possible on the premise that a portion of a delay is sacrificed, reduce image tearing and freezing and image quality deterioration occurring in wireless projection, and improve user experience. In addition, after the at least one another time period, a reference frame is reconstructed based on the successfully transmitted portion of the first scalable video coding slice, and the first scalable video coding of a third video frame is generated based on the reference frame. In this way, when the electronic device transmits the third video frame, for the first scalable video coding slice of the second video frame, reference may be made to portions that are successfully transmitted in the first time period and the at least one another time period after the first time period. In this way, formed inter-predictive frame (predictive frame, P frame for short) coding can reduce an amount of data to be coded, and help ensure that the third video frame of the electronic device can be decoded based on an actually successfully transmitted portion after being transmitted to the display device.
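

Condensing this overview into a sketch (the threshold, the channel object, and the "tail is what gets discarded" simplification are assumptions, not part of the claimed method), the transmitter-side behavior is roughly:

```python
def transmitter_loop(frames, channel, quality_threshold=0.9):
    # Illustrative transmitter-side control loop: `frames` yields lists of
    # scalable video coding slices (each a list of packets), and
    # `channel.send(packets)` models one time period of the VSP, returning
    # the fed-back channel state information as (total packets, packets
    # successfully transmitted).
    retry_enabled = False
    carried = []                                  # discarded portion of the previous slice
    for frame in frames:
        for slice_packets in frame:
            payload = (carried if retry_enabled else []) + slice_packets
            total, ok = channel.send(payload)     # one time period
            carried = payload[ok:]                # assume the tail is what was discarded
            # Selectively enable (or disable) the retry mechanism for the
            # following time periods based on the feedback.
            retry_enabled = (ok / total if total else 1.0) < quality_threshold
    return retry_enabled
```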


The following describes in detail a wireless projection method and an apparatus provided in embodiments of this application with reference to the accompanying drawings.



FIG. 1 is a schematic diagram of a communication system to which a wireless projection method is applied according to an embodiment of this application. As shown in FIG. 1, the communication system includes an electronic device 100 and a display device 200. The electronic device 100 may be connected to the display device 200 through a wireless network, for example, through a Wi-Fi network.


In some embodiments, the electronic device 100 sends content displayed on a screen to the display device 200 through the Wi-Fi network, and the display device 200 displays the content. In other words, in a wireless projection process, display content of the electronic device 100 is the same as that of the display device 200. For details, refer to a schematic diagram of a principle of wireless projection in FIG. 2. The electronic device 100 may have the following functions. First, the electronic device 100 can display a video frame as a screen image, and can capture the displayed screen image as a video frame for frame buffering. In addition, the electronic device 100 can provide frame-level timing, to be specific, divide a video frame into a plurality of slices in sequence based on a VSP, and each time period in the VSP corresponds to one slice. The electronic device 100 can perform scalable video coding (scalable video coding, SVC) (coding 1 to coding 8 as shown in FIG. 3) on a slice, for example, split a slice into a plurality of resolution, quality, and frame rate layers. The electronic device 100 may adjust a coding bit rate of a slice based on a channel state. In other words, the slice is divided in terms of time, space, and quality, and multi-layer bitstreams (including a basic layer and an enhancement layer) are output. Data of the basic layer may enable the display device 200 to decode basic video content completely and normally, but a video image obtained by using the data of the basic layer may have a low frame rate, low resolution, or low quality. When a channel is limited or a channel environment is complex, it can be ensured that a decoder can receive a smooth video image that can be watched. When the channel environment is good or channel resources are abundant, data of the enhancement layer may be transmitted to improve a frame rate, resolution, or video quality. The enhancement layer may be multi-layer coded, which means that in a range of a total bit rate of a video bitstream, a higher received bit rate indicates better video quality. The electronic device 100 may further negotiate a transmission period with the display device 200, for example, negotiate a VSP. Specifically, the electronic device 100 sends, to the display device 200 through a Wi-Fi unicast frame, a negotiation parameter that carries the VSP. In addition, the electronic device 100 may further periodically perform receiving and sending synchronization with the display device 200, to ensure VSP synchronization at both ends. For example, a timing synchronization function (timing synchronization function, TSF) mechanism is used to ensure time synchronization between the electronic device 100 and the display device 200. Specifically, the electronic device 100 needs to periodically send a beacon (beacon) frame, and the display device 200 needs to be periodically woken up to receive the beacon frame. For example, the display device 200 initializes a TSF timer, uses the beacon frame to notify another electronic device 100 of local time of the display device 200, and sets a time stamp for sending the beacon frame, to implement receiving and sending synchronization. The electronic device 100 may further transmit scalable video coding data of a slice within a corresponding time period in the VSP (coding 1 to coding 3 are respectively transmitted in time periods VSP1 to VSP3 as shown in FIG. 
3) through transmission control, and guides, with reference to the channel state, coding by the electronic device 100 on the slice. A function of the display device 200 is as follows: The display device 200 can negotiate the transmission period with the electronic device 100, receive the scalable video coding data of the slice in a negotiated VSP through transmission control, and perform data packet reordering on the scalable video coding data of the slice (for example, perform data packet reordering based on a Wi-Fi data packet sequence number (seq), and discard a packet that fails to be transmitted). The display device 200 can perform sending and receiving synchronization with the electronic device 100 based on the foregoing sending and receiving synchronization mechanism. The display device 200 can also perform slice-level scalable video decoding on reordered data packets with reference to the channel state (as shown in FIG. 3, received coding starts to be decoded in VSP2), perform data merging, and finally display a video frame on the screen. In this embodiment of this application, the electronic device 100 may further negotiate at least one another time period (a VSP retry, or referred to as a video transmission period in a retry mechanism) after the first time period with the display device 200. In addition, the electronic device 100 may further reconstruct, after the at least one another time period, a reference frame based on a successfully transmitted portion of a scalable video coding slice. It may be understood that the reference frame is a reference object based on which P-frame coding is used for a subsequent transmitted video frame of the second video frame. For example, usually, when P-frame coding is performed on a 1st slice of a subsequent third video frame with reference to scalable video coding of a 1st slice of the second video frame, one or more layers of the scalable video coding of the 1st slice of the second video frame may fail to be completely transmitted in corresponding time periods. In this case, scalable video coding is performed on the 1st slice of the third video frame with reference to the scalable video coding of the 1st slice of the second video frame. This may result in that the display device 200 cannot correctly perform decoding. Therefore, in this embodiment of this application, the electronic device 100 reconstructs the video frame with reference to a successfully transmitted portion of the scalable video coding of the 1st slice of the second video frame, and uses the reconstructed video frame as a reference object for performing scalable video coding on the 1st slice of the third video frame.
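

The display device's reordering step mentioned above (reorder by the Wi-Fi data packet sequence number and discard packets that failed transmission) can be sketched as follows; the packet representation is an assumption used only for illustration.

```python
def reorder_for_decoding(received_packets):
    # `received_packets` are hypothetical dicts {"seq": int, "ok": bool,
    # "payload": bytes} for one slice.  Packets that failed transmission are
    # discarded and the rest are restored to Wi-Fi sequence-number order
    # before slice-level scalable video decoding.
    good = (p for p in received_packets if p["ok"])
    return [p["payload"] for p in sorted(good, key=lambda p: p["seq"])]
```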


For example, the electronic device 100 includes a terminal device that has an image display function, such as a mobile phone, a tablet computer, a laptop computer, an ultra-mobile personal computer (ultra-mobile personal computer, UMPC), a netbook, a personal digital assistant (personal digital assistant, PDA), an in-vehicle device, a user terminal (user terminal, UT), a terminal device (user device, UD), user equipment (user equipment, UE), or an artificial intelligence (artificial intelligence, AI) device. A specific type of the electronic device 100 is not limited in this embodiment of this application.


The display device 200 includes, for example, a terminal device that can implement a large-screen display function, such as a laptop computer, a large-screen display device (such as a smart screen), a projection device, an AI device, or a tablet computer. A specific type of the display device 200 is not limited in this embodiment of this application.


Optionally, the electronic device 100 and the display device 200 in this embodiment of this application may be implemented by different devices. For example, the electronic device 100 and the display device 200 in this embodiment of this application may be implemented by using a wireless projection apparatus in FIG. 4. As shown in FIG. 4, the wireless projection apparatus 200 includes at least one processor 201, a communication line 202, a memory 203, and at least one communication interface 204. The memory 203 may alternatively be included in the processor 201.


The processor 201 may be a central processing unit (central processing unit, CPU), or may be another general-purpose processor, a digital signal processor (digital signal processor, DSP), an application-specific integrated circuit (application-specific integrated circuit, ASIC), a field programmable gate array (field programmable gate array, FPGA), or another programmable logic device, a discrete gate or a transistor logic device, a discrete hardware component, or the like. The general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like.


The communication line 202 may be a circuit connecting the foregoing components to each other and transmitting information between the foregoing components.


The communication interface 204 is configured to communicate with another device. In embodiments of this application, the communication interface 204 may be a module, a circuit, a bus, an interface, a transceiver, or another apparatus that can implement a communication function, and is configured to communicate with another device. Optionally, when the communication interface 204 is the transceiver, the transceiver may be an independently disposed transmitter, and the transmitter may be configured to send information to another device. Alternatively, the transceiver may be an independently disposed receiver, and is configured to receive information from another device. Alternatively, the transceiver may be a component integrating functions of sending and receiving information. A specific implementation of the transceiver is not limited in embodiments of this application.


The memory 203 may be a volatile memory or a nonvolatile memory, or may include both a volatile memory and a nonvolatile memory. The nonvolatile memory may be a read-only memory (read-only memory, ROM), a programmable read-only memory (programmable ROM, PROM), an erasable programmable read-only memory (erasable PROM, EPROM), an electrically erasable programmable read-only memory (electrically EPROM, EEPROM), or a flash memory. The volatile memory may be a random access memory (random access memory, RAM), used as an external cache. By way of example but not limitation, many forms of RAM are available, for example, static random access memory (static RAM, SRAM), dynamic random access memory (dynamic RAM, DRAM), synchronous dynamic random access memory (synchronous DRAM, SDRAM), double data rate synchronous dynamic random access memory (double data rate SDRAM, DDR SDRAM), enhanced synchronous dynamic random access memory (enhanced SDRAM, ESDRAM), synchlink dynamic random access memory (synchlink DRAM, SLDRAM) and direct rambus random access memory (direct rambus RAM, DR RAM), or other magnetic storage devices, or any other medium that can be for carrying or storing expected program code in a form of instructions or a data structure and that can be accessed by a computer, but is not limited thereto. The memory 203 may exist independently, and be connected to the processor 201 by using the communication line 202. The memory 203 may alternatively be integrated with the processor 201.


The memory 203 is configured to store computer-executable instructions for implementing the solutions of this application, and the processor 201 controls the execution. The processor 201 is configured to execute the computer-executable instructions stored in the memory 203, to implement the wireless projection method provided in the following embodiments of this application.


It should be noted that the memory described in this specification is intended to include, but is not limited to, these memories and any other memory of a proper type.


Optionally, the computer-executable instructions in embodiments of this application may also be referred to as application program code, instructions, a computer program, or another name. This is not specifically limited in embodiments of this application.


During specific implementation, in an embodiment, the processor 201 may include one or more CPUs, for example, a CPU 0 and a CPU 1 in FIG. 4.


During specific implementation, in an embodiment, the wireless projection apparatus 200 may include a plurality of processors, for example, the processor 201 and a processor 207 in FIG. 4. Each of the processors may be a single-core (single-CPU) processor, or may be a multi-core (multi-CPU) processor. The processor herein may be one or more devices, circuits, and/or processing cores configured to process data (for example, computer program instructions).


During specific implementation, in an embodiment, the wireless projection apparatus 200 may further include an output device 205 and an input device 206. The output device 205 communicates with the processor 201, and may display information in a plurality of manners. For example, the output device 205 may be a liquid crystal display (liquid crystal display, LCD), a light emitting diode (light emitting diode, LED) display device, a cathode ray tube (cathode ray tube, CRT) display device, or a projector (projector). The input device 206 communicates with the processor 201, and may receive an input from a user in a plurality of manners. For example, the input device 206 may be a mouse, a keyboard, a touchscreen device, or a sensor device.


It should be noted that the wireless projection apparatus 200 may be a general-purpose device or a dedicated device. A type of the apparatus is not limited in this embodiment of this application. A structure of the wireless projection apparatus 200 in FIG. 4 does not constitute a limitation on the wireless projection apparatus. An actual wireless projection apparatus may include more or fewer components than those in the figure, or combine some components, or have different component arrangement. The components in the figure may be implemented by hardware, software, or a combination of software and hardware.


The following describes in detail, with reference to the accompanying drawings, the wireless projection method provided in embodiments of this application by using an example in which an electronic device is a mobile phone and a display device is a radio television set. FIG. 5 is a schematic flowchart of a wireless projection method according to an embodiment of this application. As shown in FIG. 5, the method may include the following steps.


S501: A mobile phone establishes a Wi-Fi connection to a radio television set.


The mobile phone and the radio television set may establish the Wi-Fi connection by using a Wi-Fi P2P protocol. To establish the Wi-Fi connection between terminals, each terminal needs to have a transmission capability. In addition, the terminals need to know connection information about each other. The connection information may be a device identifier of the terminal, for example, an internet protocol (internet protocol, IP) address, a port number, or an account logged in on the terminal. The account logged in on the terminal may be an account provided by an operator for a user. The account logged in on the terminal may alternatively be an application account or the like. The transmission capability that the terminal has may be a near field communication capability, or may be a long-distance communication capability. To be specific, a wireless communication protocol used for establishing a connection between terminals, for example, a mobile phone and a radio television set, may be a near field communication protocol such as a Wi-Fi protocol, a Bluetooth protocol, or an NFC protocol, or may be a cellular network protocol. For example, the user may use the mobile phone to touch an NFC tag of the radio television set, and the mobile phone reads connection information stored in the NFC tag. For example, the connection information includes an IP address of the radio television set. Then, the mobile phone may establish, based on the IP address of the radio television set, the connection to the radio television set by using another protocol, for example, a Wi-Fi protocol. For another example, Bluetooth functions and Wi-Fi functions are enabled on both the mobile phone and the radio television set. The mobile phone may broadcast a Bluetooth signal, to discover a surrounding terminal. For example, the mobile phone may display a discovered device list. The discovered device list may include an identifier of a device discovered by the mobile phone, for example, include an identifier of the radio television set. In a process of discovering a device, the mobile phone may also exchange the connection information, for example, the IP address, with a discovered device. Then, after the mobile phone receives an operation performed by the user to select the identifier of the radio television set from the displayed device list, the mobile phone may establish, based on the IP address of the radio television set, the connection to the radio television set by using the Wi-Fi protocol.
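

As a rough illustration of this step (the payload format, port, and helper name are assumptions and are not defined by the application), the connection information read from the NFC tag or exchanged during discovery could be consumed as follows:

```python
import json
import socket

def connect_to_display(connection_info_bytes: bytes, fallback_port: int = 7236):
    # Open a control connection to the radio television set using connection
    # information obtained out of band, for example read from its NFC tag or
    # exchanged during device discovery.  The JSON layout and the fallback
    # port are illustrative assumptions only.
    info = json.loads(connection_info_bytes)   # e.g. {"ip": "192.168.49.1", "port": 7236}
    address = (info["ip"], info.get("port", fallback_port))
    return socket.create_connection(address, timeout=5.0)
```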


S502: The mobile phone negotiates a projection parameter with the radio television set.


For example, the mobile phone may negotiate the projection parameter by sending a Wi-Fi unicast frame to the radio television set. Specifically, the Wi-Fi unicast frame may carry a VSP, a VSP retry, and delay (delay) time of projection starting. A time period in the VSP is mainly for first transmission of a scalable video coding slice of a data slice in a video frame, and a time period in the VSP retry is mainly for retransmission of a portion that is of the scalable video coding slice of the data slice in the video frame and that is discarded in the time period in the VSP. The time period in the VSP retry has a same length as the time period in the VSP. The delay time of projection starting indicates the projection start time agreed on by both ends.
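

A possible shape for the negotiated parameters is sketched below; everything about the encoding (field names, units, JSON) is an assumption, since the application only lists what the unicast frame carries.

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class ProjectionParams:
    # Illustrative payload of the negotiation unicast frame; the field names,
    # units, and JSON encoding are assumptions.
    vsp_period_us: int        # length of one time period in the VSP
    vsp_retry_period_us: int  # length of one time period in the VSP retry (same length)
    start_delay_us: int       # agreed delay before projection starts

    def to_frame_body(self) -> bytes:
        return json.dumps(asdict(self)).encode("utf-8")

# Placeholder values only:
body = ProjectionParams(vsp_period_us=2000, vsp_retry_period_us=2000,
                        start_delay_us=500_000).to_frame_body()
```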


S503: The mobile phone transmits a scalable video coding slice of a first video frame to the radio television set in a first video transmission period.


One scalable video coding slice of the first video frame may be correspondingly transmitted in one time period in the first video transmission period. First, the mobile phone needs to perform scalable video coding on a plurality of data slices of the video frame in sequence to generate a plurality of scalable video coding slices. Refer to FIG. 3. The mobile phone performs scalable video coding on a first data slice of the first video frame to generate the first scalable video coding slice (for example, coding 1), and the mobile phone performs scalable video coding on a second data slice of the first video frame to generate a second scalable video coding slice (for example, coding 2). In this way, a plurality of data slices of the first video frame are coded into a plurality of scalable video coding slices in sequence. For a scalable video coding scheme of another video frame, reference may be made to descriptions of the first video frame. It should be noted that scalable video coding for any slice needs to be completed before a time period in a video transmission period in which the slice is transmitted arrives. For example, if the first video transmission period of the first video frame includes four time periods, namely, VSP1 to VSP4, coding 1 needs to be completed in VSP0 before VSP1, and coding 2 needs to be completed in VSP1 before VSP2. In this way, the coding 1 can be transmitted in corresponding VSP1 and the coding 2 can be transmitted in VSP2. Specifically, for the first video frame transmitted by the mobile phone for a first time, because the first video frame can be transmitted only after the coding 1 is completed, data transmission is not performed before VSP1 (for example, VSP0). If, in a transmission process of the mobile phone, there is another video frame before the first video frame, a scalable video coding slice of a data slice of that preceding video frame may be transmitted in VSP0. In this embodiment, if VSP0 is empty, it only indicates that the data slice of the first video frame is not transmitted. It may be understood that, after the scalable video coding is performed on the data slice, the coding 1 may include layers of different quality, resolution, or frame rates, for example, a basic layer, a first enhancement layer, a second enhancement layer, and the like. Refer to FIG. 3. The mobile phone transmits the generated coding 1 in VSP1 after VSP0. It should be noted that, in VSP1, the mobile phone may perform QoS (quality of service, QoS) control. For example, because the coding 1 includes the basic layer, the first enhancement layer, the second enhancement layer, and the like, the basic layer and the at least one enhancement layer may be transmitted in sequence based on transmission priorities of the layers.
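

The coding/transmission pipelining described in this step (coding of a slice is completed in the time period before the one in which it is sent) can be summarized with a small helper; the dictionary layout is only an illustrative representation of the timeline shown in FIG. 3.

```python
def pipeline_schedule(num_slices):
    # Coding of slice k must finish in time period VSP(k-1) so that it can be
    # transmitted in VSP(k); the labels mirror FIG. 3.
    schedule = {}
    for k in range(1, num_slices + 1):
        schedule.setdefault(f"VSP{k - 1}", {})["code"] = f"coding {k}"
        schedule.setdefault(f"VSP{k}", {})["send"] = f"coding {k}"
    return schedule

# pipeline_schedule(3) ->
# {'VSP0': {'code': 'coding 1'},
#  'VSP1': {'code': 'coding 2', 'send': 'coding 1'},
#  'VSP2': {'code': 'coding 3', 'send': 'coding 2'},
#  'VSP3': {'send': 'coding 3'}}
```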


S504: The mobile phone obtains channel state information that is fed back by the radio television set and that is for transmitting the first video frame.


The mobile phone transmits the coding 1, the coding 2, ..., and the like in sequence in the time periods VSP1, VSP2, ..., and the like in FIG. 3. The radio television set may feed back, to the mobile phone after each time period, channel state information for transmitting the corresponding scalable video coding slice, for example, feed back, to the mobile phone after VSP1, channel state information for transmitting the coding 1. The channel state information may include a total quantity of data packets corresponding to the coding of each slice and a quantity of data packets that are successfully transmitted in the corresponding time period, for example, a total quantity of data packets of the coding 1 and a quantity of data packets of the coding 1 that are successfully transmitted in VSP1 (excluding any data packet that is successfully transmitted after a data packet that fails to be transmitted). The mobile phone and the radio television set may use a block acknowledgement (block acknowledgement, block ACK) mechanism to determine whether a data packet is successfully transmitted. For example, after receiving a series of data packets sent by the mobile phone, the radio television set feeds back a corresponding block ACK, and the mobile phone determines, by using a bitmap (bitmap) in the block ACK, the quantity of data packets that are successfully transmitted. In an optional solution, the radio television set may alternatively transmit, to the mobile phone, feedback information about receiving of the slice, and the mobile phone determines the channel state information based on the feedback information.

The mobile phone may selectively enable a retry mechanism based on the channel state information. For example, the mobile phone may determine channel quality based on the channel state information. When the channel quality is lower than a quality threshold, the mobile phone determines that the channel quality is poor, and enables the retry mechanism; when the channel quality is higher than or equal to the quality threshold, the mobile phone determines that the channel quality is good, and does not enable the retry mechanism; or, when the retry mechanism is already enabled and the mobile phone determines that the channel quality is good, the mobile phone disables the retry mechanism. The quality threshold may be a quantity threshold set for successfully transmitted data packets, or a reference threshold set for a proportion of the quantity of successfully transmitted data packets in the total quantity of data packets corresponding to the coding. For example, when the channel quality is poor, the quantity of successfully transmitted data packets is less than the quantity threshold, or the proportion of successfully transmitted data packets is less than the reference threshold; when the channel quality is good, the quantity of successfully transmitted data packets is greater than or equal to the quantity threshold, or the proportion of successfully transmitted data packets is greater than or equal to the reference threshold. The retry mechanism is to perform step S506.
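The retry decision described above can be sketched as follows, assuming the block ACK bitmap is available as a sequence of booleans and a proportion threshold is used; the counting rule excludes packets acknowledged after the first failure, as described. The function names and the 0.9 threshold are illustrative assumptions.

```python
def count_in_order_acks(bitmap):
    """Count packets acknowledged before the first failure.

    Packets acknowledged after a failed packet are excluded, matching the
    counting rule described above. `bitmap` is a sequence of booleans taken
    from the block ACK; this representation is an assumption.
    """
    count = 0
    for acked in bitmap:
        if not acked:
            break
        count += 1
    return count

def should_enable_retry(bitmap, total_packets, ratio_threshold=0.9):
    """Enable the retry mechanism when channel quality falls below the threshold.

    The 0.9 proportion threshold is purely illustrative.
    """
    ok = count_in_order_acks(bitmap)
    return (ok / total_packets) < ratio_threshold

# Example: 10 packets of the coding 1, the 7th packet failed in VSP1.
bitmap = [True] * 6 + [False] + [True] * 3
print(should_enable_retry(bitmap, total_packets=10))  # True -> enable retry
```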


S505: The radio television set decodes the scalable video coding slice transmitted by the mobile phone in the first video transmission period.


Specifically, after the coding 1 is transmitted to the radio television set in VSP1, the radio television set starts to decode, after VSP1 (for example, in VSP2), the scalable video coding slice received in VSP1.


S506: The mobile phone transmits the first scalable video coding slice of a second video frame to the radio television set in a first time period in a second video transmission period, and transmits, to the radio television set in at least one another time period after the first time period, a portion that is of the first scalable video coding slice and that is discarded in the first time period. One another scalable video coding slice of the second video frame is transmitted in at least one another time period.


As shown in FIG. 6, the second video frame is transmitted in the second video transmission period (including time periods VSP5, VSP6, VSP7, and VSP8). With reference to the foregoing descriptions, the mobile phone transmits coding 5, coding 6, ..., and the like in sequence in the time periods VSP5, VSP6, ..., and the like in FIG. 6. In VSP5, the mobile phone may perform QoS (quality of service) control. For example, because the coding 5 includes a basic layer, a first enhancement layer, a second enhancement layer, and the like, the basic layer and the at least one enhancement layer may be transmitted in sequence based on transmission priorities of the layers. Specifically, if transmission of the coding 5 is not completed in VSP5 (as indicated by the channel state feedback described in step S504), that is, a portion of the coding 5 is discarded in VSP5, the portion of the coding 5 whose transmission is not completed is retransmitted in a time period VSP1 re1 (retry 1, the first retransmission) after VSP5. Specifically, the coding 5 includes coded data of a plurality of layers, for example, the basic layer and the at least one enhancement layer. In step S506, if both the basic layer and the at least one enhancement layer of the coding 5 are discarded in VSP5 (that is, transmission of both the basic layer and the at least one enhancement layer is not completed), the basic layer and the at least one enhancement layer of the coding 5 are transmitted in VSP1 re1. If transmission of the basic layer of the coding 5 is completed in VSP5 and the at least one enhancement layer is discarded in VSP5, only the at least one enhancement layer of the coding 5 is transmitted in VSP1 re1.

Before step S505, the VSP retry may not be enabled by default. To be specific, for each video frame, the mobile phone transmits the coding 1, the coding 2, ..., and the like in sequence in the time periods VSP1, VSP2, ..., and the like with reference to FIG. 3. After the VSP retry is enabled based on the channel state information, for a subsequent video frame, for example, the second video frame in FIG. 6, the coding 5, the coding 6, ..., and the like are transmitted in sequence in the time periods VSP5, VSP6, ..., and the like, and the portions that are of the coding 5, the coding 6, ..., and the like and that are discarded in the corresponding time periods VSP5, VSP6, ..., and the like are retransmitted in VSP1 re1, VSP2 re1, ..., and the like. In addition, based on the negotiation result in step S502, the coding 5, the coding 6, ..., and the like may alternatively be retransmitted a plurality of times. For example, if the coding 5 still has a discarded portion (the transmission is not completed) after being transmitted in VSP5 and VSP1 re1, the coding 5 may be retransmitted in a time period VSP1 re2 following VSP1 re1. With reference to FIG. 6, two types of time periods are included: the VSP (VSP5, VSP6, ..., and the like) is mainly used to transmit the coding 5, the coding 6, ..., and the like in sequence, and the VSP retry (VSP1 re1, VSP2 re1, ..., and the like) is used to retransmit the portions that are of the coding 5, the coding 6, ..., and the like and that are discarded in the corresponding time periods VSP5, VSP6, ..., and the like. In the time period (VSP7/VSP2 re1) after VSP1 re1, the radio television set decodes the received coding 5 to obtain decoding 5. In this case, the feedback of the channel state information is delayed by one VSP (the channel state for the coding 5 is fed back only after VSP6).
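A minimal sketch of the first-transmission-plus-retry behaviour described above is given below, assuming each coding is an ordered list of layers with packet counts and each VSP or VSP retry period carries a fixed packet budget; all structures and numbers are illustrative assumptions, not the actual scheduler.

```python
# Minimal sketch of first transmission plus VSP-retry retransmission.
# `coding5` is modelled as an ordered list of (layer_name, packets) pairs;
# each time period can carry `budget` packets. These are assumptions.

def transmit_in_period(pending, budget):
    """Send as much of `pending` as fits in one period; return the leftover."""
    leftover = []
    for layer, packets in pending:
        if budget >= packets:
            budget -= packets                            # layer fully transmitted
        elif budget > 0:
            leftover.append((layer, packets - budget))   # partially transmitted
            budget = 0
        else:
            leftover.append((layer, packets))            # discarded in this period
    return leftover

coding5 = [("base", 6), ("enh1", 5), ("enh2", 5)]
leftover = transmit_in_period(coding5, budget=8)         # VSP5
for retry in range(1):                                   # one VSP retry (re1)
    if not leftover:
        break
    leftover = transmit_in_period(leftover, budget=8)    # VSP1 re1
print("still discarded after retries:", leftover)
```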
In addition, the mobile phone may re-determine, based on the channel state information, whether to enable the retry mechanism in a subsequent video transmission period. A larger quantity of VSP retries may alternatively be configured. For example, in FIG. 7, the quantity of VSP retries is configured to be 2. It may be understood that a larger quantity of VSP retries provides a greater opportunity for full transmission of the coding 5 and a stronger capability of resisting instantaneous interference, but requires more coding buffers (buffers). After the VSP retry is enabled, QoS flow control is jointly performed on the different priorities of the different layers of a slice. For example, when the quantity of VSP retries is 1, two transmission periods overlap at a specific moment: a VSP retry period of a previous slice and a VSP period of a current slice. In the VSP6 period, there is therefore an opportunity to transmit data of the coding 5 that is not successfully transmitted in VSP5. In an example, in VSP6, a transmission priority of a basic layer of the first scalable video coding slice (the coding 5) of the second video frame is higher than a transmission priority of a basic layer of the second scalable video coding slice (the coding 6), the transmission priority of the basic layer of the second scalable video coding slice (the coding 6) is higher than a transmission priority of an enhancement layer of the first scalable video coding slice (the coding 5), and the transmission priority of the enhancement layer of the first scalable video coding slice (the coding 5) is higher than a transmission priority of an enhancement layer of the second scalable video coding slice (the coding 6). Specifically, it is assumed that, in VSP5, a basic layer 1 of the coding 5 is transmitted successfully, and an enhancement layer 2 and an enhancement layer 3 of the coding 5 are not transmitted successfully. In VSP6, the transmission priorities of the layers in descending order are: the basic layer 1 of the coding 5, a basic layer 1 of the coding 6, the enhancement layer 2 of the coding 5, an enhancement layer 2 of the coding 6, the enhancement layer 3 of the coding 5, an enhancement layer 3 of the coding 6, ..., and the like.
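The interleaved priority order described above (basic layer of the coding 5, basic layer of the coding 6, enhancement layer 2 of the coding 5, enhancement layer 2 of the coding 6, and so on) can be expressed as a simple sort key, as in the hedged sketch below; the tuple representation of pending layers is an assumption.

```python
# Sketch of the joint QoS ordering in VSP6 when one VSP retry is enabled:
# layers of the previous slice (coding 5, retry) and of the current slice
# (coding 6, first transmission) are interleaved by layer index first and
# slice order second. The tuple representation is an assumption.

pending = [
    ("coding6", "base", 1),    # layer index 1 = basic layer
    ("coding6", "enh", 2),
    ("coding6", "enh", 3),
    ("coding5", "enh", 2),     # not completed in VSP5
    ("coding5", "enh", 3),     # not completed in VSP5
]

# Lower layer index first; for an equal layer index, the earlier slice (coding 5) first.
order = sorted(pending, key=lambda x: (x[2], x[0] != "coding5"))
for slice_name, kind, idx in order:
    print(f"send {slice_name} layer {idx} ({kind})")
```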


In addition, after the at least one another time period, a reference frame is reconstructed based on a successfully transmitted portion of the first scalable video coding slice, and first scalable video coding of a third video frame is generated based on the reference frame. Refer to FIG. 6. After VSP6 (VSP1 re1), the mobile phone reconstructs a reference frame 1 based on the successfully transmitted portion of the coding 5, and performs, in VSP8 based on the reference frame 1, scalable video coding on a first data slice of the third video frame to generate coding 9. The third video frame follows the second video frame. It may be understood that the reference frame 1 is the reference object used for P-frame coding of the third video frame that is transmitted after the second video frame. For example, if P-frame coding of the coding 9 of the third video frame is performed with direct reference to the coding 5 of the second video frame, but one or more layers of the coding 5 fail to be completely transmitted in the corresponding VSP5 and VSP6, the radio television set may be unable to correctly decode the coding 9. In this embodiment of this application, dynamic reference frame reconstruction is designed. The mobile phone reconstructs the reference frame 1 based on the successfully transmitted portion of the coding 5 of the second video frame, and uses the reconstructed reference frame 1 as the reference object for generating the coding 9 of the third video frame. For example, the mobile phone may reconstruct the reference frame based on only the basic layer that is successfully transmitted. As shown in FIG. 6, when the quantity of VSP retries is 1, reconstruction of the reference frame 1 is delayed by one VSP (it is reconstructed after VSP6/VSP1 re1). In this case, subsequent P-frame coding of the coding 9 based on the dynamically reconstructed reference frame 1 is not affected. In combination with bit rate control of the mobile phone, the amount of data to be coded can be significantly reduced, and the anti-interference capability during transmission can be enhanced.

FIG. 6 and FIG. 7 are both described by using an example in which one video frame is divided into four slices. However, as shown in FIG. 7, when the quantity of VSP retries is 2 or more, transmission of the coding 5 is completed and the channel state information is fed back only in VSP7, so the mobile phone cannot generate the reference frame for the coding 9 in time. In this case, the mobile phone may perform inter-frame reference, for example, generate the reference frame 1 in VSP11 and refer to the reference frame 1 when coding 13 of a next, fourth video frame is generated. In this embodiment of this application, the quantity of slices into which each video frame is divided and the quantity of VSP retries are not limited. The quantity of slices and the quantity of VSP retries may be dynamically adjusted, based on the channel state information and the negotiation in step S502, with consideration of the delay and anti-interference performance.
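The dynamic reference frame reconstruction described above can be sketched as follows: only the layers of the coding 5 that the feedback confirms as successfully transmitted are used to build the reference for P-frame coding of the coding 9. The helper functions below are hypothetical placeholders, not an actual codec interface.

```python
# Sketch of dynamic reference frame reconstruction, assuming the encoder keeps
# the locally reconstructed layers of the coding 5 and the feedback indicates
# which layers the radio television set actually received. The
# `reconstruct_from_layers` and `p_encode` helpers are hypothetical placeholders.

def reconstruct_from_layers(layers):
    return "reference built from: " + "+".join(layers)

def p_encode(slice_pixels, reference):
    return f"P-coded '{slice_pixels}' against ({reference})"

received_layers = {"base": True, "enh1": False, "enh2": False}  # from feedback

# Use only layers that the display side really holds (here: the basic layer),
# so that encoder and decoder reference states stay aligned.
usable = [name for name, ok in received_layers.items() if ok]
reference_frame_1 = reconstruct_from_layers(usable)
coding_9 = p_encode("slice 1 of frame 3", reference_frame_1)
print(coding_9)
```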


S507: The radio television set decodes the first scalable video coding slice of the second video frame transmitted by the mobile phone in the first time period and a portion that is of the first scalable video coding slice and that is discarded in the first time period and that is transmitted in the at least one another time period after the first time period.


Correspondingly, before step S507, the radio television set receives the first scalable video coding slice of the second video frame transmitted by the mobile phone in the first time period in the second video transmission period, and receives the portion that is of the first scalable video coding slice and that is discarded in the first time period and that is transmitted by the mobile phone in the at least one another time period after the first time period. One another scalable video coding slice of the second video frame is transmitted in the at least one another time period. In addition, it should be noted that the scalable video coding slice includes one basic layer and at least one enhancement layer. Specifically, the radio television set receives a basic layer and at least one enhancement layer of the first scalable video coding slice of the second video frame that are transmitted by the mobile phone in the first time period in the second video transmission period based on transmission priorities of the basic layer and the at least one enhancement layer of the first scalable video coding slice. In addition, when transmission of the basic layer and the at least one enhancement layer of the first scalable video coding slice of the second video frame is not completed in the first time period, the radio television set receives the basic layer and the at least one enhancement layer of the first scalable video coding slice transmitted by the mobile phone in the at least one another time period after the first time period. When transmission of the at least one enhancement layer of the first scalable video coding slice of the second video frame is not completed in the first time period, the radio television set receives the at least one enhancement layer of the first scalable video coding slice transmitted by the mobile phone in the at least one another time period after the first time period. Refer to FIG. 6. The radio television set receives, in VSP5, the coding 5 transmitted by the mobile phone, receives, in VSP6, the portion that is discarded when the mobile phone transmits the coding 5 in VSP5, and decodes the coding 5 received in VSP5 and VSP6 and displays it on a screen.
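On the display side, the behaviour described above amounts to merging what is received for the same slice in the VSP and in the VSP retry before decoding. The buffer structure and the decoding condition in the sketch below are illustrative assumptions.

```python
# Sketch of the display-side assembly of one slice across VSP5 and VSP1 re1,
# then decoding whatever layers are complete (at least the basic layer).
# Data structures and the decode step are illustrative assumptions.

received = {}                                   # layer name -> payload bytes

def on_period_received(packets):
    for layer, payload in packets:
        received.setdefault(layer, b"")
        received[layer] += payload

on_period_received([("base", b"\x01\x02")])                 # arrived in VSP5
on_period_received([("base", b"\x03"), ("enh1", b"\xaa")])  # arrived in VSP1 re1

if "base" in received:
    quality = "base+enh" if "enh1" in received else "base only"
    print(f"decode coding 5 at {quality}: {received}")
else:
    print("basic layer missing: slice cannot be decoded")
```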


In this way, in the foregoing solution, the mobile phone can first transmit a scalable video coding slice of a video frame in a video transmission period, for example, transmit one scalable video coding slice in one time period of a video transmission period in an initial state. Certainly, if transmission of the scalable video coding slice is not completed in the corresponding time period, the portion whose transmission is not completed is directly discarded. Then, the mobile phone obtains the channel state information that is fed back by the radio television set and that is for transmitting any scalable video coding slice of the first video frame. For example, after each time period, the channel state information for transmitting the scalable video coding slice in that time period may be obtained. It may be understood that the channel state information may reflect the portion that is of the scalable video coding slice and that is discarded in the corresponding time period. For example, the channel state information may include a total quantity of data packets of the scalable video coding slice transmitted in one time period and a quantity of data packets that are successfully transmitted. In this way, the mobile phone may selectively enable the retry mechanism based on the channel state information. For example, if the mobile phone determines, based on the channel state information, that the channel quality is poor, the mobile phone enables the retry mechanism; if the channel quality is good, the mobile phone does not enable the retry mechanism; or, when the retry mechanism is already enabled and the mobile phone determines that the channel quality is good, the mobile phone disables the retry mechanism. The retry mechanism is that the mobile phone transmits one scalable video coding slice in a plurality of time periods in a subsequent video transmission period, for example, transmits a first scalable video coding slice of another video frame (for example, the second video frame) to the radio television set in a first time period in the subsequent video transmission period, and transmits, to the radio television set in at least one another time period after the first time period, a portion that is of the first scalable video coding slice and that is discarded in the first time period. One another scalable video coding slice of the second video frame is transmitted in the at least one another time period. In this way, when one scalable video coding slice has a discarded portion in the corresponding time period, the discarded portion may be transmitted in another time period. For example, when transmission of one or more of a basic layer and at least one enhancement layer of a scalable video coding slice of a slice is not completed in the corresponding time period, the basic layer and the at least one enhancement layer have an opportunity to be transmitted in a next time period. In this way, the radio television set may perform decoding with reference to the first scalable video coding slice transmitted in the two time periods, so as to ensure, as much as possible, successful transmission of the scalable video coding slice at the cost of a small additional delay, reduce image tearing and freezing and image quality deterioration occurring in wireless projection, and improve user experience. In addition, after the at least one another time period, a reference frame is reconstructed based on the successfully transmitted portion of the first scalable video coding slice, and the first scalable video coding of the third video frame is generated based on the reference frame.
In this way, when the electronic device codes the third video frame, reference may be made to the portions of the first scalable video coding slice of the second video frame that are successfully transmitted in the first time period and the at least one another time period after the first time period. The resulting inter-predictive frame (predictive frame, P frame for short) coding can reduce the amount of data to be coded, and helps ensure that the third video frame of the electronic device can be decoded, after being transmitted to the display device, based on the portion that is actually successfully transmitted.


It may be understood that, to implement the foregoing functions, the mobile phone and the radio television set include corresponding hardware structures and/or software modules for performing the functions. A person skilled in the art should be easily aware that, in combination with units and algorithm operations of the examples described in embodiments disclosed in this specification, this application can be implemented by hardware or a combination of hardware and computer software. Whether a function is performed by hardware or hardware driven by computer software depends on particular applications and design constraints of the technical solutions. A person skilled in the art may use different methods to implement the described functions for each particular application, but it should not be considered that the implementation goes beyond the scope of this application.


In embodiments of this application, the mobile phone may be divided into functional modules based on the foregoing method examples. For example, each functional module may be divided based on each function, or two or more functions may be integrated into one processing module. The integrated module may be implemented in a form of hardware, or may be implemented in a form of a software functional module. It should be noted that, in embodiments of this application, module division is an example, and is merely a logical function division. In actual implementation, another division manner may be used.


For example, when each functional module is divided in an integrated manner, FIG. 8 is a schematic diagram of a structure of a wireless projection apparatus 80. The wireless projection apparatus 80 may be a chip or a system on chip in the foregoing mobile phone, or another combined device, component, or the like that can implement a function of the foregoing mobile phone. The wireless projection apparatus 80 may be configured to perform the function of the mobile phone in the foregoing embodiments.


In a possible implementation, the wireless projection apparatus 80 in FIG. 8 includes a sending unit 801, a processing unit 802, and a receiving unit 803. The sending unit 801 is configured to transmit a scalable video coding slice of a first video frame to a display device in a first video transmission period. The receiving unit 803 is configured to obtain channel state information that is fed back by the display device and that is for transmitting the first video frame. The processing unit 802 is configured to selectively enable a retry mechanism based on the channel state information obtained by the receiving unit 803. The retry mechanism is that the sending unit 801 is further configured to: transmit a first scalable video coding slice of a second video frame to the display device in a first time period in a second video transmission period, and transmit, to the display device in at least one another time period after the first time period, a portion that is of the first scalable video coding slice and that is discarded in the first time period. One another scalable video coding slice of the second video frame is transmitted in the at least one another time period.


Optionally, the scalable video coding slice includes one basic layer and at least one enhancement layer; and the sending unit 801 is specifically configured to transmit the basic layer and the at least one enhancement layer of the first scalable video coding slice of the second video frame to the display device in the first time period in the second video transmission period based on transmission priorities of the basic layer and the at least one enhancement layer of the first scalable video coding slice of the second video frame.


Optionally, the scalable video coding slice includes the one basic layer and the at least one enhancement layer, the at least one another time period includes a second time period in the second video transmission period, and the another scalable video coding slice includes a second scalable video coding slice of the second video frame. The sending unit 801 is specifically configured to transmit, to the display device in the second time period based on the transmission priorities of the basic layer and the at least one enhancement layer of the first scalable video coding slice and transmission priorities of a basic layer and at least one enhancement layer of the second scalable video coding slice, the portion that is of the first scalable video coding slice and that is discarded and the second scalable video coding slice. The transmission priority of the basic layer of the first scalable video coding slice is higher than the transmission priority of the basic layer of the second scalable video coding slice, the transmission priority of the basic layer of the second scalable video coding slice is higher than the transmission priority of the enhancement layer of the first scalable video coding slice, and the transmission priority of the enhancement layer of the first scalable video coding slice is higher than the transmission priority of the enhancement layer of the second scalable video coding slice.


Optionally, the discarded portion of the first scalable video coding slice includes the basic layer and the at least one enhancement layer of the first scalable video coding slice; and the sending unit 801 is specifically configured to transmit the basic layer and the at least one enhancement layer of the first scalable video coding slice to the display device in the at least one another time period after the first time period.


Optionally, the discarded portion of the first scalable video coding slice includes the at least one enhancement layer of the first scalable video coding slice; and the sending unit 801 is specifically configured to transmit the at least one enhancement layer of the first scalable video coding slice to the display device in the at least one another time period after the first time period.


Optionally, the processing unit 802 is further configured to: after the at least one another time period, reconstruct a reference frame based on a successfully transmitted portion of the first scalable video coding slice; and generate the first scalable video coding of the third video frame based on the reference frame.


All related content of the operations in the foregoing method embodiments may be cited in function descriptions of the corresponding functional modules. Details are not described herein again.


In this embodiment, the wireless projection apparatus 80 is presented in a form of functional modules divided in an integrated manner. The module herein may be an ASIC, a circuit, a processor that executes one or more software or firmware programs, a memory, an integrated logic circuit, and/or another component capable of providing the foregoing functions. In a simple embodiment, a person skilled in the art may figure out that the wireless projection apparatus 80 may be in the form in FIG. 4.


For example, the processor 201 may invoke the computer-executable instructions stored in the memory 203 in FIG. 4, to enable the wireless projection apparatus to perform the wireless projection method in the foregoing method embodiments.


For example, functions/implementation processes of the sending unit 801, the processing unit 802 and the receiving unit 803 in FIG. 8 may be implemented by the processor 201 by invoking the computer-executable instructions stored in the memory 203 in FIG. 4. Alternatively, functions/implementation processes of the processing unit 802 in FIG. 8 may be implemented by the processor 201 by invoking the computer-executable instructions stored in the memory 203 in FIG. 4, and functions/implementation processes of the sending unit 801 in FIG. 8 may be implemented by the transmitter in the communication interface 204 in FIG. 4. The functions/implementation processes of the receiving unit 803 in FIG. 8 may be implemented by the receiver in the communication interface 204 in FIG. 4.


The wireless projection apparatus 80 provided in this embodiment may perform the wireless projection method. Therefore, for technical effects that can be achieved by the wireless projection apparatus 80, refer to the foregoing method embodiments. Details are not described herein again.


In embodiments of this application, the radio television set may be divided into functional modules based on the foregoing method examples. For example, each functional module may be divided based on each function, or two or more functions may be integrated into one processing module. The integrated module may be implemented in a form of hardware, or may be implemented in a form of a software functional module. It should be noted that, in embodiments of this application, module division is an example, and is merely a logical function division. In actual implementation, another division manner may be used.


For example, when each functional module is divided in an integrated manner, FIG. 9 is a schematic diagram of a structure of a wireless projection apparatus 90. The wireless projection apparatus 90 may be a chip or a system on chip in the foregoing radio television set, or another combined device, component, or the like that can implement a function of the foregoing radio television set. The wireless projection apparatus 90 may be configured to perform the function of the radio television set in the foregoing embodiments.


In a possible implementation, the wireless projection apparatus 90 in FIG. 9 includes a receiving unit 901, a processing unit 902, and a sending unit 903. The receiving unit 901 is configured to receive a scalable video coding slice of a first video frame that is transmitted by an electronic device to the display device in a first video transmission period. The sending unit 903 is configured to feed back, to the electronic device, channel state information for transmitting the first video frame. The electronic device selectively enables a retry mechanism based on the channel state information. The processing unit 902 is configured to decode the scalable video coding slice transmitted by the electronic device in the first video transmission period. The receiving unit 901 is further configured to receive a first scalable video coding slice of a second video frame transmitted by the electronic device in a first time period in a second video transmission period in the retry mechanism; and receive a portion that is of the first scalable video coding slice and that is discarded in the first time period and that is transmitted by the electronic device in at least one another time period after the first time period. One another scalable video coding slice of the second video frame is transmitted in the at least one another time period. The processing unit 902 is further configured to decode the first scalable video coding slice of the second video frame transmitted by the electronic device in the first time period and the portion that is of the first scalable video coding slice and that is discarded in the first time period and that is transmitted in the at least one another time period after the first time period.


Optionally, the scalable video coding slice includes one basic layer and at least one enhancement layer; and the receiving unit 901 is specifically configured to receive the basic layer and the at least one enhancement layer of the first scalable video coding slice of the second video frame that are transmitted by the electronic device in the first time period in the second video transmission period based on transmission priorities of the basic layer and the at least one enhancement layer of the first scalable video coding slice.


Optionally, the scalable video coding slice includes the one basic layer and the at least one enhancement layer; and the receiving unit 901 is specifically configured to receive the basic layer and the at least one enhancement layer of the first scalable video coding slice that are transmitted by the electronic device in the at least one another time period after the first time period.


Optionally, the scalable video coding slice includes the one basic layer and the at least one enhancement layer; and the receiving unit 901 is specifically configured to receive the at least one enhancement layer of the first scalable video coding slice transmitted by the electronic device in the at least one another time period after the first time period.


All related content of the operations in the foregoing method embodiments may be cited in function descriptions of the corresponding functional modules. Details are not described herein again.


In this embodiment, the wireless projection apparatus 90 is presented in a form of functional modules divided in the integrated manner. The module herein may be an ASIC, a circuit, a processor that executes one or more software or firmware programs, a memory, an integrated logic circuit, and/or another component capable of providing the foregoing functions. In a simple embodiment, a person skilled in the art may figure out that the wireless projection apparatus 90 may be in the form in FIG. 4.


For example, the processor 201 may invoke the computer-executable instructions stored in the memory 203 in FIG. 4, to enable the wireless projection apparatus 90 to perform the wireless projection method in the foregoing method embodiments.


For example, functions/implementation processes of the receiving unit 901, the processing unit 902 and the sending unit 903 in FIG. 9 may be implemented by the processor 201 by invoking the computer-executable instructions stored in the memory 203 in FIG. 4. Alternatively, functions/implementation processes of the processing unit 902 in FIG. 9 may be implemented by the processor 201 by invoking the computer-executable instructions stored in the memory 203 in FIG. 4, and functions/implementation processes of the receiving unit 901 in FIG. 9 may be implemented by the receiver in the communication interface 204 in FIG. 4. The functions/implementation processes of the sending unit 903 in FIG. 9 may be implemented by the transmitter in the communication interface 204 in FIG. 4.


The wireless projection apparatus 90 provided in this embodiment may perform the wireless projection method. Therefore, for technical effects that can be achieved by the wireless projection apparatus 90, refer to the foregoing method embodiments. Details are not described herein again.


Optionally, an embodiment of this application further provides a wireless projection apparatus (for example, the wireless projection apparatus may be a chip or a chip system). The wireless projection apparatus includes a processor and an interface, and the processor is configured to read instructions to perform the method in any one of the foregoing method embodiments. In a possible design, the wireless projection apparatus further includes a memory. The memory is configured to store necessary program instructions and necessary data. The memory may invoke program code stored in the memory, to instruct the wireless projection apparatus to perform the method in any one of the foregoing method embodiments. Certainly, the memory may alternatively not be in the wireless projection apparatus. When the wireless projection apparatus is a chip system, the wireless projection apparatus may include a chip, or may include a chip and another discrete component. This is not specifically limited in embodiments of this application.


Specifically, when the wireless projection apparatus 80 in FIG. 8 is a mobile phone, and the wireless projection apparatus 90 in FIG. 9 is a radio television set, the sending unit 801 and the sending unit 903 may be transmitters during the information transmission, and the receiving unit 803 and the receiving unit 901 may be receivers during the information transmission; a sending unit and a receiving unit may alternatively be integrated into a transceiver. The transceiver, the transmitter, or the receiver may be a radio frequency circuit. When the wireless projection apparatus 80 in FIG. 8 and the wireless projection apparatus 90 in FIG. 9 include a storage unit, the storage unit is configured to store computer instructions, the processor is communicatively connected to the memory, and the processor executes the computer instructions stored in the memory, so that the wireless projection apparatus performs the method in the method embodiments. The processor may be a general-purpose central processing unit (CPU), a microprocessor, or an application-specific integrated circuit (application-specific integrated circuit, ASIC).


When the wireless projection apparatus in FIG. 8 and the wireless projection apparatus in FIG. 9 are chips, the sending unit 801, the sending unit 903, the receiving unit 803, and the receiving unit 901 may be input and/or output interfaces, pins, circuits, or the like. The processing unit 802 and the processing unit 902 may execute the computer-executable instructions stored in the storage unit, so that the chip in the wireless projection apparatus 80 in FIG. 8 and the chip in the wireless projection apparatus 90 in FIG. 9 perform the method in the method embodiments. Optionally, the storage unit is a storage unit in the chip, for example, a register or a cache. Alternatively, the storage unit may be a storage unit that is in the electronic device or the display device and that is located outside the chip, for example, a read-only memory (read-only memory, ROM) or another type of static storage device that can store static information and instructions, or a random access memory (random access memory, RAM).


All or a part of the foregoing embodiments may be implemented by software, hardware, firmware, or any combination thereof. When a software program is used to implement embodiments, embodiments may be implemented completely or partially in a form of a computer program product. The computer program product includes one or more computer instructions. When the computer program instructions are loaded and executed on the computer, the procedure or functions according to embodiments of this application are all or partially generated. The computer may be a general-purpose computer, a dedicated computer, a computer network, or another programmable apparatus. The computer instructions may be stored in a computer-readable storage medium or may be transmitted from a computer-readable storage medium to another computer-readable storage medium. For example, the computer instructions may be transmitted from a website, computer, server, or data center to another website, computer, server, or data center in a wired (for example, a coaxial cable, an optical fiber, or a digital subscriber line (digital subscriber line, DSL)) or wireless (for example, infrared, radio, or microwave) manner. The computer-readable storage medium may be any usable medium accessible by a computer, or a data storage device, such as a server or a data center, integrating one or more usable media. The usable medium may be a magnetic medium (for example, a floppy disk, a hard disk drive, or a magnetic tape), an optical medium (for example, a DVD), a semiconductor medium (for example, a solid state drive (solid state drive, SSD)), or the like. In embodiments of this application, the computer may include the apparatus described above.


Although this application is described with reference to embodiments, in a process of implementing this application that claims protection, a person skilled in the art may understand and implement other variations of the disclosed embodiments by viewing the accompanying drawings, the disclosed content, and the appended claims. In the claims, "comprising" does not exclude another component or another step, and "a" or "one" does not exclude a plurality. A single processor or another unit may implement several functions enumerated in the claims. Some measures are recorded in dependent claims that are different from each other, but this does not mean that these measures cannot be combined to produce a better effect.


Although this application is described with reference to specific features and embodiments thereof, it is clear that various modifications and combinations may be made to them without departing from the spirit and scope of this application. Correspondingly, the specification and the accompanying drawings are merely example descriptions of this application defined by the appended claims, and are considered to cover any or all modifications, variations, combinations, or equivalents within the scope of this application. It is clear that a person skilled in the art can make various modifications and variations to this application without departing from the spirit and scope of this application. This application is intended to cover these modifications and variations of this application, provided that they fall within the scope of the claims of this application and their equivalent technologies.

Claims
  • 1. A wireless projection method, applied to a Wi-Fi communication system, and comprising: transmitting, by an electronic device, a scalable video coding slice of a first video frame to a display device in a first video transmission period;obtaining, by the electronic device, channel state information that is fed back by the display device and that is for transmitting the first video frame; andselectively enabling, by the electronic device, a retry mechanism based on the channel state information, wherein the retry mechanism is that the electronic device transmits a first scalable video coding slice of a second video frame to the display device in a first time period in a second video transmission period, and transmits, to the display device in at least one another time period after the first time period, a portion that is of the first scalable video coding slice and that is discarded in the first time period, wherein one another scalable video coding slice of the second video frame is transmitted in the at least one another time period.
  • 2. The wireless projection method according to claim 1, wherein the scalable video coding slice comprises one basic layer and at least one enhancement layer; and that the electronic device transmits a first scalable coding slice of a second video frame in a first time period in a second video transmission period comprises that the electronic device transmits a basic layer and at least one enhancement layer of the first scalable video coding slice of the second video frame to the display device in the first time period in the second video transmission period based on transmission priorities of the basic layer and the at least one enhancement layer of the first scalable video coding slice of the second video frame.
  • 3. The wireless projection method according to claim 1, wherein the scalable video coding slice comprises the one basic layer and the at least one enhancement layer, the at least one another time period comprises a second time period in the second video transmission period, and the another scalable video coding slice comprises a second scalable video coding slice of the second video frame; and that the electronic device transmits, to the display device in at least one another time period after the first time period, a portion that is of the first scalable video coding slice and that is discarded in the first time period comprises: the electronic device transmits, to the display device in the second time period based on the transmission priorities of the basic layer and the at least one enhancement layer of the first scalable video coding slice and transmission priorities of a basic layer and at least one enhancement layer of the second scalable video coding slice, the portion that is of the first scalable video coding slice and that is discarded and the second scalable video coding slice, wherein the transmission priority of the basic layer of the first scalable video coding slice is higher than the transmission priority of the basic layer of the second scalable video coding slice, the transmission priority of the basic layer of the second scalable video coding slice is higher than the transmission priority of the enhancement layer of the first scalable video coding slice, and the transmission priority of the enhancement layer of the first scalable video coding slice is higher than the transmission priority of the enhancement layer of the second scalable video coding slice.
  • 4. The wireless projection method according to claim 1, wherein the discarded portion of the first scalable video coding slice comprises the basic layer and the at least one enhancement layer of the first scalable video coding slice; and that the electronic device transmits, to the display device in at least one another time period after the first time period, a portion that is of the first scalable video coding slice and that is discarded in the first time period comprises: the electronic device transmits the basic layer and the at least one enhancement layer of the first scalable video coding slice to the display device in the at least one another time period after the first time period.
  • 5. The wireless projection method according to claim 1, wherein the discarded portion of the first scalable video coding slice comprises the at least one enhancement layer of the first scalable video coding slice; and that the electronic device transmits, to the display device in at least one another time period after the first time period, a portion that is of the first scalable video coding slice and that is discarded in the first time period comprises: the electronic device transmits the at least one enhancement layer of the first scalable video coding slice to the display device in the at least one another time period after the first time period.
  • 6. The wireless projection method according to claim 1, further comprising: reconstructing, after the at least one another time period, a reference frame based on a successfully transmitted portion of the first scalable video coding slice; andgenerating first scalable video coding of a third video frame based on the reference frame.
  • 7. A wireless projection method, applied to a Wi-Fi communication system, and comprising: receiving, by a display device, a scalable video coding slice of a first video frame that is transmitted by an electronic device to the display device in a first video transmission period;feeding back, by the display device to the electronic device, channel state information for transmitting the first video frame, wherein the electronic device selectively enables a retry mechanism based on the channel state information;decoding, by the display device, the scalable video coding slice transmitted by the electronic device in the first video transmission period;receiving, by the display device, a first scalable video coding slice of a second video frame transmitted by the electronic device in a first time period in a second video transmission period in the retry mechanism;receiving, by the display device, a portion that is of the first scalable video coding slice, that is discarded in the first time period, and that is transmitted by the electronic device in at least one another time period after the first time period, wherein one another scalable video coding slice of the second video frame is transmitted in the at least one another time period; anddecoding, by the display device, the first scalable video coding slice of the second video frame transmitted by the electronic device in the first time period and the portion that is of the first scalable video coding slice, that is discarded in the first time period, and that is transmitted in the at least one another time period after the first time period.
  • 8. The wireless projection method according to claim 7, wherein the scalable video coding slice comprises one basic layer and at least one enhancement layer; and the receiving, by the display device, a first scalable video coding slice of a second video frame transmitted by the electronic device in a first time period in a second video transmission period comprises: receiving, by the display device, a basic layer and at least one enhancement layer of the first scalable video coding slice of the second video frame that are transmitted by the electronic device in the first time period in the second video transmission period based on transmission priorities of the basic layer and the at least one enhancement layer of the first scalable video coding slice.
  • 9. The wireless projection method according to claim 7, wherein the scalable video coding slice comprises the one basic layer and the at least one enhancement layer; and the receiving, by the display device, a portion that is of the first scalable video coding slice, that is discarded in the first time period, and that is transmitted by the electronic device in at least one another time period after the first time period comprises: receiving, by the display device, the basic layer and the at least one enhancement layer of the first scalable video coding slice that are transmitted by the electronic device in the at least one another time period after the first time period.
  • 10. The wireless projection method according to claim 7, wherein the scalable video coding slice comprises the one basic layer and the at least one enhancement layer; and the receiving, by the display device, a portion that is of the first scalable video coding slice, that is discarded in the first time period, and that is transmitted by the electronic device in at least one another time period after the first time period comprises: receiving, by the display device, the at least one enhancement layer of the first scalable video coding slice transmitted by the electronic device in the at least one another time period after the first time period.
  • 11. A wireless projection apparatus, used in a Wi-Fi communication system, and comprising: a transmitter, configured to transmit a scalable video coding slice of a first video frame to a display device in a first video transmission period;a receiver, configured to obtain channel state information that is fed back by the display device and that is for transmitting the first video frame; anda processor, configured to selectively enable a retry mechanism based on the channel state information obtained by the receiver, wherein the retry mechanism is that the transmitter is further configured to: transmit a first scalable video coding slice of a second video frame to the display device in a first time period in a second video transmission period, and transmit, to the display device in at least one another time period after the first time period, a portion that is of the first scalable video coding slice and that is discarded in the first time period, wherein one another scalable video coding slice of the second video frame is transmitted in the at least one another time period.
  • 12. A wireless projection apparatus, used in a Wi-Fi communication system, and comprising: a receiver, configured to receive a scalable video coding slice of a first video frame that is transmitted by an electronic device to a display device in a first video transmission period;a transmitter, configured to feed back, to the electronic device, channel state information for transmitting the first video frame, wherein the electronic device selectively enables a retry mechanism based on the channel state information; anda processor, configured to decode the scalable video coding slice transmitted by the electronic device in the first video transmission period, wherein the receiver is further configured to: receive a first scalable video coding slice of a second video frame transmitted by the electronic device in a first time period in a second video transmission period in the retry mechanism, and receive a portion that is of the first scalable video coding slice and that is discarded in the first time period and that is transmitted by the electronic device in at least one another time period after the first time period, wherein one another scalable video coding slice of the second video frame is transmitted in the at least one another time period; andthe processor is further configured to decode the first scalable video coding slice of the second video frame transmitted by the electronic device in the first time period and the portion that is of the first scalable video coding slice and that is discarded in the first time period and that is transmitted in the at least one another time period after the first time period.
CROSS-REFERENCE TO RELATED APPLICATIONS

This application is a continuation of International Application No. PCT/CN2020/141022, filed on Dec. 29, 2020, the disclosure of which is hereby incorporated by reference in its entirety.

Continuations (1)
Number Date Country
Parent PCT/CN2020/141022 Dec 2020 WO
Child 18343141 US