A METHOD AND DEVICE FOR COMBINING AUDIO AND VIDEO DATA STREAMS

Abstract
A method for combining audio/video data streams includes: acquiring audio/video data of a target data stream, and storing the audio/video data in a first storage space; reading and decoding audio/video frames of the audio/video data from the first storage space according to an order of timestamps of the audio/video frames of the audio/video data; resampling the decoded audio/video frames based on preset audio/video output parameters; generating position indexes according to timestamps of the resampled audio/video frames, and storing the resampled audio/video frames in a second storage space through the position indexes; and periodically extracting the audio/video frames from the second storage space according to the position indexes, and combining the extracted audio/video frames with audio/video frames of other data streams.
Description
FIELD OF DISCLOSURE

The present disclosure generally relates to the field of multimedia technology and, more particularly, relates to a method and device for combining audio and video data streams.


BACKGROUND

With the development of Internet technology and the continuous acceleration of broadband, the Internet is increasingly intertwined with people's lives. Watching live streams has become one of the mainstream forms of entertainment. At present, live broadcast modes that combine multiple channels of data streams, such as multi-person video sessions, mic-connected live broadcasts, etc., have continuously emerged and are widely accepted.


In live broadcast modes that simultaneously take multiple channels of on-demand streams or live streams as input, each channel of data stream may have a different resolution, code rate, audio sampling rate, audio/video encoding format, etc. This leads to an issue of combining multiple channels of data streams that is not encountered in traditional live broadcast systems with a single channel of data stream input. At the same time, affected by factors such as the stream-pushing state of an anchor terminal and the quality of the network transmission, the stream-pulling process of each channel of data stream may be unstable, which makes the issue of combining multiple channels of input data streams even more complicated. Therefore, there is currently a need for a method that is suitable for multiple channels of input data streams, that can cope with the influence of data stream network fluctuations, and that can achieve the data stream combining process conveniently and efficiently.


BRIEF SUMMARY OF THE DISCLOSURE

To solve the problems in the existing technologies, the embodiments of the present disclosure provide a method and device for combining audio/video data streams. The technical solutions are as follows.


In one aspect, a method for combining audio/video data streams is provided. The method includes:


acquiring audio/video data of a target data stream, and storing the audio/video data in a first storage space;


reading and decoding audio/video frames of the audio/video data from the first storage space according to an order of timestamps of the audio/video frames of the audio/video data;


resampling the decoded audio/video frames based on preset audio/video output parameters;


generating position indexes according to timestamps of the resampled audio/video frames, and storing the resampled audio/video frames in a second storage space through the position indexes; and


periodically extracting the audio/video frames from the second storage space according to the position indexes, and combining the extracted audio/video frames with audio/video frames of other data streams.


Optionally, storing the audio/video data in the first storage space includes:


storing the audio data and the video data included in the audio/video data in an audio storage space and a video storage space of the first storage space, respectively.


Optionally, the method further includes:


if the target data stream is a live stream, in the process of storing the audio/video data in the first storage space, when the first storage space is full, deleting the audio/video data stored earliest in the first storage space, and continuing to store the audio/video data; and


if the target data stream is an on-demand stream, in the process of storing the audio/video data in the first storage space, when the first storage space is full, waiting for the audio/video data stored in the first storage space to finish streaming, and then continuing to store the audio/video data.


Optionally, the method further includes:


determining a size of the first storage space according to a preset maximum playback delay and a maximum network delay; and


in the process of acquiring the audio/video data of the target data stream, adjusting the size of the first storage space according to detected playback delay requirement and/or real-time network delay.


Optionally, the method further includes:


periodically detecting a timestamp-based time-length of the audio/video frames of the audio/video data read and decoded from the first storage space; and


if the timestamp-based time-length is greater than a product of a preset playing rate and a physical time-length used for reading and decoding the audio/video data, then suspending reading the audio/video frames of the audio/video data from the first storage space during a current cycle.


Optionally, periodically detecting the timestamp-based time-length of the audio/video frames of the audio/video data read and decoded from the first storage space includes:


recording a timestamp of the initial audio frame and a timestamp of the initial video frame of the audio/video data read and decoded from the first storage space;


periodically detecting a timestamp of the latest audio frame and a timestamp of the latest video frame of the audio/video data read and decoded from the first storage space; and


determining a difference between a smaller value of the timestamp of the latest audio frame and the timestamp of the latest video frame and a smaller value of the timestamp of the initial audio frame and the timestamp of the initial video frame as the timestamp-based time-length of the audio/video frames of the audio/video data.


Optionally, reading and decoding the audio/video frames of the audio/video data from the first storage space in an order of the timestamps of the audio/video frames of the audio/video data includes:


detecting a timestamp of the latest audio frame and a timestamp of the latest video frame of the decoded audio/video data;


if the timestamp of the latest audio frame is greater than or equal to the timestamp of the latest video frame, reading and decoding the video frames of the audio/video data from the video storage space according to an order of the timestamps of the video frames; and


if the timestamp of the latest audio frame is less than the timestamp of the latest video frame, reading and decoding the audio frames of the audio/video data from the audio storage space according to an order of the timestamps of the audio frames.


Optionally, resampling the decoded audio/video frames based on the preset audio/video output parameters includes:


determining, based on a preset standard video frame rate and a timestamp of a decoded video frame, a position index corresponding to the decoded video frame; and


updating the timestamp of the decoded video frame according to the position index.


Optionally, the method further includes:


if there are a plurality of decoded video frames that correspond to a same position index, retaining the last video frame of the plurality of decoded video frames, and deleting the other video frames of the plurality of decoded video frames; and


if position indexes corresponding to the decoded video frames are discontinuous, determining all vacant position indexes, and copying a video frame corresponding to an adjacent position index of a vacant position index as a video frame corresponding to the vacant position index.


Optionally, resampling the decoded audio/video frames based on the preset audio/video output parameters includes:


converting the decoded audio frames based on a preset audio sampling rate and the number of audio tracks; and


splitting and reorganizing the converted audio frames according to a preset number of sampling points, and determining a timestamp of the first sampling point of a reorganized audio frame as a timestamp of the reorganized audio frame.


Optionally, the method further includes:


determining, according to the preset audio sampling rate and number of sampling points, and the timestamp of the reorganized audio frame, a position index corresponding to a resampled audio frame;


if there are a plurality of resampled audio frames that correspond to a same position index, retaining an audio frame of the plurality of resampled audio frames that is last subjected to the resampling process, and deleting the other audio frames of the plurality of resampled audio frames; and


if position indexes corresponding to the resampled audio frames are discontinuous, determining all vacant position indexes, and setting a silent audio frame for a vacant position index.


Optionally, generating the position indexes according to the timestamps of the resampled audio/video frames includes:


determining a timestamp offset of the target data stream according to a service startup time, and a timestamp of the first audio/video frame in the second storage space and a storing time corresponding to the first audio/video frame;


adjusting a timestamp of a resampled audio/video frame according to the timestamp offset; and


regenerating a position index of the resampled audio/video frame based on the adjusted timestamp of the audio/video frame.


Optionally, storing the resampled audio/video frames in the second storage space through the position indexes includes:


dividing the second storage space into an audio storage space and a video storage space, each of which includes a preset number of cache positions; and


respectively storing the resampled audio frames and the resampled video frames of the audio/video data into the audio storage space and the video storage space according to the position indexes of the audio/video frames.


Optionally, periodically extracting the audio/video frames from the second storage space according to the position indexes includes:


determining a standard frame time-length of the audio/video frames based on the preset audio/video output parameters;


periodically determining, at an interval of the standard frame time-length, a to-be-extracted index according to a current time and the standard frame time-length; and


extracting, from the second storage space, an audio/video frame whose position index is the to-be-extracted index.


Optionally, the method further includes:


creating a silent audio frame if there is no corresponding audio frame, in the second storage space, whose position index is the to-be-extracted index; and


copying the most recently extracted video frame if there is no corresponding video frame, in the second storage space, whose position index is the to-be-extracted index.


In another aspect, a device for combining audio/video data streams is provided. The device includes:


a first storage module that is configured to acquire audio/video data of a target data stream, and store the audio/video data in a first storage space;


a decoding module that is configured to read and decode audio/video frames of the audio/video data from the first storage space according to an order of timestamps of the audio/video frames of the audio/video data;


a resampling module that is configured to resample the decoded audio/video frames based on preset audio/video output parameters;


a second storage module that is configured to generate position indexes according to timestamps of the resampled audio/video frames, and store the resampled audio/video frames in a second storage space through the position indexes; and


a combining module that is configured to periodically extract the audio/video frames from the second storage space according to the position indexes, and combine the extracted audio/video frames with audio/video frames of other data streams.


Optionally, the first storage module is configured to:


store the audio data and the video data included in the audio/video data in an audio storage space and a video storage space of the first storage space, respectively.


Optionally, the first storage module is further configured to:


if the target data stream is a live stream, in the process of storing the audio/video data in the first storage space, when the first storage space is full, delete the audio/video data stored earliest in the first storage space, and continue to store the audio/video data; and


if the target data stream is an on-demand stream, in the process of storing the audio/video data in the first storage space, when the first storage space is full, wait for the audio/video data stored in the first storage space to finish streaming, and then continue to store the audio/video data.


Optionally, the first storage module is further configured to:


determine a size of the first storage space according to a preset maximum playback delay and a maximum network delay; and


in the process of acquiring the audio/video data of the target data stream, adjust the size of the first storage space according to detected playback delay requirement and/or real-time network delay.


Optionally, the decoding module is further configured to:


periodically detect a timestamp-based time-length of the audio/video frames of the audio/video data read and decoded from the first storage space; and


if the timestamp-based time-length is greater than a product of a preset playing rate and a physical time-length used for reading and decoding the audio/video data, then suspend reading the audio/video frames of the audio/video data from the first storage space during a current cycle.


Optionally, the decoding module is configured to:


record a timestamp of the initial audio frame and a timestamp of the initial video frame of the audio/video data read and decoded from the first storage space;


periodically detect a timestamp of the latest audio frame and a timestamp of the latest video frame of the audio/video data read and decoded from the first storage space; and


determine a difference between a smaller value of the timestamp of the latest audio frame and the timestamp of the latest video frame and a smaller value of the timestamp of the initial audio frame and the timestamp of the initial video frame as the timestamp-based time-length of the audio/video frames of the audio/video data.


Optionally, the decoding module is configured to:


detect a timestamp of the latest audio frame and a timestamp of the latest video frame of the decoded audio/video data;


if the timestamp of the latest audio frame is greater than or equal to the timestamp of the latest video frame, read and decode the video frames of the audio/video data from the video storage space according to an order of the timestamps of the video frames; and


if the timestamp of the latest audio frame is less than the timestamp of the latest video frame, read and decode the audio frames of the audio/video data from the audio storage space according to an order of the timestamps of the audio frames.


Optionally, the resampling module is configured to:


determine, based on a preset standard video frame rate and a timestamp of a decoded video frame, a position index corresponding to the decoded video frame; and


update the timestamp of the decoded video frame according to the position index.


Optionally, the resampling module is further configured to:


if there are a plurality of decoded video frames that correspond to a same position index, retain the last video frame of the plurality of decoded video frames, and delete the other video frames of the plurality of decoded video frames; and


if position indexes corresponding to the decoded video frames are discontinuous, determine all vacant position indexes, and copy a video frame corresponding to an adjacent position index of a vacant position index as a video frame corresponding to the vacant position index.


Optionally, the resampling module is configured to:


convert the decoded audio frames based on a preset audio sampling rate and the number of audio tracks; and


split and reorganize the converted audio frames according to a preset number of sampling points, and determine a timestamp of the first sampling point of a reorganized audio frame as a timestamp of the reorganized audio frame.


Optionally, the resampling module is further configured to:


determine, according to the preset audio sampling rate and number of sampling points, and the timestamp of the reorganized audio frame, a position index corresponding to a resampled audio frame;


if there are a plurality of resampled audio frames that correspond to a same position index, retain an audio frame of the plurality of resampled audio frames that is last subjected to the resampling process, and delete the other audio frames of the plurality of resampled audio frames; and


if position indexes corresponding to the resampled audio frames are discontinuous, determine all vacant position indexes, and set a silent audio frame for a vacant position index.


Optionally, the second storage module is configured to:


determine a timestamp offset of the target data stream according to a service startup time, and a timestamp of a first audio/video frame in the second storage space and a storing time corresponding to the first audio/video frame;


adjust a timestamp of a resampled audio/video frame according to the timestamp offset; and


regenerate a position index of the resampled audio/video frame based on the adjusted timestamp of the audio/video frame.


Optionally, the second storage module is configured to:


divide the second storage space into an audio storage space and a video storage space, each of which includes a preset number of cache positions; and


respectively store the resampled audio frames and the resampled video frames of the audio/video data into the audio storage space and the video storage space according to the position indexes of the audio/video frames.


Optionally, the combining module is configured to:


determine a standard frame time-length of the audio/video frames based on the preset audio/video output parameters;


periodically determine, at an interval of the standard frame time-length, a to-be-extracted index according to a current time and the standard frame time-length; and


extract, from the second storage space, an audio/video frame whose position index is the to-be-extracted index.


Optionally, the combining module is further configured to:


create a silent audio frame if there is no corresponding audio frame, in the second storage space, whose position index is the to-be-extracted index; and


copy the most recently extracted video frame if there is no corresponding video frame, in the second storage space, whose position index is the to-be-extracted index.


The beneficial effects brought by the technical solutions provided by the embodiments of the present disclosure are:


In the embodiments of the present disclosure, the audio/video data of a target data stream is acquired, and the audio/video data is stored in the first storage space. According to the order of the timestamps of the audio/video frames of the audio/video data, the audio/video frames of the audio/video data are read and decoded from the first storage space. Based on the preset audio/video output parameters, the decoded audio/video frames are subjected to the resampling process. Position indexes are generated according to the timestamps of the resampled audio/video frames, and the resampled audio/video frames are stored in the second storage space based on the position indexes. The audio/video frames are periodically extracted from the second storage space according to the position indexes, and the extracted audio/video frames are combined with audio/video frames of other data streams. In this way, the video composition server mitigates the influence of network fluctuations through two levels of data cache. At the same time, by introducing the resampling technology, and by storing and retrieving the audio/video data using timestamps and position indexes, synchronization of multiple data streams may be accomplished, and the process of combining data streams may be achieved more conveniently and efficiently. The methods provided by the present disclosure decouple the stream-pulling process from the combining process for each of the various simultaneously input data streams, which reduces the influence of the stream-pulling process on the combining process. Even if there is a problem in the stream-pulling process for one channel of data stream or there is a problem with the data stream itself, it will not affect other channels of data streams, thereby ensuring the stability of the combined pictures and sound.





BRIEF DESCRIPTION OF THE DRAWINGS

To make the technical solutions in the embodiments of the present disclosure clearer, a brief introduction of the accompanying drawings consistent with descriptions of the embodiments will be provided hereinafter. It is to be understood that the following described drawings are merely some embodiments of the present disclosure. Based on the accompanying drawings and without creative efforts, persons of ordinary skill in the art may derive other drawings.



FIG. 1 is a flowchart of a method for combining audio/video data streams according to some embodiments of the present disclosure;



FIG. 2 is a schematic diagram of a set of resampled video frames according to some embodiments of the present disclosure;



FIG. 3 is a schematic diagram of another set of resampled video frames according to some embodiments of the present disclosure;



FIG. 4 is a schematic diagram of resampled audio frames according to some embodiments of the present disclosure;



FIG. 5 is a schematic diagram of storage in a second storage space according to some embodiments of the present disclosure;



FIG. 6 is a schematic structural diagram of a device for combining audio/video data streams according to some embodiments of the present disclosure; and



FIG. 7 is a schematic structural diagram of a video composition server according to some embodiments of the present disclosure.





DETAILED DESCRIPTION

To make the objective, technical solutions, and advantages of the present disclosure clearer, embodiments of the present disclosure will be described in detail hereinafter with reference to the accompanying drawings.


The embodiments of the present disclosure provide a method for combining audio/video data streams. The execution entity of the method may be a video composition server. The video composition server may be a backend server of a live broadcast platform that is mainly used for combining multiple channels of audio/video data streams and outputting the combined stream. The video composition server may simultaneously receive inputs of multiple channels of audio/video data streams, perform initial buffering, decoding, resampling, and re-buffering processes for each channel of audio/video data stream, and then combine the multiple channels of processed audio/video data streams. The video composition server may support multiple video-processing processes running simultaneously, where each video-processing process is responsible for the processing, described below, of one channel of audio/video data stream. The video composition server may include a processor, a memory, and a transceiver. The processor may be configured to perform the combining process of the audio/video data streams shown in the following process. The memory may be used to store data required and generated during the process, such as the audio/video frames of the audio/video data streams and the configuration files used in the combining process. The transceiver may be configured to receive and transmit the relevant data during the process, such as receiving the raw audio/video data and outputting the combined audio/video data.


The processing flow shown in FIG. 1 will be described in detail hereinafter with reference to specific implementations. The content may be as follows.


Step 101: acquire audio/video data of a target data stream, and store the audio/video data in a first storage space.


In one implementation, the video composition server may pull an audio/video data stream through the network or read an audio/video data stream from the local storage device, and then combine the acquired audio/video data stream with other audio/video data streams according to requirements. Specifically, taking the target data stream as an example, when a target data stream is added to the video composition server, or when the stream-pulling address of the target data stream changes, the video composition server may acquire the audio/video data of the target data stream based on the stream-pulling address of the target data stream, and create a video-processing process for the target data stream. Afterwards, the video composition server may first store the audio/video data of the target data stream in a first storage space through the video-processing process. Here, the first storage space may be viewed as a preset first-level cache space for storing newly pulled, unprocessed raw data. In addition, if the target data stream needs to be played with a delay, the frame data within the delay period may be stored in the first storage space.


Optionally, to facilitate the subsequent data processing, the audio/video data of the target data stream may be divided. Correspondingly, part of the process of Step 101 may be as follows: storing the audio data and the video data included in the audio/video data in an audio storage space and a video storage space of the first storage space, respectively.


In one implementation, the technical staff may specifically divide the first storage space into an audio storage space for storing audio data and a video storage space for storing video data. Here, the structures and attributes of the audio storage space and the video storage space are identical, with only the types of stored data being different. In this way, after acquiring the audio/video data of the target data stream, the video composition server may divide the audio/video data into audio data and video data according to the data type, and then store the divided audio data and video data in the audio storage space and the video storage space of the first storage space, respectively. This will facilitate a free selection of audio data or video data for the subsequent data processing.


Optionally, the first storage space may support two storage modes: a congestion mode and an overlay mode. For on-demand streams and live streams, different storage modes may be selected. The corresponding process may be as follows: if the target data stream is a live stream, in the process of storing the audio/video data in the first storage space, when the first storage space is full, deleting the audio/video data stored earliest in the first storage space, then continuing to store the audio/video data; if the target data stream is an on-demand stream, in the process of storing the audio/video data in the first storage space, when the first storage space is full, continuing to store the audio/video data after waiting for the audio/video data stored in the first storage space to finish streaming.


In one implementation, in the process of storing the audio/video data of the target data stream in the first storage space, the video composition server may detect whether the first storage space is full. If the first storage space is full, different processes may be employed depending on the data stream type of the target data stream. Specifically, if the target data stream is a live stream, an overlay mode may be selected, that is, delete the audio/video data stored earliest in the first storage space, then continue to store the audio/video data. This avoids a high delay in outputting the audio/video data. If the target data stream is an on-demand stream, i.e., a user does not have a strict requirement on the timeliness of the data, a congestion mode may be selected, that is, continue to store the audio/video data after waiting for the audio/video data stored in the first storage space to finish streaming. This ensures that the audio/video data being viewed remains continuous without interruption. In addition, when the storing operation of the data stream is congested, the data stream acquisition operation may also be synchronously congested, thereby avoiding continuously acquiring extra audio/video data.
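

For illustration only, the two storage modes may be sketched as follows (a Python sketch with hypothetical names; the bounded queue and the blocking behavior are assumptions rather than a required implementation):

    from collections import deque
    from threading import Condition

    class FirstLevelCache:
        """Sketch of the first storage space with overlay and congestion modes."""

        def __init__(self, capacity, is_live_stream):
            self.capacity = capacity              # maximum number of buffered frames
            self.is_live_stream = is_live_stream  # True: overlay mode; False: congestion mode
            self.frames = deque()
            self.cond = Condition()

        def store(self, frame):
            with self.cond:
                if self.is_live_stream:
                    # Overlay mode: delete the earliest stored frame when full.
                    if len(self.frames) >= self.capacity:
                        self.frames.popleft()
                else:
                    # Congestion mode: wait until stored frames finish streaming out.
                    while len(self.frames) >= self.capacity:
                        self.cond.wait()
                self.frames.append(frame)

        def read(self):
            with self.cond:
                frame = self.frames.popleft() if self.frames else None
                self.cond.notify_all()
                return frame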


Optionally, the size of the first storage space may be set according to a tolerable network delay and a playback delay. The corresponding process may be as follows: determining a size of the first storage space according to a preset maximum playback delay and a maximum network delay; and, in the process of acquiring the audio/video data of the target data stream, adjusting the size of the first storage space according to the detected playback delay requirement and/or real-time network delay.


In one implementation, the video composition server may set a maximum playback delay (i.e., a time interval between when the data is generated and when it is played) and a maximum network delay (i.e., a time interval between the arrival of a preceding frame of data and the arrival of the following frame of data) for each channel of audio/video data stream, and then set the size of the first storage space based on the maximum playback delay and the maximum network delay. It is to be understood that after storing a frame of data in the first storage space, the video composition server needs to process all the frames of data already stored before that frame of data may be processed. Accordingly, if the first storage space is too large, the playback delay will be excessively high. However, if the first storage space is too small, when the network delay is high, all the frames of data stored in the first storage space may have already finished processing even before the following frame of data arrives. This may greatly reduce the continuity in outputting the audio/video data stream. Further, in the process of acquiring the audio/video data of the target data stream, the video composition server may also detect in real time the playback delay requirement and the real-time network delay, and then adjust the size of the first storage space according to the playback delay requirement and the real-time network delay.


Step 102: read and decode audio/video frames of the audio/video data from the first storage space in an order of timestamps of the audio/video frames of the audio/video data.


In one implementation, due to the influence of network fluctuations and the like, the rate at which the data stream is acquired and added to the first storage space may fluctuate. At the same time, the arrival times of audio frames and video frames that have the same timestamp may also be inconsistent. Accordingly, after the video composition server stores the audio/video data of the target data stream in the first storage space, the timestamps of the audio/video frames of the audio/video data may be determined first. The audio/video frames of the audio/video data are then read from the first storage space in order of increasing timestamp and are decoded.


Optionally, the speed of decoding an audio/video data stream may be controlled against real time. The corresponding process may be as follows: periodically detecting the timestamp-based time-length of the audio/video frames of the audio/video data read and decoded from the first storage space; and if the timestamp-based time-length is greater than the product of the preset playing rate and the physical time-length used for reading and decoding the audio/video data, suspending reading the audio/video frames of the audio/video data from the first storage space in the current cycle.


In one implementation, the video composition server may decode the audio/video data according to a certain rate requirement, that is, the timestamp-based time-length of the audio/video data decoded within a certain physical time should be fixed. When the timestamp-based time-length is greater than N times the physical time, it means that the decoding speed is too fast. The decoding speed should be reduced at this moment, and the video composition server may sleep for a period. Here, N is the playing rate set for the audio/video data stream. Accordingly, the video composition server may periodically detect the timestamp-based time-length of the target data stream read and decoded from the first storage space, and then determine whether the timestamp-based time-length is greater than the product of the preset playing rate and the physical time-length used to read and decode the audio/video data. If the timestamp-based time-length is greater than that product, reading the audio/video frames from the first storage space may be suspended in the current cycle. For example, with a playing rate of 1, the physical time at which the decoding process starts is T1, and the timestamp of the first audio/video frame is TS1. When a preset cycle of the decoding process starts, the current physical time is obtained as T2, and the timestamp of the already decoded audio/video frames is TS2. When T2−T1<TS2−TS1, the video composition server may suspend reading the audio/video frames in the current cycle.


Optionally, the foregoing process of determining the timestamp-based time-length may be specifically as follows: recording a timestamp of the initial audio frame and a timestamp of the initial video frame of the audio/video data read and decoded from the first storage space; periodically detecting, from the first storage space, the timestamp of the latest audio frame and the timestamp of the latest video frame of the read and decoded audio/video data; and determining a difference between the smaller value of the timestamp of the latest audio frame and the timestamp of the latest video frame and the smaller value of the timestamp of the initial audio frame and timestamp of the initial video frame as the timestamp-based time-length of the audio/video frames of the audio/video data.


In one implementation, the video composition server may record the timestamp of the initial audio frame and the timestamp of the initial video frame of the audio/video data when beginning to read and decode the audio/video data of the target data stream. In the decoding process, the video composition server may periodically detect the timestamp of the latest audio frame and the timestamp of the latest video frame of the audio/video data read and decoded from the first storage space. Accordingly, the smaller of the timestamp of the latest audio frame and the timestamp of the latest video frame, and the smaller of the timestamp of the initial audio frame and the timestamp of the initial video frame, may be respectively determined, and the difference between the two is determined as the timestamp-based time-length of the audio/video frames of the audio/video data.
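

For illustration only, the pacing check described above may be sketched as follows (hypothetical names; the wall-clock bookkeeping is an assumption):

    import time

    def should_suspend_reading(decode_start_time, init_audio_ts, init_video_ts,
                               latest_audio_ts, latest_video_ts, playing_rate=1.0):
        """Return True if decoding has run ahead of the preset playing rate."""
        # Timestamp-based time-length: the smaller of the latest timestamps minus
        # the smaller of the initial timestamps.
        ts_length = min(latest_audio_ts, latest_video_ts) - min(init_audio_ts, init_video_ts)
        # Physical time-length spent reading and decoding so far.
        physical_length = time.time() - decode_start_time
        return ts_length > playing_rate * physical_length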


Optionally, based on the aforementioned process of separately storing the audio data and the video data in the first storage space, data may be freely selected to be read and decoded from the audio storage space or the video storage space according to certain rules. Correspondingly, the process of Step 102 may be as follows: detecting the timestamp of the latest audio frame and the timestamp of the latest video frame of the decoded audio/video data; if the timestamp of the latest audio frame is greater than or equal to the timestamp of the latest video frame, reading and decoding video frames of the audio/video data from the video storage space according to an order of the timestamps of the video frames; and if the timestamp of the latest audio frame is smaller than the timestamp of the latest video frame, reading and decoding audio frames of the audio/video data from the audio storage space according to an order of the timestamps of the audio frames.


In one implementation, in the process of reading and decoding the audio/video frames from the first storage space, the video composition server may detect the timestamp of the latest audio frame and the timestamp of the latest video frame of the decoded audio/video data, and then compare the two timestamps. If the timestamp of the latest audio frame is greater than or equal to the timestamp of the latest video frame, video frames of the audio/video data are read and decoded from the video storage space according to an order of the timestamps of the video frames. If the timestamp of the latest audio frame is less than the timestamp of the latest video frame, audio frames of the audio/video data are read and decoded from the audio storage space according to an order of the timestamps of the audio frames. This keeps the timestamp gap between the most recently decoded audio frame and the most recently decoded video frame as small as possible, so as to ensure the consistency of the logically corresponding times of the audio frames and the video frames.
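

For illustration only, this selection rule may be sketched as follows (hypothetical data structures; frames are assumed to be available as timestamp-ordered lists):

    def read_and_decode_step(audio_frames, video_frames,
                             latest_audio_ts, latest_video_ts, decode):
        """One scheduling step: decode from whichever track lags behind.

        audio_frames / video_frames are lists of (timestamp, encoded_frame) pairs
        sorted by timestamp; decode(kind, frame) is the decoder callback.
        """
        if latest_audio_ts >= latest_video_ts and video_frames:
            ts, frame = video_frames.pop(0)      # audio is ahead (or tied): advance video
            decode("video", frame)
            return latest_audio_ts, ts
        if audio_frames:
            ts, frame = audio_frames.pop(0)      # video is ahead: advance audio
            decode("audio", frame)
            return ts, latest_video_ts
        return latest_audio_ts, latest_video_ts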


Step 103: resample the decoded audio/video frames based on preset audio/video output parameters.


In one implementation, since different data streams have different video frame rates, audio sampling rates, and numbers of audio tracks, after decoding the audio/video frames of the target data stream, the video composition server may resample the decoded audio/video frames according to the preset audio/video output parameters including a video frame rate, audio sampling rate, and the like. This may allow the audio/video frames from different data streams to have the same frame rate, audio sampling rate, and number of audio tracks, etc., after the resampling process, thereby facilitating further control of the combining process.


Optionally, the process for resampling the video frames may be as follows: determining a position index corresponding to a decoded video frame based on the preset standard video frame rate and the timestamp of the decoded video frame; and updating the timestamp of the decoded video frame according to the position index.


In one implementation, after decoding a video frame of the target data stream, the video composition server may first determine a position index corresponding to the decoded video frame based on the preset standard video frame rate and the timestamp of the decoded video frame, and then update the timestamp of the decoded video frame according to the position index. Specifically, according to the preset standard video frame rate Fr, the time-length that a video frame lasts is first calculated as Tvf=1/Fr. Afterwards, the timestamp of each decoded video frame is divided by Tvf and the result is rounded down to obtain the position index corresponding to that decoded video frame. The updated timestamp of a decoded video frame is then the product of its position index and Tvf.
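

For illustration only, this index calculation may be sketched as follows (timestamps and the frame rate Fr are assumed to be expressed in consistent units):

    import math

    def resample_video_timestamp(timestamp, standard_frame_rate):
        """Map a decoded video frame onto the standard frame grid.

        Returns (position_index, updated_timestamp), where the index is the
        timestamp divided by the frame duration Tvf = 1/Fr, rounded down.
        """
        tvf = 1.0 / standard_frame_rate           # time-length one video frame lasts
        position_index = math.floor(timestamp / tvf)
        updated_timestamp = position_index * tvf  # snap the timestamp to the grid
        return position_index, updated_timestamp

    # Example: with Fr = 1/T, a frame whose timestamp is 1.5T gets position index 1
    # and its timestamp is updated to T.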


Optionally, if there are multiple decoded video frames that correspond to the same position index, then the last video frame of the multiple decoded video frames is retained while the other decoded video frames in the multiple video frames are deleted. If the position indexes corresponding to the decoded video frames are discontinuous, all the vacant position indexes are determined, and a video frame corresponding to an adjacent position index of each vacant position index is copied as the video frame corresponding to the vacant position index.


In one implementation, after determining the position indexes corresponding to the decoded video frames, the video composition server may further process the position indexes. If there are multiple decoded video frames that correspond to the same position index, the last video frame among the multiple decoded video frames having the same position index is retained while the other decoded video frames are deleted. If the position indexes of the decoded video frames are discontinuous because the timestamp gaps of adjacent video frames are relatively large, all the vacant position indexes may be determined first, and then the video frame of an adjacent position index of a vacant position index is copied as the video frame corresponding to that vacant position index, so that the position indexes become continuous.


The above process may refer to FIG. 2 and FIG. 3. The lower arrows in the figures represent the time direction, the parallelograms represent the video frames, and the values inside the parallelograms represent the timestamps of the video frames. If the standard video frame rate is Fr=1/T, then the time-length Tvf that a video frame lasts is T.



FIG. 2 illustrates the video frames and the corresponding timestamps after the resampling process for a data stream with a frame rate of 2/T. It can be seen that the video frames with the timestamps of T and 1.5T correspond to the same position index 1, so only the video frame of 1.5T is retained, and at the same time the timestamp of the video frame of 1.5T is changed to T. After the resampling process, the video data having a frame rate of 2/T is converted into video data having a frame rate of 1/T.



FIG. 3 illustrates the video frames and the corresponding timestamps after the resampling process for a data stream with a frame rate close to 1/T. It can be seen that the processing of video frames with the timestamps of T and 1.5T is the same as illustrated in FIG. 2. A video frame with a timestamp of 6.3T is calculated to have a position index of 6, and the timestamp of that video frame is changed to 6T. At the same time, a video frame with a position index of 5 is missing in the output video frames, so the video frame with a position index of 4 may be copied as the video frame corresponding to the position index 5.
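

For illustration only, the deduplication and gap-filling illustrated in FIG. 2 and FIG. 3 may be sketched as follows (a plain dictionary keyed by position index stands in for whatever structure an implementation uses):

    def normalize_video_indexes(indexed_frames):
        """Resolve duplicate and vacant position indexes of resampled video frames.

        indexed_frames: list of (position_index, video_frame) in decoding order.
        Returns a dict mapping every index in the covered range to exactly one frame.
        """
        by_index = {}
        for idx, frame in indexed_frames:
            by_index[idx] = frame                  # later frames overwrite duplicates
        if not by_index:
            return by_index
        for idx in range(min(by_index), max(by_index) + 1):
            if idx not in by_index:
                by_index[idx] = by_index[idx - 1]  # copy the adjacent (previous) frame
        return by_index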


Optionally, the process for resampling the audio frames may be as follows: converting the decoded audio frames based on a preset audio sampling rate and number of audio tracks; splitting and reorganizing the converted audio frames according to a preset number of sampling points, and determining a timestamp of the first sampling point of a reorganized audio frame as the timestamp of the reorganized audio frame.


In one implementation, since many audio encoders require a fixed number of sampling points in the input audio frames, in order to facilitate audio mixing and sound effects processing for different data streams, after decoding the audio frames of the target data stream, the video composition server may convert the sampling rate and number of audio tracks of the decoded audio frames based on a preset audio sampling rate and number of audio tracks. According to a preset number of sampling points, the video composition server may further split and reorganize the converted audio frames, and determine a timestamp of the first sampling point of a reorganized audio frame as the timestamp of the reorganized audio frame. FIG. 4 illustrates a schematic diagram of resampled audio frames with a preset number of sampling points of 1024. In the figure, the arrows pointing towards the parallelograms represent the splitting points for the audio frames, the parallelograms in the upper row are the audio frames before the reorganization, while the parallelograms in the lower row are the audio frames after the reorganization. The numbers inside the parallelograms are the numbers of sampling points included in the audio frames. It can be seen from FIG. 4 that, after the conversion of the audio sampling rate and the number of audio tracks, the audio frames may have different numbers of sampling points. After splitting and reorganization, each audio frame then contains 1024 sampling points.
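

For illustration only, the splitting and reorganizing step may be sketched as follows (the 44100 Hz sampling rate is an assumed example; 1024 sampling points per frame follows the example above):

    def regroup_audio_frames(converted_frames, samples_per_frame=1024, sample_rate=44100):
        """Split and reorganize converted audio frames into fixed-size frames.

        converted_frames: list of (timestamp, samples) pairs after sampling-rate
        and audio-track conversion; samples is a flat list of sampling points.
        The timestamp of each output frame is that of its first sampling point.
        """
        pending = []        # (timestamp_of_sampling_point, value) pairs not yet emitted
        output = []
        for ts, samples in converted_frames:
            for i, value in enumerate(samples):
                pending.append((ts + i / sample_rate, value))
            while len(pending) >= samples_per_frame:
                chunk, pending = pending[:samples_per_frame], pending[samples_per_frame:]
                output.append((chunk[0][0], [v for _, v in chunk]))
        return output, pending   # pending holds leftover sampling points for the next frames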


Optionally, the resampled audio frames may be similarly adjusted based on the position indexes. The specific process may be as follows: determining position indexes corresponding to the resampled audio frames according to the preset audio sampling rate, number of sampling points, and the timestamps of the audio frames; if there are multiple resampled audio frames that correspond to the same position index, retaining the audio frame that is resampled last in the multiple resampled audio frames, while deleting the other audio frames in the multiple audio frames; if the position indexes corresponding to the audio frames are discontinuous, determining all vacant position indexes and setting a silent audio frame for each vacant position index.


In one implementation, similar to the video resampling process, the video composition server may determine position indexes corresponding to the resampled audio frames according to the preset audio sampling rate, number of sampling points, and the timestamps of the audio frames. If there are multiple resampled audio frames that correspond to the same position index, then the last audio frame in the multiple audio frames with the same position index is retained while the other audio frames are deleted. If the position indexes of the resampled audio frames are discontinuous because the timestamp gaps of adjacent audio frames are relatively large, all the vacant position indexes may be determined first, then a silent audio frame is set for each corresponding vacant position index, to allow the position indexes to be continuous.
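

For illustration only, the corresponding index handling for audio may be sketched as follows (a silent frame is modeled simply as a frame of zero-valued samples; the default sampling rate is an assumed example):

    def normalize_audio_indexes(reorganized_frames, sample_rate=44100, samples_per_frame=1024):
        """Assign position indexes to reorganized audio frames, deduplicate, and fill gaps.

        reorganized_frames: list of (timestamp, samples) pairs in resampling order.
        """
        frame_duration = samples_per_frame / sample_rate
        by_index = {}
        for ts, samples in reorganized_frames:
            idx = int(ts // frame_duration)
            by_index[idx] = samples                           # keep the frame resampled last
        if by_index:
            for idx in range(min(by_index), max(by_index) + 1):
                if idx not in by_index:
                    by_index[idx] = [0] * samples_per_frame   # set a silent audio frame
        return by_index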


Step 104: generate position indexes according to timestamps of the resampled audio/video frames, and store the resampled audio/video frames in a second storage space through the position indexes.


In one implementation, after resampling the audio/video frames, the video composition server may generate position indexes according to the timestamps of the resampled audio/video frames, and then store the resampled audio/video frames in a second storage space through the position indexes. This allows the audio frames from different data streams that have the same position index to logically correspond to the same physical time, and the video frames from different data streams that have the same position index to also logically correspond to the same physical time. This, in combination with the same lasting time-length among the resampled audio/video frames from different data streams, facilitates synchronization of different data streams in the subsequent combining process. Here, the second storage space may be viewed as a preset second-level cache space for storing the decoded and resampled data.


Optionally, the approach for generating the position indexes of the resampled audio/video frames may be specifically as follows: determining a timestamp offset of the target data stream according to the service startup time, the timestamp of the first audio/video frame in the second storage space and the storing time corresponding to the first audio/video frame; adjusting the timestamps of the resampled audio/video frames according to the timestamp offset; and regenerating the position indexes of the audio/video frames based on the adjusted timestamps of the resampled audio/video frames.


In one implementation, when initiating a video-combining service, the video composition server may record the current time Tstart (i.e., the service startup time) and, when the first audio/video frame of the target data stream is stored in the second storage space, record the timestamp TSin of that frame and the corresponding storing time Tcurr, and then calculate the timestamp offset of the target data stream: Off=Tcurr−Tstart−TSin. Thereafter, for all the resampled audio/video frames stored in the second storage space, the video composition server may add the timestamp offset Off to the timestamp of an audio/video frame as a new timestamp of the audio/video frame. In this way, the timestamps of the audio/video frames of data streams acquired at different times or data streams that have different startup timestamps are converted into a unified system timestamp, which facilitates the subsequent synchronization between different data streams. Next, the video composition server may divide the timestamp of an adjusted audio/video frame by the time-length that the frame lasts, and round down the result, to generate a position index of the resampled audio/video frame.
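

For illustration only, the offset calculation Off=Tcurr−Tstart−TSin and the index regeneration may be sketched as follows:

    def timestamp_offset(service_start_time, first_frame_timestamp, first_frame_store_time):
        """Off = Tcurr - Tstart - TSin: shift stream timestamps onto the service clock."""
        return first_frame_store_time - service_start_time - first_frame_timestamp

    def regenerate_position_index(frame_timestamp, offset, frame_duration):
        """Adjust the timestamp by the stream's offset and regenerate its position index."""
        adjusted_ts = frame_timestamp + offset
        return int(adjusted_ts // frame_duration), adjusted_ts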


Optionally, the audio data and the video data of the target data stream may be separately stored in the second storage space. Correspondingly, part of the process of Step 104 may be as follows: dividing the second storage space into an audio storage space and a video storage space, each of which includes a preset number of cache positions; and respectively storing the resampled audio frames and the resampled video frames of the audio/video data into the audio storage space and the video storage space according to the position indexes of the audio/video frames.


In one implementation, the video composition server may divide the second storage space into an audio storage space and a video storage space, each of which includes a preset number of cache positions. Each cache position is configured to store one audio frame or one video frame. The structures and properties of the audio storage space and the video storage space are identical. Afterwards, the video composition server may respectively store the resampled audio frames and the resampled video frames of the target data stream into the audio storage space and the video storage space according to the position indexes of the audio/video frames. Specific details of the above process may be seen in FIG. 5, where the lower rectangular grids correspond to the cache positions in the audio storage space or the video storage space, the values inside the rectangular grids are the cache positions in the cache space, the upper parallelograms represent the audio frames or the video frames, and the numbers inside the parallelograms are the position indexes corresponding to the frames. When there are N cache positions, the remainder obtained by dividing the position index of each resampled audio frame or video frame by N may be used as the storage position. It can be seen that as the position index increases, the frames added to the second storage space cyclically overwrite earlier frames, and frames whose position indexes differ by a multiple of N are located at the same cache position.
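

For illustration only, the cyclic placement may be sketched as follows (one such structure per audio or video track; names are hypothetical):

    class SecondLevelCache:
        """Sketch of one track (audio or video) of the second storage space."""

        def __init__(self, num_positions):
            self.num_positions = num_positions       # N cache positions
            self.slots = [None] * num_positions      # each slot holds (position_index, frame)

        def store(self, position_index, frame):
            # Frames whose position indexes differ by a multiple of N share a cache
            # position, so newer frames cyclically overwrite older ones.
            self.slots[position_index % self.num_positions] = (position_index, frame)

        def fetch(self, position_index):
            entry = self.slots[position_index % self.num_positions]
            if entry is not None and entry[0] == position_index:
                return entry[1]
            return None                              # no frame stored for this index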


Step 105: periodically extract audio/video frames from the second storage space according to the position indexes, and combine the extracted audio/video frames with audio/video frames of other data streams.


In one implementation, after storing the resampled audio/video frames of the target data stream in the second storage space, the video composition server may periodically extract the audio/video frames at a certain time interval from the second storage space according to the position indexes of the resampled audio/video frames, and combine the extracted audio/video frames with audio/video frames of other data streams. Since the video frames from different data streams have the same lasting time-length, and the audio frames from different data streams also have the same sampling rate, number of audio tracks, number of sampling points, etc., the video frames or audio frames extracted at any one time by the video composition server from multiple different data streams in the second storage space are frame data that logically correspond to the same time point. Accordingly, synchronization between different data streams may be achieved, which facilitates the further video-combining process, such as video image combination, audio volume adjustment, and audio mixing.


Optionally, the audio/video frames may be extracted from the second storage space by referring to the current time. Correspondingly, the specific process of Step 105 may be as follows: determining a standard frame time-length of the audio/video frames based on the preset audio/video output parameters; periodically determining, at an interval of the standard frame time-length, a to-be-extracted index according to the current time and the standard frame time-length; and extracting an audio/video frame, whose position index is the to-be-extracted index, from the second storage space.


In one implementation, in the process of extracting a resampled audio/video frame from the second storage space, the video composition server may first determine a standard frame time-length of the audio/video frames based on the preset audio/video output parameters, and then use the standard frame time-length as a time interval to periodically determine a to-be-extracted index according to the current time and the standard frame time-length. Here, the to-be-extracted index may be the current time divided by the standard frame time-length, rounded down. Afterwards, the video composition server may extract an audio/video frame, whose position index is the to-be-extracted index, from the second storage space.
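

For illustration only, the periodic extraction may be sketched as follows (the timer loop and the fetch interface are assumptions; SecondLevelCache refers to the earlier sketch):

    import time

    def extraction_loop(cache, combine, standard_frame_duration, should_stop):
        """Periodically compute the to-be-extracted index and pull that frame."""
        while not should_stop():
            index_to_extract = int(time.time() // standard_frame_duration)
            frame = cache.fetch(index_to_extract)    # None if no frame has this index
            combine(index_to_extract, frame)
            time.sleep(standard_frame_duration)      # one extraction per frame interval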


Optionally, if a certain audio/video frame cannot be extracted from the second storage space, a frame-replenishing process may be performed. The corresponding process may be as follows: if there is no corresponding audio frame in the second storage space whose position index is the to-be-extracted index, creating a silent audio frame; and if there is no corresponding video frame in the second storage space whose position index is the to-be-extracted index, copying the most recently extracted video frame.


In one implementation, when no corresponding frame may be obtained from the second storage space according to the position index, the video composition server may perform a frame-replenishing process. For video frames, the video frame most recently extracted from the second storage space may be copied as the currently acquired video frame. For audio frames, a silent audio frame may be created as the currently acquired audio frame, and the number of sampling points of the audio frame is equal to the preset number of sampling points.
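

For illustration only, the frame-replenishing rule may be sketched as follows (names are hypothetical):

    def replenish(kind, frame, last_extracted_video_frame, samples_per_frame=1024):
        """Fill in a missing frame: silence for audio, the last extracted frame for video."""
        if frame is not None:
            return frame
        if kind == "audio":
            return [0] * samples_per_frame           # silent frame with the preset number of sampling points
        return last_extracted_video_frame            # copy the most recently extracted video frame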


It should be noted that, when network fluctuations in the data stream cause all the frame data in the first storage space or the second storage space to be consumed, two behaviors are possible once the data input returns to normal. If it is desired that the picture of the data stream at the moment the input recovers be connected to the picture at which the screen got stuck, so that playback continues seamlessly, then, after the data has been fully consumed for a certain period, the aforementioned timestamp of the initial audio frame, timestamp of the initial video frame, and the associated physical time at which decoding started, as well as the timestamp of the first audio/video frame in the second storage space and the storing time corresponding to that frame, may be regenerated. If the latest picture of the data stream is to be played directly, no further action is required.


In the embodiments of the present disclosure, the audio/video data of a target data stream is acquired, and the audio/video data is stored in the first storage space. According to the order of the timestamps of the audio/video frames of the audio/video data, the audio/video frames of the audio/video data are read and decoded from the first storage space. Based on the preset audio/video output parameters, the decoded audio/video frames are subjected to the resampling process. Position indexes are generated according to the timestamps of the resampled audio/video frames, and the resampled audio/video frames are stored in the second storage space based on the position indexes. The audio/video frames are periodically extracted from the second storage space according to the position indexes, and the extracted audio/video frames are combined with audio/video frames of other data streams. In this way, the video composition server mitigates the influence of network fluctuations through two levels of data cache. At the same time, by introducing the resampling technology, and by storing and retrieving the audio/video data using timestamps and position indexes, synchronization of multiple data streams may be accomplished, and the process of combining data streams may be achieved more conveniently and efficiently.


In addition, between the stream-pulling process and the combining process of each data stream, an intersection occurs only in the second storage space, and there is no additional dependency or control between them. For example, if 4 channels of data streams need to be pulled and combined into a four-window picture mode, the 4 channels of data streams may be independently pulled, decoded, and placed into their respective second-level caches. Relatively independent inputs and operations may thus be ensured for each data stream, so that problems such as stream-pulling jams occurring in one channel will not cause input problems in the other data streams. Therefore, the methods provided by the present disclosure may decouple the stream-pulling process from the combining process for each of the various input data streams, which reduces the influence of the stream-pulling process on the combining process. Even if there is a problem in the stream-pulling process for one channel of data stream, or there is a problem with the data stream itself, it will not affect other channels of data streams, thereby ensuring the stability of the combined pictures and sound.
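

The following sketch merely illustrates this decoupling under the assumption that each channel runs in its own worker thread and writes into its own queue-based second-level cache; the threading model and the simulated frame payloads are implementation choices of the sketch, not part of the disclosure.

```python
import queue
import threading
import time


def stream_pipeline(stream_id, cache, n_frames=50):
    """Stand-in for one channel's pull/decode/resample pipeline (illustrative).

    Each channel writes only into its own second-level cache (a queue here),
    so a stall in one channel cannot block the others.
    """
    for index in range(n_frames):
        time.sleep(0.01)  # simulated pull + decode + resample work
        cache.put((index, f"frame-{stream_id}-{index}"))


# Four independent channels, e.g. for a four-window picture mode:
caches = [queue.Queue() for _ in range(4)]
workers = [threading.Thread(target=stream_pipeline, args=(i, caches[i]), daemon=True)
           for i in range(4)]
for w in workers:
    w.start()
```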


Based on a similar technical concept, the embodiments of the present disclosure further provide a device for combining audio/video data streams. As shown in FIG. 6, the device includes:


a first storage module 601 that is configured to acquire audio/video data of a target data stream, and store the audio/video data in a first storage space;


a decoding module 602 that is configured to read and decode audio/video frames of the audio/video data from the first storage space according to an order of timestamps of the audio/video frames of the audio/video data;


a resampling module 603 that is configured to resample the decoded audio/video frames based on preset audio/video output parameters;


a second storage module 604 that is configured to generate position indexes according to timestamps of the resampled audio/video frames, and store the resampled audio/video frames in a second storage space through the position indexes; and


a combining module 605 that is configured to periodically extract the audio/video frames from the second storage space according to the position indexes, and combine the extracted audio/video frames with audio/video frames of other data streams.


Optionally, the first storage module 601 is configured to:


store the audio data and the video data included in the audio/video data in an audio storage space and a video storage space of the first storage space, respectively.


Optionally, the first storage module 601 is configured to:


if the target data stream is a live stream, in the process of storing the audio/video data in the first storage space, when the first storage space is full, delete the audio/video data stored earliest in the first storage space, and continue to store the audio/video data; and


if the target data stream is an on-demand stream, in the process of storing the audio/video data in the first storage space, when the first storage space is full, wait for the audio/video data stored in the first storage space to finish streaming, and then continue to store the audio/video data.
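

The two overflow policies just described may, for example, be sketched as follows; counting capacity in buffered frames and using a condition variable for the on-demand blocking are assumptions of this sketch.

```python
import threading
from collections import deque


class FirstStorage:
    """Illustrative first storage space with the two overflow policies above.

    capacity counts buffered frames; 'live' drops the earliest stored data when
    full, 'on_demand' makes the writer wait until the reader has drained data.
    """

    def __init__(self, capacity, mode="live"):
        self.buf = deque()
        self.capacity = capacity
        self.mode = mode
        self.not_full = threading.Condition()

    def put(self, frame):
        with self.not_full:
            if len(self.buf) >= self.capacity:
                if self.mode == "live":
                    self.buf.popleft()          # delete the earliest stored data
                else:                           # on-demand stream
                    while len(self.buf) >= self.capacity:
                        self.not_full.wait()    # wait for buffered data to be consumed
            self.buf.append(frame)

    def get(self):
        with self.not_full:
            frame = self.buf.popleft() if self.buf else None
            self.not_full.notify_all()
            return frame
```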


Optionally, the first storage module 601 is further configured to:


determine a size of the first storage space according to a preset maximum playback delay and a maximum network delay; and


in the process of acquiring the audio/video data of the target data stream, adjust the size of the first storage space according to detected playback delay requirement and/or real-time network delay.
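

As an illustration, the sizing and subsequent adjustment described above could be approximated as below; converting the two delays into a frame count via a frame rate is an assumption of this sketch, since the embodiment only requires that the size be derived from the delays.

```python
def first_storage_capacity(playback_delay_s, network_delay_s, frame_rate):
    """Capacity (in frames) covering the given playback and network delays."""
    return max(1, int((playback_delay_s + network_delay_s) * frame_rate))


# Initial size from the preset maxima, later recomputed from detected values:
size = first_storage_capacity(playback_delay_s=3.0, network_delay_s=1.0, frame_rate=25)
size = first_storage_capacity(playback_delay_s=2.0, network_delay_s=0.4, frame_rate=25)
```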


Optionally, the decoding module 602 is further configured to:


periodically detect a timestamp-based time-length of the audio/video frames of the audio/video data read and decoded from the first storage space; and


if the timestamp-based time-length is greater than a product of a preset playing rate and a physical time-length used for reading and decoding the audio/video data, then suspend reading the audio/video frames of the audio/video data from the first storage space during a current cycle.
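

A sketch of this read-rate check follows; measuring the physical time-length with a monotonic clock and expressing both lengths in milliseconds are assumptions of the sketch.

```python
import time


def should_suspend_reading(ts_length_ms, decode_start_monotonic, playing_rate=1.0):
    """Return True if decoding has run ahead and reading should be suspended
    for the current cycle.

    ts_length_ms            : timestamp-based time-length of the frames decoded so far
    decode_start_monotonic  : physical time (time.monotonic()) at which decoding began
    playing_rate            : preset playing rate (1.0 = real time)
    """
    physical_ms = (time.monotonic() - decode_start_monotonic) * 1000.0
    return ts_length_ms > playing_rate * physical_ms
```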


Optionally, the decoding module 602 is configured to:


record a timestamp of the initial audio frame and a timestamp of the initial video frame of the audio/video data read and decoded from the first storage space;


periodically detect a timestamp of the latest audio frame and a timestamp of the latest video frame of the audio/video data read and decoded from the first storage space; and


determine a difference between a smaller value of the timestamp of the latest audio frame and the timestamp of the latest video frame and a smaller value of the timestamp of the initial audio frame and the timestamp of the initial video frame as the timestamp-based time-length of the audio/video frames of the audio/video data.
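

This computation reduces to a one-line rule, sketched below for clarity.

```python
def timestamp_based_length(init_audio_ts, init_video_ts, latest_audio_ts, latest_video_ts):
    """Timestamp-based time-length of the decoded frames, per the rule above:
    min(latest audio, latest video) minus min(initial audio, initial video)."""
    return min(latest_audio_ts, latest_video_ts) - min(init_audio_ts, init_video_ts)
```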


Optionally, the decoding module 602 is configured to:


detect a timestamp of the latest audio frame and a timestamp of the latest video frame of the decoded audio/video data;


if the timestamp of the latest audio frame is greater than or equal to the timestamp of the latest video frame, read and decode the video frames of the audio/video data from the video storage space according to an order of the timestamps of the video frames; and


if the timestamp of the latest audio frame is less than the timestamp of the latest video frame, read and decode the audio frames of the audio/video data from the audio storage space according to an order of the timestamps of the audio frames.
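

A sketch of this alternating read decision is shown below.

```python
def next_read_target(latest_audio_ts, latest_video_ts):
    """Decide which sub-stream to read next so audio and video advance together:
    read video while its latest timestamp lags the audio, otherwise read audio."""
    return "video" if latest_audio_ts >= latest_video_ts else "audio"
```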


Optionally, the resampling module 603 is configured to:


determine, based on a preset standard video frame rate and a timestamp of a decoded video frame, a position index corresponding to the decoded video frame; and


update the timestamp of the decoded video frame according to the position index.


Optionally, the resampling module 603 is further configured to:


if there are a plurality of decoded video frames that correspond to a same position index, retain the last video frame of the plurality of decoded video frames, and delete the other video frames of the plurality of decoded video frames; and


if position indexes corresponding to the decoded video frames are discontinuous, determine all vacant position indexes, and copy a video frame corresponding to an adjacent position index of a vacant position index as a video frame corresponding to the vacant position index.
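

The video-frame resampling described in the two preceding passages may be illustrated as follows; the mapping index = round(timestamp × frame rate / 1000) and the millisecond timestamps are assumptions of this sketch rather than a prescribed formula.

```python
def resample_video(frames, fps, ts_unit_ms=1000.0):
    """Map decoded video frames onto a fixed-rate index grid (illustrative).

    frames: list of (timestamp_ms, frame) in decoding order. When several frames
    share an index, the last one is kept; vacant indexes are filled by copying
    the frame at the adjacent (preceding) index. Each retained frame's timestamp
    is then updated from its position index.
    """
    indexed = {}
    for ts, frame in frames:
        index = round(ts * fps / ts_unit_ms)
        indexed[index] = frame                     # later frames overwrite earlier ones
    if not indexed:
        return {}
    lo, hi = min(indexed), max(indexed)
    for index in range(lo, hi + 1):
        if index not in indexed:
            indexed[index] = indexed[index - 1]    # copy an adjacent frame into the gap
    return {index: (index * ts_unit_ms / fps, frame) for index, frame in indexed.items()}
```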


Optionally, the resampling module 603 is configured to:


convert the decoded audio frames based on a preset audio sampling rate and the number of audio tracks; and


split and reorganize the converted audio frames according to a preset number of sampling points, and determine a timestamp of the first sampling point of a reorganized audio frame as a timestamp of the reorganized audio frame.


Optionally, the resampling module 603 is further configured to:


determine, according to the preset audio sampling rate and number of sampling points, and the timestamp of the reorganized audio frame, a position index corresponding to a resampled audio frame;


if there are a plurality of resampled audio frames that correspond to a same position index, retain the audio frame, among the plurality of resampled audio frames, that is last subjected to the resampling process, and delete the other audio frames of the plurality of resampled audio frames; and


if position indexes corresponding to the resampled audio frames are discontinuous, determine all vacant position indexes, and set a silent audio frame for a vacant position index.
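

Similarly, the audio splitting and index-filling steps described above may be illustrated as follows; the index mapping, the millisecond timestamps, and the integer-sample representation of silence are assumptions of the sketch (sample-rate and track conversion is assumed to have been done already).

```python
def split_audio(samples, ts_ms, sample_rate, samples_per_frame):
    """Split converted audio samples into frames of a preset number of sampling
    points; the first sampling point's timestamp becomes the frame timestamp."""
    frames = []
    for i in range(0, len(samples) - samples_per_frame + 1, samples_per_frame):
        frame_ts = ts_ms + i * 1000.0 / sample_rate
        frames.append((frame_ts, samples[i:i + samples_per_frame]))
    return frames


def audio_position_index(frame_ts_ms, sample_rate, samples_per_frame):
    """One plausible index mapping: the number of whole frames preceding this timestamp."""
    frame_ms = samples_per_frame * 1000.0 / sample_rate
    return round(frame_ts_ms / frame_ms)


def place_audio(frames, sample_rate, samples_per_frame):
    """Keep the last frame per index and fill vacant indexes with silence."""
    indexed = {}
    for ts, samples in frames:
        indexed[audio_position_index(ts, sample_rate, samples_per_frame)] = samples
    if indexed:
        silent = [0] * samples_per_frame
        for index in range(min(indexed), max(indexed) + 1):
            indexed.setdefault(index, silent)
    return indexed
```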


Optionally, the second storage module 604 is configured to:


determine a timestamp offset of the target data stream according to a service startup time, a timestamp of the first audio/video frame in the second storage space, and a storing time corresponding to the first audio/video frame;


adjust a timestamp of a resampled audio/video frame according to the timestamp offset; and


regenerate a position index of the resampled audio/video frame based on the adjusted timestamp of the audio/video frame.
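

One plausible reading of this offset computation, assuming all quantities are expressed in milliseconds and the storing time is measured on the same clock as the service startup time, is sketched below; the exact combination of the three inputs is an assumption of the sketch.

```python
def timestamp_offset(service_start_ms, first_frame_ts_ms, first_frame_store_ms):
    """Re-base stream timestamps onto the service clock (illustrative): the first
    frame's storing time measured from service startup, minus its own timestamp."""
    return (first_frame_store_ms - service_start_ms) - first_frame_ts_ms


def rebase(frame_ts_ms, offset_ms, frame_period_ms):
    """Adjust a resampled frame's timestamp and regenerate its position index."""
    adjusted = frame_ts_ms + offset_ms
    return adjusted, int(adjusted // frame_period_ms)
```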


Optionally, the second storage module 604 is configured to:


divide the second storage space into an audio storage space and a video storage space that respectively includes a preset number of cache positions; and


respectively store the resampled audio frames and the resampled video frames of the audio/video data into the audio storage space and the video storage space according to the position indexes of the audio/video frames.
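

As an illustration, the second storage space may be pictured as two fixed-size arrays of cache positions addressed by position index; the modulo addressing and the slot-validity check are assumptions of this sketch.

```python
class SecondStorage:
    """Illustrative second storage space: separate audio and video areas, each
    with a preset number of cache positions addressed by index modulo size."""

    def __init__(self, positions=256):
        self.positions = positions
        self.audio = [None] * positions
        self.video = [None] * positions

    def store(self, kind, index, frame):
        area = self.audio if kind == "audio" else self.video
        area[index % self.positions] = (index, frame)

    def load(self, kind, index):
        area = self.audio if kind == "audio" else self.video
        slot = area[index % self.positions]
        # only return the frame if the slot really holds this index (not a stale one)
        return slot[1] if slot and slot[0] == index else None
```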


Optionally, the combining module 605 is configured to:


determine a standard frame time-length of the audio/video frames based on the preset audio/video output parameters;


periodically determine, at an interval of the standard frame time-length, a to-be-extracted index according to a current time and the standard frame time-length; and


extract, from the second storage space, an audio/video frame whose position index is the to-be-extracted index.


Optionally, the combining module 605 is further configured to:


create a silent audio frame if there is no corresponding audio frame, in the second storage space, whose position index is the to-be-extracted index; and


copy the most recently extracted video frame if there is no corresponding video frame, in the second storage space, whose position index is the to-be-extracted index.


In the embodiments of the present disclosure, the audio/video data of a target data stream is acquired, and the audio/video data is stored in the first storage space. According to the order of the timestamps of the audio/video frames of the audio/video data, the audio/video frames of the audio/video data are read and decoded from the first storage space. Based on the preset audio/video output parameters, the decoded audio/video frames are subjected to the resampling process. Position indexes are generated according to the timestamps of the resampled audio/video frames, and the resampled audio/video frames are stored in the second storage space based on the position indexes. The audio/video frames are periodically extracted from the second storage space according to the position indexes, and the extracted audio/video frames are combined with audio/video frames of other data streams. In this way, the video composition server mitigates the influence of network fluctuations through two levels of data caching. At the same time, by introducing the resampling technology, and by storing and retrieving the audio/video data using timestamps and position indexes, synchronization of multiple data streams may be accomplished, and the process of combining data streams may be achieved more conveniently and efficiently. The methods provided by the present disclosure may decouple the stream-pulling process from the combining process for each of the various input data streams, which reduces the influence of the stream-pulling process on the combining process. Even if there is a problem in the stream-pulling process for one channel of data stream, or there is a problem with the data stream itself, it will not affect other channels of data streams, thereby ensuring the stability of the combined pictures and sound.


It should be noted that, when the device provided by the foregoing embodiments combines the audio/video data streams, the division into the foregoing functional modules is merely illustrative. In real applications, the foregoing functions may be allocated to different functional modules as needed. That is, the internal structure of the device may be divided into different functional modules to achieve all or part of the functions described above. In addition, the device for combining the audio/video data streams provided by the foregoing embodiments and the method for combining the audio/video data streams are based on the same concept. For the specific implementation process of the device, reference may be made to the method embodiments, details of which will not be described again here.



FIG. 7 is a schematic structural diagram of a video composition server according to some embodiments of the present disclosure. The video composition server 700 may vary considerably depending on its configuration or performance, and may include one or more central processing units 722 (e.g., one or more processors), a memory 732, and one or more storage media 730 (e.g., one or more mass storage devices) for storing applications 772 or data 777. The memory 732 and the storage media 730 may each be a volatile or non-volatile storage device. The programs stored on the storage media 730 may include one or more modules (not shown in the figure), each of which may include a series of operation instructions for the video composition server. Further, the central processing unit(s) 722 may be configured to communicate with the storage media 730, and to perform, on the video composition server 700, the series of operation instructions stored in the storage media 730.


The video composition server 700 may also include one or more power supplies 726, one or more wired or wireless network interfaces 750, one or more input and output interfaces 758, one or more keyboards 756, and/or one or more operating systems 771, such as Windows Server™, Mac OS X™, Unix™, Linux™, FreeBSD™, and the like.


The video composition server 700 may include a memory, and one or more programs, where the one or more programs are stored in the memory and configured to be executed by one or more processors. The one or more programs include the above-described instructions for combining the audio/video data streams.


A person skilled in the art may understand that all or part of the steps of the above embodiments may be implemented in hardware, or implemented by a program instructing relevant hardware. The program may be stored in a computer-readable storage medium. The storage medium may be a read-only memory, a magnetic disk, an optical disk, or the like.


Although the present disclosure has been described above with reference to preferred embodiments, these embodiments are not to be construed as limiting the present disclosure. Any modifications, equivalent replacements, and improvements made without departing from the spirit and principle of the present disclosure shall fall within the scope of protection of the present disclosure.

Claims
  • 1. A method for combining audio/video data streams, comprising: acquiring audio/video data of a target data stream, and storing the audio/video data in a first storage space; reading and decoding audio/video frames of the audio/video data from the first storage space according to an order of timestamps of the audio/video frames of the audio/video data; resampling the decoded audio/video frames based on preset audio/video output parameters; generating position indexes according to timestamps of the resampled audio/video frames, and storing the resampled audio/video frames in a second storage space through the position indexes; and periodically extracting the audio/video frames from the second storage space according to the position indexes, and combining the extracted audio/video frames with audio/video frames of other data streams.
  • 2. The method according to claim 1, wherein storing the audio/video data in the first storage space further includes: storing the audio data and the video data included in the audio/video data in an audio storage space and a video storage space of the first storage space, respectively.
  • 3. The method according to claim 1, further comprising: if the target data stream is a live stream, in the process of storing the audio/video data in the first storage space, when the first storage space is full, deleting the audio/video data stored earliest in the first storage space, and continuing to store the audio/video data; and if the target data stream is an on-demand stream, in the process of storing the audio/video data in the first storage space, when the first storage space is full, waiting for the audio/video data stored in the first storage space to finish streaming, and then continuing to store the audio/video data.
  • 4. The method according to claim 1, further comprising: determining a size of the first storage space according to a preset maximum playback delay and a maximum network delay; and in the process of acquiring the audio/video data of the target data stream, adjusting the size of the first storage space according to detected playback delay requirement and/or real-time network delay.
  • 5. The method according to claim 1, further comprising: periodically detecting a timestamp-based time-length of the audio/video frames of the audio/video data read and decoded from the first storage space; and if the timestamp-based time-length is greater than a product of a preset playing rate and a physical time-length used for reading and decoding the audio/video data, then suspending reading the audio/video frames of the audio/video data from the first storage space during a current cycle.
  • 6. The method according to claim 5, wherein periodically detecting the timestamp-based time-length of the audio/video frames of the audio/video data read and decoded from the first storage space further includes: recording a timestamp of the initial audio frame and a timestamp of the initial video frame of the audio/video data read and decoded from the first storage space; periodically detecting a timestamp of the latest audio frame and a timestamp of the latest video frame of the audio/video data read and decoded from the first storage space; and determining a difference between a smaller value of the timestamp of the latest audio frame and the timestamp of the latest video frame and a smaller value of the timestamp of the initial audio frame and the timestamp of the initial video frame as the timestamp-based time-length of the audio/video frames of the audio/video data.
  • 7. The method according to claim 2, wherein reading and decoding the audio/video frames of the audio/video data from the first storage space in an order of the timestamps of the audio/video frames of the audio/video data further includes: detecting a timestamp of the latest audio frame and a timestamp of the latest video frame of the decoded audio/video data; if the timestamp of the latest audio frame is greater than or equal to the timestamp of the latest video frame, reading and decoding the video frames of the audio/video data from the video storage space according to an order of the timestamps of the video frames; and if the timestamp of the latest audio frame is less than the timestamp of the latest video frame, reading and decoding the audio frames of the audio/video data from the audio storage space according to an order of the timestamps of the audio frames.
  • 8. The method according to claim 1, wherein resampling the decoded audio/video frames based on the preset audio/video output parameters further includes: determining, based on a preset standard video frame rate and a timestamp of a decoded video frame, a position index corresponding to the decoded video frame; and updating the timestamp of the decoded video frame according to the position index.
  • 9. The method according to claim 8, further comprising: if there are a plurality of decoded video frames that correspond to a same position index, retaining the last video frame of the plurality of decoded video frames, and deleting the other video frames of the plurality of decoded video frames; and if position indexes corresponding to the decoded video frames are discontinuous, determining all vacant position indexes, and copying a video frame corresponding to an adjacent position index of a vacant position index as a video frame corresponding to the vacant position index.
  • 10. The method according to claim 1, wherein resampling the decoded audio/video frames based on the preset audio/video output parameters further includes: converting the decoded audio frames based on a preset audio sampling rate and the number of audio tracks; and splitting and reorganizing the converted audio frames according to a preset number of sampling points, and determining a timestamp of the first sampling point of a reorganized audio frame as a timestamp of the reorganized audio frame.
  • 11. The method according to claim 10, further comprising: determining, according to the preset audio sampling rate and number of sampling points, and the timestamp of the reorganized audio frame, a position index corresponding to a resampled audio frame; if there are a plurality of resampled audio frames that correspond to a same position index, retaining an audio frame of the plurality of resampled audio frames that are last subjected to the resampling process, and deleting the other audio frames of the plurality of resampled audio frames; and if position indexes corresponding to the resampled audio frames are discontinuous, determining all vacant position indexes, and setting a silent audio frame for a vacant position index.
  • 12. The method according to claim 1, wherein generating the position indexes according to the timestamps of the resampled audio/video frames further includes: determining a timestamp offset of the target data stream according to a service startup time, and a timestamp of the first audio/video frame in the second storage space and a storing time corresponding to the first audio/video frame; adjusting a timestamp of a resampled audio/video frame according to the timestamp offset; and regenerating a position index of the resampled audio/video frame based on the adjusted timestamp of the audio/video frame.
  • 13. The method according to claim 12, wherein storing the resampled audio/video frames in the second storage space through the position indexes further includes: dividing the second storage space into an audio storage space and a video storage space that respectively includes a preset number of cache positions; and respectively storing the resampled audio frames and the resampled video frames of the audio/video data into the audio storage space and the video storage space according to the position indexes of the audio/video frames.
  • 14. The method according to claim 1, wherein periodically extracting the audio/video frames from the second storage space according to the position indexes further includes: determining a standard frame time-length of the audio/video frames based on the preset audio/video output parameters; periodically determining, at an interval of the standard frame time-length, a to-be-extracted index according to a current time and the standard frame time-length; and extracting, from the second storage space, an audio/video frame whose position index is the to-be-extracted index.
  • 15. The method according to claim 14, further comprising: creating a silent audio frame if there is no corresponding audio frame, in the second storage space, whose position index is the to-be-extracted index; and copying the most recently extracted video frame if there is no corresponding video frame, in the second storage space, whose position index is the to-be-extracted index.
  • 16. A device for combining audio/video data streams, comprising: a first storage module that is configured to acquire audio/video data of a target data stream, and store the audio/video data in a first storage space; a decoding module that is configured to read and decode audio/video frames of the audio/video data from the first storage space according to an order of timestamps of the audio/video frames of the audio/video data; a resampling module that is configured to resample the decoded audio/video frames based on preset audio/video output parameters; a second storage module that is configured to generate position indexes according to timestamps of the resampled audio/video frames, and store the resampled audio/video frames in a second storage space through the position indexes; and a combining module that is configured to periodically extract the audio/video frames from the second storage space according to the position indexes, and combine the extracted audio/video frames with audio/video frames of other data streams.
  • 17. The device according to claim 16, wherein the first storage module is further configured to: store the audio data and the video data included in the audio/video data in an audio storage space and a video storage space of the first storage space, respectively.
  • 18. The device according to claim 16, wherein the first storage module is further configured to: if the target data stream is a live stream, in the process of storing the audio/video data in the first storage space, when the first storage space is full, delete the audio/video data stored earliest in the first storage space, and continue to store the audio/video data; and if the target data stream is an on-demand stream, in the process of storing the audio/video data in the first storage space, when the first storage space is full, wait for the audio/video data stored in the first storage space to finish streaming, and then continue to store the audio/video data.
  • 19. The device according to claim 16, wherein the first storage module is further configured to: determine a size of the first storage space according to a preset maximum playback delay and a maximum network delay; and in the process of acquiring the audio/video data of the target data stream, adjust the size of the first storage space according to detected playback delay requirement and/or real-time network delay.
  • 20. The device according to claim 16, wherein the decoding module is further configured to: periodically detect a timestamp-based time-length of the audio/video frames of the audio/video data read and decoded from the first storage space; and if the timestamp-based time-length is greater than a product of a preset playing rate and a physical time-length used for reading and decoding the audio/video data, suspend reading the audio/video frames of the audio/video data from the first storage space during a current cycle.
  • 21.-30. (canceled)
Priority Claims (1)
Number Date Country Kind
201810517553.2 May 2018 CN national
PCT Information
Filing Document Filing Date Country Kind
PCT/CN2018/091206 6/14/2018 WO 00