The present disclosure relates generally to the field of multiplexing of audio/video data, and, more specifically, to the field of multiplexing of audio/video data for transport streams.
With the rapid growth of mobile computing devices, such as laptops, smart phones and touchpads, it has become increasingly popular to use these devices as source devices to stream multimedia data, including audio and video data, through WiFi networks for real-time playback on a remote display of a sink device. In particular, the development of Miracast wireless technology enables such devices to stream videos, movies, games, and webpages to external high definition displays without the mediation of a wireless access point. Wi-Fi Direct allows source and display devices to discover one another and provides the underlying device-to-device connectivity for Miracast. Miracast builds upon Wi-Fi Direct with mechanisms to negotiate video capabilities, set up content protection (if needed), stream content, and maintain the video session.
The transmission of audio and video data generally involves capturing data from various devices, such as screen capture from a computer or audio/video capture from cameras, and multiplexing the audio and video data together to form packets that are then transmitted through a communication network in the form of transport streams (TS), such as defined in an MPEG format.
Conventionally, transport streams were designed for broadcast or offline applications, and the associated multiplexers were designed to process data for such purposes, e.g., through satellite networks or cable networks, where latency rarely poses a concern. However, in the context of real-time playback of streamed audio/video, latency between data capture on the source device and playback on the sink device can cause a conspicuous discontinuity of playback on the sink display and is thus problematic.
It would be advantageous to provide a mechanism to reduce latency in transport stream multiplexing for real-time audio/video stream playback over a WiFi network. Accordingly, embodiments of the present disclosure employ a transport stream multiplexer that is capable of synchronizing the playback of audio/video data on a sink device with the generation of interleaved audio/video packets by virtue of adaptive packet dropping/throttling processes. A virtual presentation clock reference (PCR) representing the scheduled transmission time of a transport stream packet is calculated based on the network transmission rate and the generation of the transport stream packets. The virtual PCR is compared with the corresponding system PCR to derive a time difference. If the time difference indicates that the generation of the transport stream lags behind at the transport stream multiplexer, the multiplexer is capable of selectively dropping packets and incrementing the virtual PCR accordingly until the virtual PCR and the system PCR become contemporaneous. If the time difference indicates that the generation of the transport stream is faster than the transmission thereof, the multiplexer is capable of throttling packet generation accordingly until the virtual PCR and the system PCR become contemporaneous, which may also save power. Thereby, the transmission latency can be reduced and the real-time playback at a sink device can maintain continuity and synchronization with the audio/video capture.
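For illustration only, the following C++ sketch (the disclosure mentions C++ as one possible implementation language) shows how the time difference between the virtual PCR and the system PCR could map to a drop, throttle, or normal-send decision. The function name, the Action enumeration, and the threshold value are assumptions introduced for this example rather than part of the disclosed multiplexer.

```cpp
// A minimal sketch of the clock comparison driving the adaptive
// dropping/throttling decision; names and threshold are illustrative only.
#include <cstdint>

enum class Action { Drop, Throttle, SendNormally };

// Decide how the multiplexer reacts to the difference between the virtual PCR
// (scheduled transmission time) and the system PCR (actual transmission time).
Action decide(int64_t virtualPcr, int64_t systemPcr,
              int64_t thresholdTicks = 900 /* ~10 ms at the 90 kHz PCR base */) {
    const int64_t diff = virtualPcr - systemPcr;
    if (diff < -thresholdTicks) return Action::Drop;      // generation lags: drop payloads
    if (diff > thresholdTicks)  return Action::Throttle;  // generation runs ahead: throttle
    return Action::SendNormally;                          // clocks contemporaneous
}
```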
In one embodiment of the present disclosure, a method of transmitting an audio/video mixed signal over a communication network comprises: (1) accessing a packetized stream comprising audio payloads and video payloads; (2) assigning the audio payloads to an audio queue and the video payloads to a video queue; (3) converting the packetized stream into a transport stream comprising a plurality of packets that comprise interleaved audio packets and video packets, wherein each of the plurality of packets comprises a transport header and a payload; (4) deriving a virtual clock reference based on a transmission bandwidth through the communication network with respect to the plurality of packets; (5) deriving a time difference between the virtual clock reference and a system clock reference with respect to transmission of the plurality of packets through the communication network; and (6) if the time difference indicates that the virtual clock reference falls behind the system clock reference by at least a threshold, selectively dropping a number of payloads until the virtual clock reference and the system clock reference are determined to be synchronous. The method may further comprise exclusively dropping payloads from selected video packets before dropping an audio payload, avoiding dropping a video payload that comprises a reference video frame, and avoiding dropping a payload that comprises a packetized elementary stream (PES) header. The method may further comprise incrementing the virtual clock reference each time a payload is dropped or a packet is transmitted, and incrementing the system clock reference each time a packet is transmitted. If the virtual clock reference is ahead of the system clock reference, the transmission may be suspended. The method may further comprise sending a physical null packet or a virtual null packet while the audio and video queues are empty and incrementing the virtual clock reference accordingly.
In another embodiment of the present disclosure, a non-transitory computer-readable storage medium comprises instructions for transmitting signals over a wireless network, the instructions for: (1) receiving a packetized elementary stream (PES) comprising audio payloads and video payloads; (2) enqueuing the audio payloads in an audio queue and the video payloads in a video queue; (3) inserting a header to the PES; (4) packetizing the PES into a plurality of transport packets that comprise multiplexed audio packets and video packets, wherein each of the plurality of transport packets comprises a transport header and a payload; (5) calculating a virtual clock reference based on a transmission bandwidth of the wireless network; (6) deriving a time difference between the virtual clock reference and a system clock reference; and (7) if the time difference indicates that the virtual clock reference falls behind the system clock reference by at least a threshold, selectively dropping a number of payloads until the virtual clock reference and the system clock reference are determined to be synchronous.
In another embodiment of the present disclosure, a device is operable to transmit audio/video payloads to a remote computing device through a communication network, the device comprising: a processor; a network circuit enabling the device to access the communication network; and a memory coupled with the network circuit, the memory operable to store instructions that, when executed by the processor, perform a method of: (1) accessing a packetized stream comprising audio payloads and video payloads; (2) enqueuing the audio payloads in an audio queue and the video payloads in a video queue; (3) inserting a header to the packetized stream; (4) packetizing the packetized stream into a plurality of transport packets that comprise multiplexed audio packets and video packets, wherein each of the plurality of transport packets comprises a transport header and a payload; (5) calculating a virtual clock based on a transmission bandwidth of the communication network; (6) deriving a time difference between the virtual clock and a system clock with respect to transmission of the plurality of transport packets; and (7) if the time difference indicates that the virtual clock falls behind the system clock by at least a threshold latency, selectively dropping a number of payloads until the virtual clock and the system clock are determined to be synchronous.
This summary contains, by necessity, simplifications, generalizations and omissions of detail; consequently, those skilled in the art will appreciate that the summary is illustrative only and is not intended to be in any way limiting. Other aspects, inventive features, and advantages of the present invention, as defined solely by the claims, will become apparent in the non-limiting detailed description set forth below.
Embodiments of the present invention will be better understood from a reading of the following detailed description, taken in conjunction with the accompanying drawing figures in which like reference characters designate like elements and in which:
Reference will now be made in detail to the preferred embodiments of the present invention, examples of which are illustrated in the accompanying drawings. While the invention will be described in conjunction with the preferred embodiments, it will be understood that they are not intended to limit the invention to these embodiments. On the contrary, the invention is intended to cover alternatives, modifications and equivalents, which may be included within the spirit and scope of the invention as defined by the appended claims. Furthermore, in the following detailed description of embodiments of the present invention, numerous specific details are set forth in order to provide a thorough understanding of the present invention. However, it will be recognized by one of ordinary skill in the art that the present invention may be practiced without these specific details. In other instances, well-known methods, procedures, components, and circuits have not been described in detail so as not to unnecessarily obscure aspects of the embodiments of the present invention. The drawings showing embodiments of the invention are semi-diagrammatic and not to scale and, particularly, some of the dimensions are for the clarity of presentation and are shown exaggerated in the drawing Figures. Similarly, although the views in the drawings for the ease of description generally show similar orientations, this depiction in the Figures is arbitrary for the most part. Generally, the invention can be operated in any orientation.
Notation and Nomenclature
It should be borne in mind, however, that all of these and similar terms are to be associated with the appropriate physical quantities and are merely convenient labels applied to these quantities. Unless specifically stated otherwise as apparent from the following discussions, it is appreciated that throughout the present invention, discussions utilizing terms such as “processing” or “accessing” or “executing” or “storing” or “rendering” or the like, refer to the action and processes of a computer system, or similar electronic computing device, that manipulates and transforms data represented as physical (electronic) quantities within the computer system's registers and memories and other computer readable media into other data similarly represented as physical quantities within the computer system memories or registers or other such information storage, transmission or display devices. When a component appears in several embodiments, the use of the same reference numeral signifies that the component is the same component as illustrated in the original embodiment.
According to the illustrated embodiment, the implementation of the TsMUX 100 may comprise two execution threads, a receive thread and a scheduler thread. The TsMUX can request buffers assigned by an associated client program. Upon receiving the PES, the receive thread can create PES headers and insert them in the packets, enqueue the audio and video payloads to their respective queues, and notify the scheduler thread of the PES arrival or payload submission event.
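The following is a minimal sketch of the receive-thread behavior described above, assuming a simple queue-and-notify design; the PesPacket type, the queue names, and the onPesArrival() function are hypothetical and do not reflect the actual TsMUX interfaces.

```cpp
// Illustrative sketch: enqueue an arriving PES payload and signal the scheduler.
#include <condition_variable>
#include <cstdint>
#include <deque>
#include <mutex>
#include <vector>

struct PesPacket {
    bool isAudio;                     // audio vs. video payload
    std::vector<uint8_t> pesHeader;   // header created by the receive thread
    std::vector<uint8_t> payload;     // captured elementary-stream data
};

std::deque<PesPacket> audioQueue, videoQueue;
std::mutex queueMutex;
std::condition_variable payloadSubmitted;   // wakes the scheduler thread

void onPesArrival(PesPacket pes) {
    {
        std::lock_guard<std::mutex> lock(queueMutex);
        (pes.isAudio ? audioQueue : videoQueue).push_back(std::move(pes));
    }
    payloadSubmitted.notify_one();    // signal the PES arrival / payload submission event
}
```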
Based on the information contained in a PES header, the scheduler thread 112 can select queues 113 and 114 to read data, interleave the audio and video payloads, and insert transport headers to generate a transport stream 122. The transport stream 122 is buffered at the output buffer 121 and then sent to the client program, from which the data may be transmitted via a WiFi network to another device, e.g., a sink device, for demultiplexing and playback. In some embodiments, the scheduler thread may remain in a sleep mode until a payload is received by the receive thread.
In some embodiments, the TsMUX 100 maintains no internal copy of the audio/video data. In some embodiments, the receive thread may be associated with an internal circular buffer that operates to decouple the TsMUX 100 from the capture module 101 to avoid backpressure thereon.
At the time of copy to the output buffer 220, the PES header 211 and the payload 212 are preserved. In addition, a transport header, e.g., of 4 or 8 bytes, that includes information regarding the type of payload and clock information, is created and inserted between the PES header 211 and the payload 212. Thus, the transport packet 220 includes a PES header 211, a transport stream header 221 and the video or audio payload 212. In this manner, the packets containing audio and video payloads are interleaved and copied to the output buffer until transmitted. The content of the headers and the sizes of the headers and the payloads are generated in accordance with a specification defined by a standard, such as the MPEG-2 Transport Stream.
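A minimal sketch of assembling one transport packet in the order described above is given below; the 8-byte transport header layout and its field contents are assumptions made for illustration and do not reproduce the MPEG-2 Transport Stream bit syntax.

```cpp
// Assemble PES header, transport header, and payload into one transport packet.
#include <cstdint>
#include <vector>

std::vector<uint8_t> buildTransportPacket(const std::vector<uint8_t>& pesHeader,
                                          const std::vector<uint8_t>& payload,
                                          bool isVideo, uint64_t clockInfo) {
    std::vector<uint8_t> packet;
    packet.insert(packet.end(), pesHeader.begin(), pesHeader.end());   // PES header 211

    packet.push_back(isVideo ? 0x01 : 0x02);                           // payload type flag
    for (int shift = 48; shift >= 0; shift -= 8)                       // 56 bits of clock info
        packet.push_back(static_cast<uint8_t>((clockInfo >> shift) & 0xFF));

    packet.insert(packet.end(), payload.begin(), payload.end());       // payload 212
    return packet;   // copied to the output buffer until transmitted
}
```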
At 402, the TsMUX obtains output buffers to store the transport stream packets after they are interleaved and before they are transmitted via the network. The TsMUX can request a client program to allocate buffers and maintain an internal list of the available buffers. Once the list is depleted, the TsMUX may request additional buffers. When the TsMUX gets an output buffer from the list, the TsMUX can determine the number of packets that can fit in the buffer at 403.
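The buffer bookkeeping of steps 402 and 403 could look roughly as follows; OutputBuffer, the free list, and requestBuffersFromClient() are hypothetical stand-ins for the client-program allocation interface.

```cpp
// Sketch of output-buffer bookkeeping: acquire a buffer, replenish when depleted,
// and compute how many fixed-size transport packets fit in it.
#include <cstddef>
#include <cstdint>
#include <vector>

struct OutputBuffer {
    std::vector<uint8_t> data;
    size_t capacity = 0;
};

// Assumed client-program callback that appends newly allocated buffers to the list.
void requestBuffersFromClient(std::vector<OutputBuffer>& pool);

std::vector<OutputBuffer> freeList;               // internal list of available buffers

OutputBuffer acquireOutputBuffer() {
    if (freeList.empty())
        requestBuffersFromClient(freeList);       // list depleted: request more (step 402)
    OutputBuffer buf = std::move(freeList.back());
    freeList.pop_back();
    return buf;
}

// Step 403: how many fixed-size transport packets fit into the acquired buffer.
size_t packetsThatFit(const OutputBuffer& buf, size_t packetSize = 188) {
    return buf.capacity / packetSize;
}
```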
At 404, the TsMUX gets clock information including the system PCR clock (PCR_SYS_CLK) and a virtual PCR clock (PCR_DERIVED). The system PCR clock represents the actual time at which a packet is sent to the network. The virtual PCR clock represents a scheduled time of sending a packet and is calculated based on the number of packets that have been generated and the bandwidth or transmission rate of the network. Each time a packet is sent to the network, the virtual reference clock can be incremented or updated.
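One plausible way to compute PCR_DERIVED from the number of generated packets and the transmission rate is sketched below; the 188-byte packet size and the 90 kHz clock base are standard MPEG-2 TS conventions, but the exact formula is an assumption made for illustration.

```cpp
// Derive the scheduled transmission time (in PCR ticks) of the next packet from
// the number of packets generated so far and the available network bandwidth.
#include <cstdint>

int64_t derivedPcrTicks(uint64_t packetsGenerated,
                        uint64_t bandwidthBitsPerSecond,
                        uint64_t packetSizeBytes = 188) {
    // Scheduled time (seconds) = bits generated / bandwidth, expressed at 90 kHz.
    const uint64_t bitsGenerated = packetsGenerated * packetSizeBytes * 8ULL;
    return static_cast<int64_t>(bitsGenerated * 90000ULL / bandwidthBitsPerSecond);
}
```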
At 405, the TsMUX determines if the system PCR and virtual PCR are contemporaneous with each other. If not, the TsMUX selectively and intelligently drops or throttles packets at 406 in order to keep the multiplexing and the transmission synchronized, as will be discussed in greater detail below.
If the TsMUX is on schedule as indicated by the comparison of the system PCR and the virtual PCR at 405, the TsMUX checks whether PCR_DERIVED is equal to pcr_send_time, which indicates the time to send a packet including the virtual PCR information. If so, such a packet is sent, the pcr_send_time is updated, and the virtual PCR is incremented at 408.
If PCR_DERIVED is not equal to pcr_send_time at 407, the TsMUX checks whether there is any data in a queue for interleaving at 409. If both queues are empty, the TsMUX can insert, at 410, a physical NULL packet which contains no data but still consumes transmission bandwidth, in order to maintain a constant bit rate. Alternatively, a virtual NULL packet can be inserted which consumes no bandwidth, such that another program sharing the transmission channel can use the spare bandwidth. In either scenario, the virtual PCR is incremented accordingly at 413.
However, if it is determined that either or both queues contain data, the TsMUX can generate the transport stream packets, copy them to the output buffer at 412, and increment PCR_DERIVED accordingly at 413. In some embodiments, the scheduler thread may wait for the output buffer to fill up before dispatching the packets to the socket. However, in some other embodiments, the packets can be dispatched as soon as a payload is consumed and multiplexed, without waiting for the output buffer to be full. In this manner, video jitter may be advantageously reduced.
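Steps 405 through 413 can be summarized as a single scheduler iteration, sketched below; the helper functions and variables are hypothetical placeholders for the operations the text describes, declared here only so the sketch reads as a complete unit.

```cpp
// Condensed sketch of one scheduler iteration covering steps 405-413.
#include <cstdint>

extern int64_t PCR_SYS_CLK, PCR_DERIVED, pcr_send_time;
extern int64_t pcrInterval, ticksPerPacket;
extern bool useConstantBitRate;

bool clocksContemporaneous(int64_t sys, int64_t derived);   // step 405
void dropOrThrottle();                                       // step 406
void sendPcrPacket();                                        // step 408
bool queuesEmpty();                                          // step 409
void sendNullPacket(bool physical);                          // steps 410/411
void muxNextPayload();                                       // step 412

void schedulerIteration() {
    if (!clocksContemporaneous(PCR_SYS_CLK, PCR_DERIVED)) {
        dropOrThrottle();                        // 406: resynchronize before anything else
        return;
    }
    if (PCR_DERIVED == pcr_send_time) {
        sendPcrPacket();                         // 408: packet carrying the virtual PCR
        pcr_send_time += pcrInterval;
    } else if (queuesEmpty()) {
        sendNullPacket(useConstantBitRate);      // 410: physical NULL keeps a constant bit rate;
                                                 // otherwise a virtual NULL spares the bandwidth
    } else {
        muxNextPayload();                        // 412: interleave and copy to the output buffer
    }
    PCR_DERIVED += ticksPerPacket;               // 413: advance the virtual PCR
}
```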
On the other hand, if the virtual PCR is ahead of the system PCR by at least a predetermined threshold, as determined at 506, the TsMUX may suspend supplying the transport stream to the socket. In this embodiment, the scheduler thread may yield to other threads, e.g., those of other programs, and enter a sleep mode or a power saving mode at 507 until the next packet submission event or for a predetermined interval. The foregoing steps 501-507 are repeated until the system PCR and the virtual PCR are synchronized.
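A sketch of this throttling branch is shown below; the threshold and the sleep interval are illustrative assumptions, since in practice the wake-up would be tied to the next packet submission event.

```cpp
// Throttle by sleeping when the virtual PCR runs ahead of the system PCR.
#include <chrono>
#include <cstdint>
#include <thread>

void throttleIfAhead(int64_t pcrDerived, int64_t pcrSys) {
    constexpr int64_t kAheadThresholdTicks = 900;   // ~10 ms at the 90 kHz PCR base (assumed)
    if (pcrDerived - pcrSys >= kAheadThresholdTicks) {
        // Yield to other threads and sleep until the next submission event or
        // for a predetermined interval; this also reduces power consumption.
        std::this_thread::sleep_for(std::chrono::milliseconds(5));
    }
}
```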
In some embodiments, when an output buffer is sent to a client socket program for transmission and the network is slow, the TsMUX will stall all subsequent operations until the socket has finished accepting the output buffer. Thus, the status of the output buffer can also be used as network feedback.
If the list size is less than 1 as determined at 602, there is enough time to acquire a buffer without causing latency as determined at 603, and there is a buffer available as determined at 604, the TsMUX can call for additional buffers, e.g., from the corresponding client program, and send the packet to the additional output buffer. However, if there is not enough time to acquire a buffer, the TsMUX determines whether the next packet is too important to be dropped, e.g., based on the information contained in a corresponding header. If so, the TsMUX waits until an additional buffer becomes available at 606, and then sends the next packet to the additional buffer at 608. If the packet is eligible for dropping, e.g., it does not contain reference video frame data or a PES header, the packet is dropped at 607.
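The drop-eligibility determination described above could be expressed as follows; the PacketInfo fields are assumptions about what the packet headers expose rather than the actual data structures.

```cpp
// Never drop a payload carrying a PES header or a reference video frame, and
// drop video payloads before any audio payload is considered for dropping.
struct PacketInfo {
    bool isAudio;            // audio vs. video payload
    bool hasPesHeader;       // payload carries a PES header
    bool isReferenceFrame;   // video frame that other frames depend on
};

bool eligibleForDrop(const PacketInfo& p, bool videoCandidatesRemain) {
    if (p.hasPesHeader) return false;                      // PES headers are never dropped
    if (!p.isAudio && p.isReferenceFrame) return false;    // keep reference video frames
    if (p.isAudio && videoCandidatesRemain) return false;  // drop video payloads first
    return true;
}
```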
The transport stream can be buffered at 713 and then sent to the sink device 720 for demultiplexing and playback by way of the WiFi network 730. In some embodiments, the source device and the sink device may both be Miracast-certified devices that can communicate with each other by virtue of Miracast. The source device 700 and the sink device 720 can be smart phones, laptops, smart TVs, video cameras, touch pads, game consoles, and so on. The TsMUX program can be implemented in Fortran, C++, or any other programming language known to those skilled in the art.
Although certain preferred embodiments and methods have been disclosed herein, it will be apparent from the foregoing disclosure to those skilled in the art that variations and modifications of such embodiments and methods may be made without departing from the spirit and scope of the invention. It is intended that the invention shall be limited only to the extent required by the appended claims and the rules and principles of applicable law.