The present invention relates in general to data transmission including redundant data transmission.
Data communications protocols such as Transmission Control Protocol have been developed over many years. These protocols transmit data in packets, which can be lost during transmission due to a variety of factors, such as transmission errors and network congestion. To address this problem, it is known to provide data redundancy, in which a packet is transmitted more than once. For example, a space satellite can transmit a low bit rate signal over a lossy forward link to a receiving station. The satellite may not have a return link to communicate whether a packet is received. Therefore, the satellite transmits redundant packets to increase the probability of successful reconstruction of the signal at the receiving station.
Apparatuses and methods for redundant transmission of data are disclosed. One aspect of the disclosed embodiments is a method for decoding an encoded data signal. The computer-implemented method includes accessing in a memory a set of signal elements and receiving an encoded data signal at a computing device. The encoded data signal includes a plurality of signal fragments each having a projection value that has been calculated as a function of at least one signal element of the set of signal elements and at least a portion of the encoded data signal, and a value associating each respective signal fragment with the at least one signal element used to calculate the projection value. The computing device determines a plurality of amplitude values wherein at least one of the plurality of amplitude values is based on the projection value in at least one of the plurality of signal fragments. A decoded signal is determined using the plurality of amplitude values and signal elements associated with the at least one of the plurality of signal fragments.
In another aspect of the disclosed embodiments, a method is taught for decoding an encoded data signal. The computer-implemented method includes accessing in a memory a set of signal elements and receiving, at a computing device, the encoded data signal including a plurality of signal fragments each having a projection value and an index value. A plurality of selected signal elements is identified from the set of signal elements stored in the memory, wherein at least one of the selected signal elements corresponds to the index value in at least one of the plurality of signal fragments. The computing device determines a decoded data signal based on the plurality of selected signal elements and at least one projection value included in the plurality of signal fragments.
In yet another aspect of the disclosed embodiments, an apparatus is taught for decoding an encoded data signal. The apparatus comprises a memory including a set of signal elements and a processor in communication with the memory. The processor is configured to execute instructions to receive the encoded data signal including a plurality of signal fragments each having a projection value and an index value; identify, using at least one index value included in the plurality of signal fragments, a plurality of selected signal elements in the set of signal elements; and determine a decoded data signal based on the plurality of selected signal elements and at least one projection value included in the plurality of signal fragments.
The description herein makes reference to the accompanying drawings, wherein like reference numerals refer to like parts throughout the several views.
In one embodiment, a signal (S) is decomposed during encoding into a plurality of data elements or fragments. In general terms, each fragment includes an index value and a projection value. The index value points to an entry in a dictionary or other data structure of signal elements Ui. This encoding process can take place iteratively so that signal fragments are added one-by-one to a growing list of fragments until the list contains sufficient fragments to permit adequate reconstruction of the original signal S.
The first step in this iterative process can include generating a reconstructed signal (REC) by decoding the fragments already in the list. This step can be skipped for the first iteration if the list is empty. Alternatively, the list can be initialized with an arbitrarily chosen signal element, selected without use of the reconstructed signal REC. A residual signal (R) is determined by subtracting the reconstructed signal REC from the original signal S. If the residual signal R is at or below a threshold (which can be zero), encoding processing can be completed. Otherwise, a new fragment is added to the list of fragments.
In one illustrative embodiment, the process of adding a new fragment begins by searching the dictionary for the signal element Ui that maximizes the scalar product <R, Ui>, where R is the residual signal. The new fragment can include the value of i (the location in the dictionary where the selected signal element Ui resides) and a projection value, which can be equal to the scalar product <S, Ui>. Next, the new fragment is added to the existing list of fragments. A repetition factor is assigned to the new fragment. In one illustrative embodiment, the repetition factor can be a monotonic function of the absolute value of the scalar product <R, Ui>, which provides an indication of the transfer energy associated with (and thus may indicate the relative importance of) the fragment. In the output (Y) of the encoder, the fragment can be duplicated depending on the value of the repetition factor. The dictionary entry corresponding to the newly-added fragment is deleted or flagged so that it will not be used in a subsequently generated fragment.
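By way of illustration only, the iterative fragment-selection loop described above might be sketched in Python as follows. The names used here (`encode`, `reconstruct`, `dot`, and the fragment layout) are assumptions for this sketch, not part of the described embodiments. For simplicity, the reconstruction step treats each projection value as the fragment's amplitude, which is exact only for an orthonormal dictionary; the full decoding procedure, which solves a linear system for the amplitudes, is described below.

```python
# Illustrative sketch only; names such as `encode`, `reconstruct`, and the
# fragment layout ({"index": i, "projection": P}) are assumptions.

def dot(u, v):
    # Scalar product <u, v> of two equal-length sequences.
    return sum(a * b for a, b in zip(u, v))

def reconstruct(fragments, dictionary, length):
    # Simplified reconstruction: treat each projection value as the
    # fragment's amplitude (exact for an orthonormal dictionary).
    rec = [0.0] * length
    for frag in fragments:
        element = dictionary[frag["index"]]
        for n in range(length):
            rec[n] += frag["projection"] * element[n]
    return rec

def encode(signal, dictionary, threshold=1e-6):
    fragments, used = [], set()
    while True:
        rec = reconstruct(fragments, dictionary, len(signal))
        residual = [s - r for s, r in zip(signal, rec)]
        if dot(residual, residual) ** 0.5 <= threshold:
            break  # residual at or below threshold: encoding complete
        # Select the unused dictionary element maximizing the scalar
        # product with the residual (the absolute value is used here,
        # as in common matching-pursuit variants).
        best = max((i for i in dictionary if i not in used),
                   key=lambda i: abs(dot(residual, dictionary[i])))
        fragments.append({"index": best,
                          "projection": dot(signal, dictionary[best])})
        used.add(best)  # flag the entry so it is not selected again
        if len(used) == len(dictionary):
            break
    return fragments
```

With an orthonormal dictionary, the loop terminates once the residual energy falls to the threshold, and the fragments suffice to rebuild the signal exactly.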
The process of decoding is used by the encoder itself (to create the reconstructed signal REC as described above) and also by a decoder in a receiving station that receives the encoded signal. The decoding process can begin by removing duplicate fragments having the same index value. An array of amplitude values A0 . . . AN is determined for the N remaining fragments by performing a transform using the projection values contained in at least a plurality of fragments. The reconstructed signal REC can then be generated by computing the sum over all N fragments of Ak*Ui,k.
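A corresponding decoder sketch, again illustrative only and using assumed names, removes duplicate fragments sharing an index value, solves the linear system relating projection values to amplitudes, and forms the weighted sum of dictionary elements:

```python
# Illustrative decoder sketch; `dedupe`, `solve`, and `decode` are
# assumed names, and the dictionary is assumed nonsingular over the
# selected elements.

def dedupe(fragments):
    # Remove duplicate fragments that share an index value, keeping the
    # first occurrence of each.
    seen, unique = set(), []
    for frag in fragments:
        if frag["index"] not in seen:
            seen.add(frag["index"])
            unique.append(frag)
    return unique

def solve(matrix, rhs):
    # Gauss-Jordan elimination with partial pivoting for the small
    # linear system (assumes the matrix is nonsingular).
    n = len(rhs)
    a = [row[:] + [b] for row, b in zip(matrix, rhs)]
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(a[r][col]))
        a[col], a[pivot] = a[pivot], a[col]
        for r in range(n):
            if r != col and a[r][col]:
                factor = a[r][col] / a[col][col]
                a[r] = [x - factor * y for x, y in zip(a[r], a[col])]
    return [a[k][n] / a[k][k] for k in range(n)]

def decode(fragments, dictionary, length):
    frags = dedupe(fragments)
    elems = [dictionary[f["index"]] for f in frags]
    dot = lambda u, v: sum(x * y for x, y in zip(u, v))
    # Solve sum over j of A_j * <U_j, U_k> = P_k for each k.
    gram = [[dot(uj, uk) for uj in elems] for uk in elems]
    amps = solve(gram, [f["projection"] for f in frags])
    # REC is the sum over all fragments of A_k * U_k.
    rec = [0.0] * length
    for amp, elem in zip(amps, elems):
        for n in range(length):
            rec[n] += amp * elem[n]
    return rec
```

Because the amplitudes are recovered by solving the system rather than read directly from the fragments, the decoder works even when the dictionary elements are not orthogonal.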
Communications network 20 can be implemented in a variety of configurations.
Owing to power constraints, weather conditions or other circumstances, there may be no return link to permit transmission of data from receiving station 24 to transmission station 22. As a result, transmission station 22 can send data to receiving station 24 without acknowledgment of the reception of packets. In other embodiments, a return link can be provided so that transmission station 22 and receiving station 24 can have full or partial bi-directional communication.
Forward link 26 can be lossy, owing to poor signal quality, transmission error, network congestion, signal obstruction or other causes. If forward link 26 is lossy, packets transmitted between transmission station 22 and receiving station 24 can be lost, resulting in signal degradation. To ensure successful transmission, transmission station 22 can transmit redundant packets of data along forward link 26. This consumes bandwidth, so it can be beneficial to optimize the redundant transmission to maximize the fidelity of the reconstructed signal at receiving station 24.
It will be understood that the term packets as used in this specification is used in its broadest sense and includes datagrams, segments, blocks, cells and/or frames depending on the transmission protocol that is used. The embodiments as described herein can be used with a range of protocols and the invention is not intended to be limited to any one particular protocol.
Exemplary structures for the fragments, fragment list and dictionary are described below. Dictionary 40 and fragment list 42 can be stored in any suitable memory (such as RAM or disk storage) and can both reside on the same physical storage area or can reside on different storage areas of the same or different types.
Encoder 32 includes a decoder stage 46 in a return path 48. Decoder stage 46 generates a reconstruction signal REC of input signal S. Reconstruction signal REC is synthesized from the fragment list 42 using the dictionary 40. An example of the operation of decoder stage 46 is described below.
Input signal S can be data of any type, but in some applications can be audio or video.
Dictionary 40 can be optimized based on the particular type or category of signal used as input. In some cases, for example, dictionary 40 can comprise synthesized samples such as time-translated sinusoidal audio waves or a Fourier basis. Alternatively, dictionary 40 can comprise signal elements that are random excerpts from actual signals. The entries in dictionary 40 can be combined and trimmed with genetic algorithms, for example. The dictionary can be updated over time based on empirical results to improve performance of encoder 32. Dictionaries 40 and 64 can be implemented in the form of any suitable data structure, such as a relational database table, list, metadata repository, associative array or any other structure that can store signal elements in a manner that permits the elements to be looked up, referenced or otherwise accessed, such as, for example, by using an index value. The term “dictionary” refers to all such structures. In the exemplary embodiments, the index values stored in fragment list 42 are intended to match unique index values of a dictionary, such as the values of index field 52 of dictionary 40. Other implementations for looking up signal elements (such as hashing of index values) can be used as well, and the term “index” refers to all such implementations by which a value can be used to locate a signal element in a dictionary.
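As one illustrative possibility (the helper names here are assumptions, not part of the described embodiments), a dictionary can be represented as a mapping from index values to normalized signal elements:

```python
# Illustrative only: a dictionary keyed by index values, holding
# normalized signal elements.
import math

def normalize(element):
    # Scale the element to unit norm.
    norm = math.sqrt(sum(x * x for x in element))
    return [x / norm for x in element]

def build_dictionary(elements):
    # Index values 0..N-1 map to normalized signal elements; any other
    # lookup scheme (hashing, database keys) could serve the same role.
    return {i: normalize(e) for i, e in enumerate(elements)}
```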
Beginning with block 82, a search is made of dictionary 40 to find that one of the records 50 whose signal element Ui maximizes the scalar product <R, Ui> with the residual signal R.
Any suitable technique can be used to compute the scalar products referred to above. For example, the scalar product of two vectors u={ui}i=1 . . . N and v={vi}i=1 . . . N can be expressed as:

<u,v>=u1v1+u2v2+ . . . +uNvN (Equation 1)
Having located the desired dictionary record, control moves to block 84, where the value of projection PN+1 is determined for use in the new fragment (N+1). The value PN+1 can be determined as a function of the original input signal S and the value Ui,N+1 of the newly selected dictionary record. In this case, for example, the value PN+1 can be calculated as the scalar product of the input signal S and the newly selected Ui:
PN+1=<S,Ui,N+1> (Equation 2)
Control then moves to block 86, where a new fragment is created and appended to the existing list of N fragments as new fragment {iN+1, PN+1}. As explained above, each fragment 68 contains an index value (i) stored in index value field 70 and a projection value (PN+1) stored in the projection value field 72. The value of the index field 70 in fragment 68 points to that one of the records 50 of dictionary 40 containing the desired value of Ui. The value PN+1 as computed at block 84 is stored in the projection value field 72 of the new fragment {iN+1, PN+1}. Thus, the new fragment contains a pointer (i) to the desired record of dictionary 40 and a corresponding projection value PN+1. To achieve further lossy compression, the value of PN+1 can be quantized by mapping it to a finite set of integer values. This quantization, if performed, can be part of the encoding loop (as opposed to post-processing) to provide better results.
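The quantization mentioned above could be sketched, for example, as a uniform quantizer mapping each projection value to an integer level; the step size used here is an arbitrary assumption for illustration:

```python
# Illustrative uniform quantization of a projection value to a finite
# set of integer levels; `step` is an assumed parameter, not from the
# specification.
def quantize(projection, step=0.25):
    # Map the projection to the nearest integer level.
    return round(projection / step)

def dequantize(level, step=0.25):
    # Recover an approximate projection from the integer level.
    return level * step
```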
Control then moves to block 88, where the repetition factor is determined for the new fragment record. In this case, the repetition factor can be an integer, such as between 1 and 7, which determines how many duplicate copies of the fragment {iN+1, PN+1} will be transmitted as part of output signal Y.
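One way such a monotonic mapping could look, purely as an illustration (the scaling constant and function names are assumptions), is:

```python
# Illustrative repetition factor: a monotonic function of the absolute
# value of <R, Ui>, clamped to the 1..7 range mentioned in the text.
# The `scale` constant is an assumption for this sketch.
def repetition_factor(residual_projection, scale=4.0):
    return max(1, min(7, 1 + int(abs(residual_projection) * scale)))

def duplicate(fragment, factor):
    # The fragment appears `factor` times in the output signal Y.
    return [fragment] * factor
```

Fragments carrying more energy thus receive more duplicates, increasing the chance that the most important fragments survive a lossy link.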
Control then moves to block 90, where the selected one of records 50 of dictionary 40 (that is, the record corresponding to the index (i) and desired value Ui) is marked so that it will not be selected again by the encoder for use in creating a fragment. Thus, each entry of dictionary 40 is used only once to generate a fragment 68, although that fragment may be duplicated in the output signal depending on the repetition factor for that fragment.
It will be understood that with each iteration of the process described above, a new fragment is appended to fragment list 42.
Also, despite redundancy, some fragments in fragment list 42 that are included in output signal Y may be lost during transmission on forward link 26 and therefore such lost fragments will not be included in input fragment list 62. Thus, in an exemplary case, if there are no duplicate copies of fragments in output signal Y and no transmission losses, fragment list 62 can be identical to fragment list 42 after encoding is complete. It should be noted that transmission of fragment list 42 (as included in output signal Y) can be performed on a frame-by-frame basis so that the entire video or audio stream need not be encoded before fragments are transmitted.
Control next moves to a block 96, where the amplitude Ak for each kth fragment is determined based upon the value Pk contained in the kth fragment's projection field 72. One exemplary technique for computing the amplitudes is to solve the following linear system for the unknown coefficients Aj:

A1<Ui,1,Ui,k>+A2<Ui,2,Ui,k>+ . . . +AN<Ui,N,Ui,k>=Pk, for k=1 . . . N (Equation 3)
where the index k refers to the kth entry in deduplicated fragment list 62. Other techniques can be used to find the amplitude.
Control next moves to a block 98, where the reconstructed signal REC is determined using the values of amplitude Ak. One exemplary technique for computing REC is to compute the sum of the dictionary values Ui,k weighted by the corresponding amplitudes in accordance with the following equation:

REC=Σk Ak Ui,k (Equation 4)

where k ranges from 1 to N, the total number of fragments; Ak is determined by solving the linear system described in Equation 3 above; and Ui,k is the dictionary value at that one of dictionary records 50 to which the kth fragment's index (i) points.
Control next moves to block 100, where processing is completed.
In a physical sense, the amplitude Ak associated with each fragment 68 is a measure of the amplitude of that fragment's normalized dictionary value Ui,k. The projection Pk is a transform of that amplitude value. By transmitting the projection values Pk, more robust communications may be provided than if, for example, the amplitudes Ak themselves were transmitted. This is because the loss of an amplitude Ak value can result in a more severe degradation of the reconstructed signal REC than the loss of a projection value Pk.
The functions of encoder 32 and decoder 60 can be implemented as an application computer program or other software that can be executed by a computing device such as processing unit 28. Alternatively, the logic of encoder 32 and decoder 60 can be implemented in hardware, such as in firmware or on an ASIC or other specialized chip, or in a combination of hardware and software. All or a portion of embodiments of the present invention can take the form of a computer program product accessible from, for example, a computer-usable or computer-readable medium. A computer-usable or computer-readable medium can be any device that can, for example, tangibly contain, store, communicate, or transport the program for use by or in connection with any processor. The medium can be, for example, an electronic, magnetic, optical, electromagnetic, or semiconductor device. Other suitable mediums are also available.
The above-described embodiments have been described in order to allow easy understanding of the present invention and do not limit the present invention. On the contrary, the invention is intended to cover various modifications and equivalent arrangements included within the scope of the appended claims, which scope is to be accorded the broadest interpretation so as to encompass all such modifications and equivalent structure as is permitted under the law.
This application is a continuation of U.S. patent application Ser. No. 13/233,640, filed Sep. 15, 2011, which in turn is a non-provisional application claiming the benefit of U.S. Provisional Patent Application Ser. No. 61/383,526, filed Sep. 16, 2010, the disclosures of which are hereby incorporated by reference in their entirety.