The present invention relates generally to a method and apparatus for streaming programming content over a broadband network and more particularly to a method and apparatus for streaming programming content to a client device so that there is a minimal delay between the time the client device requests the programming content and the time the programming content can be displayed or otherwise rendered.
A television may access programming content through a variety of transmission technologies such as cable, satellite, or over the air, in the form of analog or digital signals. Such programming may be delivered in accordance with a number of media delivery models, including broadcast, multicast and narrowcast models. In addition to the aforementioned technologies, the Internet is emerging as a television content transmission medium. Television that receives content through an Internet network connection via the Internet Protocol (IP) may be generically referred to as IPTV. The Internet network may be the public Internet, a private network operating in accordance with the Internet Protocol, or a combination thereof. IPTV has become a common denominator for systems in which television and/or video signals are distributed to subscribers over a broadband connection using the Internet Protocol. In general, IPTV systems utilize a digital broadcast signal that is sent by way of a broadband connection and a set top box (“STB”) that is programmed with software that can handle subscriber requests to access media sources via a television connected to the STB. A decoder in the STB handles the task of decoding received IP video signals and converting them to standard television signals for display on the television. Where adequate bandwidth exists, IPTV can offer a richer suite of services than cable television or standard over-the-air distribution.
In traditional cable television or over-the-air distribution a user can quickly change channels, resulting in a virtually instantaneous transition from one program to another. As such, the user does not typically perceive a delay in the presentation of a new program upon tuning to it. However, this simple manner of operation does not apply to the delivery of IPTV. In this environment, the client module typically must store a prescribed amount of media information in a buffer before it begins to play the media information to the user. It takes a certain amount of time to fill this buffer when the user first connects to a stream of media information. Further, digital media information is commonly expressed as a series of key frames (e.g., I frames) and difference frames (e.g., B and P frames). A client module must wait for a key frame before it begins to present the media information. As a result of these factors, there will be a noticeable lag in the presentation of programs as the user switches from one channel to the next. This lag may last as long as several seconds, which is an unacceptably long delay in comparison to traditional cable or over-the-air distribution, which typically requires less than one frame time of about 33 msec to switch analog channels and less than about half a second for digital broadcast channels.
In accordance with one example of the invention, a method is provided that is performed by a client device such as a set top box when a viewer requests a program by initiating a channel change from a program guide or entering a channel through the user interface. In this example the client device receives the user request and, in response, transmits the request to the streaming server in the headend, which causes the streaming server to create a unicast catch up stream that commences with a key frame. The streaming server calculates the end point of the catch up stream and continues to send the catch up stream at a rate faster than real time. The client device receives the catch up stream and begins buffering it. While the catch up stream is being buffered, the client device begins decoding and presenting the content. The client device receives the end of stream marker and, in response, sends a request to join the multicast stream. Once the client device starts receiving the multicast stream, the client device discards any remaining images or pictures in the catch up stream that precede the synchronization time. The client device also begins to buffer the multicast stream as it continues to play the buffered catch up stream. When it reaches the end of the catch up stream, the client device begins to play out the buffered multicast stream. Because the catch up and multicast streams share a common time base and GOP structure, this transition is seamless to the viewer other than an increase in picture quality.
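By way of illustration only, the following sketch models the hand-off between the catch up stream and the multicast stream described above. The function name, the integer timestamps, and the synchronization rule shown are simplifying assumptions for the example, not part of the actual signaling.

```python
# Self-contained toy simulation of the channel change sequence described
# above. Frames are modeled as integer timestamps; every name here is an
# illustrative stand-in, not the signaling protocol itself.

def simulate_channel_change(catch_up, multicast, sync_time):
    """catch_up: frame timestamps delivered in the unicast catch up stream.
    multicast: frame timestamps received after the multicast join.
    sync_time: the point at which the two streams coincide."""
    played = []

    # Play the buffered catch up stream up to the synchronization time;
    # anything at or beyond it is superseded by the multicast stream.
    for ts in catch_up:
        if ts >= sync_time:
            break
        played.append(("catch-up", ts))

    # Buffered multicast frames take over from the synchronization time on;
    # frames that overlap the catch up stream are discarded.
    for ts in multicast:
        if ts < sync_time:
            continue
        played.append(("multicast", ts))
    return played

# Catch up stream covers timestamps 0-6; multicast reception began at 4;
# the streams coincide at timestamp 5.
print(simulate_channel_change(range(7), range(4, 10), sync_time=5))
```

Running the example presents timestamps 0 through 4 from the catch up stream and 5 through 9 from the multicast stream, with no gap at the switchover.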
In accordance with another example of the invention, a headend is provided for a broadband network. The headend includes a scalable encoder for receiving programming content and generating a scalably encoded programming stream therefrom. The scalably encoded programming stream includes a base layer and an enhancement layer. The headend also includes a streaming server for receiving the scalably encoded programming stream. The streaming server, responsive to a user request for the programming content, is configured to output, for transmission over the broadband network, a catch up bit stream at the bit rate at which the scalably encoded programming stream is encoded. The catch up stream includes the base layer but not the enhancement layer.
In accordance with yet another example of the invention, a set top box is provided. The set top box includes a user interface for requesting a selected program for receipt over a broadband network. The set top box also includes a front-end for receiving, at a target bit rate over the broadband network, an initial portion and a remaining portion of the selected program. The initial portion is arranged as a unicast stream that includes a reduced bit rate representation of the requested program. The reduced bit rate representation is encoded at a reduced bit rate relative to a normal representation of the requested programming that is encoded for transmission over the broadband network at the target bit rate. The set top box also includes a buffer for buffering at least the unicast stream, a decoder for decoding the unicast stream received from the buffer at the target bit rate, and a processor operatively associated with the user interface, the front-end, the buffer and the decoder.
Broadband access network 210 may employ any suitable network-level protocols to provide communication among the various networked devices. While the IP protocol suite is used in the particular implementations described herein, other standard and/or proprietary communication protocols are suitable substitutes. For example, X.25, ARP, RIP, UPnP or other protocols may be appropriate in particular installations.
Broadband access network 210 includes all routers, switches, long haul and metropolitan transport, and access systems necessary for transporting the video streams and the associated management and license data. Thus, network 210 supports transport of video-on-IP unicast and multicast content, and could be IP router and switch based, where IP multicast replication is accomplished by core and edge routers.
For large amounts of data to be distributed to a large number of subscribers over a packet switched network, IP (or other network-level) multicasting is more efficient than normal Internet transmissions because a server can broadcast data to many recipients simultaneously. Unlike traditional Internet traffic that requires separate connections (single-cast addressing) for each source-destination pair, IP multicasting allows many recipients to share the same source. This means that just one set of packets is transmitted for all destinations. To receive a multicast, a subscriber listens to a specific IP address on a multicast-enabled network, much like tuning a television to a specific channel. Multicasting is particularly suitable for distribution of multimedia (video, audio, data) content. When the IP suite is employed, the content is generally transmitted as an MPEG packet stream on a pre-established UDP port and the MPEG packets are encapsulated in UDP/IP datagrams.
Internet Group Management Protocol (IGMP) is defined in RFC 1112 as the Internet standard for IP multicasting. IGMP establishes host memberships in particular multicast groups on a single network and allows a host (e.g., client device 230) to inform its local router, using a multicast join request, that it wants to receive data addressed to a specific multicast group. The edge routers of network 210 support IGMP to enable IGMP switching for IP Multicasts, Broadcast Television and the special IP multicast information. QoS for subscriber services is implemented using IP queuing in the edge router. For example, highest priority may be given to Video on Demand and Broadcast Video while lower priority is given to High Speed Internet. The edge routers may also be enabled with static routing of the most popular broadcast channels to improve channel change times.
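For context, a host typically joins a multicast group through the standard sockets API, which triggers the IGMP membership report described above. A minimal sketch follows; the group address and UDP port are assumptions chosen for the example.

```python
import socket
import struct

MCAST_GROUP = "239.1.1.1"  # illustrative multicast group for one channel
MCAST_PORT = 5000          # assumed UDP port carrying the MPEG packet stream

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind(("", MCAST_PORT))

# Joining the group causes the host to send an IGMP membership report so
# that the local router forwards the group's traffic to this subscriber.
mreq = struct.pack("4s4s", socket.inet_aton(MCAST_GROUP),
                   socket.inet_aton("0.0.0.0"))
sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)

datagram, addr = sock.recvfrom(2048)  # UDP datagrams carrying MPEG packets
```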
When changing channels in an IPTV environment using a multicast join request, there is generally a perceptible delay of up to several seconds between the time at which the channel is changed and the time when the content can be decoded and displayed by the client device. This delay arises from the time required to join the appropriate multicast stream and the time required to receive a key frame such as an I frame. This problem can be overcome by first creating a short-lived transport stream that uses a reduced bit rate representation of the content sent at an accelerated pace that allows the client device to immediately begin decoding and displaying the content. The short-lived transport stream, also referred to herein as a catch up stream, is unicast to the client device. The client device uses the unicast catch up stream until it catches up with the multicast stream at some time after the client device has joined the multicast stream. At that time the catch up stream terminates and the client device begins decoding and displaying the content from the multicast stream. The catch up stream is constrained to use no more bandwidth than the multicast stream and in many cases will use the same bandwidth as the multicast stream.
In some cases both the catch up stream and the multicast stream can be made available to the streaming server 208 in headend 220 in a common transport stream using, for example, scalable coding techniques. That is, scalable coding techniques can be used to create the catch up stream from a single instance of the content. Scalable coding generates multiple layers, for example a base layer and an enhancement layer, for the encoding of video data. The base layer typically has a lower bit rate and lower spatial resolution and quality, while the enhancement layer increases the spatial resolution and quality of the base layer, thus requiring a higher bit rate. The enhancement layer bitstream is only decodable in conjunction with the base layer, i.e. it contains references to the decoded base layer video data which are used to generate the final decoded video data.
Scalable encoding has been accepted for incorporation into established standards, including the ITU-T H.264 standard and its counterpart, ISO/IEC MPEG-4, Part 10, i.e., Advanced Video Coding (AVC). More specifically, the scalable encoding recommendations that are to be incorporated into the standards may be found in ITU-T Rec. H.264|ISO/IEC 14496-10/Amd.3 Scalable video coding 2007/11, currently published as document JVT-X201 of the Joint Video Team (JVT) of ISO/IEC MPEG & ITU-T VCEG (ISO/IEC JTC1/SC29/WG11 and ITU-T SG16 Q.6). In some cases the catch up and the multicast streams may operate in accordance with these standards and recommendations, although other scalable encoding syntaxes and techniques may be used as well.
The target bit rate R_t that is needed to encode the normal multicast stream, which includes the base layer and the enhancement layer, is:

R_t = R_vbase + R_venh + R_audio + R_sys

where R_vbase is the bit rate of the base video layer, R_venh is the bit rate of the enhancement video layer, R_audio is the bit rate of the audio channel(s), and R_sys is any necessary system overhead.
For the catch up stream, the catch up bit rate R_c includes only the video base layer together with the audio and system overhead:

R_c = R_vbase + R_audio + R_sys
The relationship between the target bit rate and the catch up bit rate can be expressed as the Catch Up Ratio (CR), calculated as:

CR = R_c / (R_t - R_c)
If the catch up stream is transmitted at the target bit rate R_t instead of the catch up bit rate R_c, the catch up ratio can be used with the key frame interval I_k (i.e., the time between successive key frames) to determine how long the unicast catch up stream needs to operate before it reaches temporal synchronization with the multicast stream. In this case the duration T_catchup of the catch up stream is:
T_catchup = CR * I_k
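For illustration, the sketch below evaluates these formulas with assumed numbers; the bit rates and key frame interval are examples, not values taken from the description above.

```python
def catch_up_ratio(r_t: float, r_c: float) -> float:
    """CR = R_c / (R_t - R_c); requires R_t > R_c."""
    return r_c / (r_t - r_c)

def catch_up_duration(cr: float, i_k: float) -> float:
    """T_catchup = CR * I_k when the catch up stream is sent at R_t."""
    return cr * i_k

R_T = 4_000_000  # target bit rate R_t in bits/s (assumed)
R_C = 1_500_000  # catch up bit rate R_c: base layer + audio + overhead (assumed)
I_K = 0.5        # key frame interval I_k in seconds (assumed)

cr = catch_up_ratio(R_T, R_C)
print(f"CR = {cr:.2f}")                                   # 0.60
print(f"T_catchup = {catch_up_duration(cr, I_K):.2f} s")  # 0.30 s
```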
The Catch Up Ratio that is selected will require a tradeoff among a number of factors, including: the acceptable video quality of the catch up stream; the time delay between the channel change event and coincidence between the catch up and multicast streams; and the buffering capacity in the set top decoder.
One example of the streaming server 208 is described below.
The streaming server 208 includes a memory array 101, an interconnect device 102, and stream server modules 103a through 103n (103). Memory array 101 is used to store the on-demand content and could be many gigabytes or terabytes in size. Such memory arrays may be built from conventional solid state memory including, but not limited to, dynamic random access memory (DRAM) and synchronous DRAM (SDRAM). The stream server modules 103 retrieve the content from the memory array 101 and generate multiple asynchronous streams of data that can be transmitted to the client devices 230. The interconnect 102 controls the transfer of data between the memory array 101 and the stream server modules 103. The interconnect 102 also establishes priority among the stream server modules 103, determining the order in which the stream server modules receive data from the memory array 101.
The communication process starts with a channel change request being sent from a client device 230 over broadband access network 210. The command for the request arrives over a signal line 114a-114n (114) to a stream server module 103, where the protocol information is decoded. If the request comes in from stream server module 103a, for example, it travels over a bus 117 to a master CPU 107. For local configuration and status updates, the CPU 107 is also connected to a local control interface 106 over signal line 120, which communicates with the system operator over a line 121. Typically this could be a terminal or local computer using a serial connection or network connection.
Control functions, or non-streaming payloads, are handled by the master CPU 107. Program instructions in the master CPU 107 determine the location of the desired content or program material in memory array 101. The memory array 101 is a large scale memory buffer that can store video, audio and other information. In particular, memory array 101 stores scalably encoded content of the type described above. In this manner, the streaming server 208 can provide a variety of content to multiple customer devices simultaneously. Each customer device can receive the same content or different content. The content provided to each customer is transmitted as a unique asynchronous media stream of data that may or may not coincide in time with the unique asynchronous media streams sent to other customer devices.
The amount of memory (in bits) needed in memory array 101 for buffering a portion of a program received by the backplane interface 104 that is sufficient to generate the catch up stream is:

Buffer Size = (CR + 1) * I_k * R_t
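Continuing the assumed numbers from the earlier sketch, the corresponding buffer requirement works out as follows:

```python
CR = 0.6         # catch up ratio from the earlier example
I_K = 0.5        # key frame interval in seconds (assumed)
R_T = 4_000_000  # target bit rate in bits/s (assumed)

buffer_bits = (CR + 1) * I_K * R_T  # Buffer Size = (CR + 1) * I_k * R_t
print(f"Buffer size: {buffer_bits / 8e6:.1f} MB")  # 3.2 Mbit = 0.4 MB
```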
If the requested content is not already resident in the memory array 101, a request to load the program is issued over signal line 118, through a backplane interface 105 and over a signal line 119. An external processor or CPU (not shown) responds to the request by loading the requested program content over a backplane line 116, under the control of backplane interface 104. Backplane interface 104 is connected to the memory array 101 through the interconnect 102. This allows the memory array 101 to be shared by the stream server modules 103, as well as the backplane interface 104. The program content is written from the backplane interface 104, sent over signal line 115, through interconnect 102, over signal line 112, and finally to the memory array 101.
When the first block of program material has been loaded into memory array 101, the streaming output can begin. Data playback is controlled by a selected one or more stream server modules 103. If the stream server module 103a is selected, for example, the stream server module 103a sends read requests over signal line 113a, through the interconnect 102, over a signal line 111 to the memory array 101. The CPU 107 informs the stream server module 103a of the actual location of the program material in the memory array. With this information, the stream server module 103a can immediately begin requesting the program stream from memory array 101.
A block of data is read from the memory array 101, sent over signal line 112, through the interconnect 102, and over signal line 113a to the stream server module 103a. Once the block of data has arrived at the stream server module 103a, the transport protocol stack is generated for this block and the resulting primary media stream is sent to the broadband access network 210 over signal line 114a. This process is repeated for each data block contained in the program source material.
When a scalably encoded program stream is received by the streaming server 208 over the backplane interface 104, the master CPU 107 examines the incoming stream to determine the packet ID (PID) types, the location of key frames, the bit rate and other pertinent information. In particular, the master CPU 107 distinguishes between the PIDs assigned to packets that carry the base layer and the PIDs assigned to packets that carry the enhancement layer. In this way, when one of the stream server modules 103 is generating a catch up stream from the data received from the memory array 101, it can drop the packets having a PID assigned to the enhancement layer.
The master CPU 107 also determines if there are any padding packets (which are assigned the null PID and are used to maintain timing information) in the incoming program stream. The master CPU 107 calculates what proportion of the padding packets will be included in the catch up stream for the purpose of maintaining a constant catch up ratio. The dropped padding packets will generally be those associated with the enhancement layer, although in some cases additional padding packets may be dropped. The padding packets not used in the catch up stream are assigned to a different PID. During normal playout of the multicast stream using both the base and enhancement layers, the stream server modules 103 remap the reassigned padding packets back to the null PID.
During the time period when the catch up stream is being provided to a client device 230 by the streaming server 208, the stream server module 103 examines the PIDs of the packets in the program stream and drops those packets belonging to the video enhancement layer along with the padding packets that have been selected to be dropped. After the appropriate packets are dropped, the stream server module 103 streams the catch up stream to the client device at the normal target bit rate R_t. In addition, the stream server module 103 replaces the existing program map table (PMT) with a new PMT that excludes the packets associated with the video enhancement layer. The PMT is included with the catch up stream and describes the elementary streams that compose the catch up stream by identifying the PIDs for each stream. The stream server module 103 may also include a marker packet in the catch up stream to indicate the end of the catch up stream.
In some cases the marker packet may be an end of stream packet as described in ISO Joint Video Team (JVT) document JVT-Y083, for example.
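A rough sketch of this PID-based filtering is shown below. The byte-level framing follows the standard MPEG transport stream layout; the function name, the PID sets, and the omission of PMT rewriting and marker insertion are simplifications for the example.

```python
TS_PACKET_SIZE = 188  # MPEG transport stream packets are 188 bytes
SYNC_BYTE = 0x47

def pid_of(packet: bytes) -> int:
    """Extract the 13-bit PID from a transport stream packet header."""
    return ((packet[1] & 0x1F) << 8) | packet[2]

def filter_catch_up(ts_data: bytes, enhancement_pids: set,
                    dropped_padding_pids: set) -> bytes:
    """Drop enhancement layer packets and the padding packets selected for
    removal; every other packet passes through to the catch up stream."""
    out = bytearray()
    for i in range(0, len(ts_data), TS_PACKET_SIZE):
        pkt = ts_data[i:i + TS_PACKET_SIZE]
        if len(pkt) < TS_PACKET_SIZE or pkt[0] != SYNC_BYTE:
            continue  # skip truncated or unsynchronized packets
        if pid_of(pkt) in enhancement_pids | dropped_padding_pids:
            continue  # excluded from the catch up stream
        out += pkt
    return bytes(out)
```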
The enhancement-layer bit stream 327 is generated as follows. The raw video 301 is down-sampled, DCT transformed, and quantized as described above. The quantized video is reconstructed by inverse quantization module 313 and inverse discrete cosine transform (IDCT) module 315. Up-sampling is then performed by up-sampling module 317 on the IDCT transformed video. The up-sampled video is subtracted from the raw video 301 in summing unit 319, and a discrete cosine transformation is performed by DCT module 321 on the resulting residual image. The DCT transformed residual image is quantized by quantization module 323 using a quantization parameter that is smaller than the one used for the base layer. The quantized bits are encoded by VLC module 325 to generate the enhancement layer bit stream 327. The base layer bit stream 311 and the enhancement layer bit stream 327 are then combined into a single bit stream that is forwarded to the streaming server 208.
In some cases, for the base-layer encoding, motion estimation may be performed between the down-sampling module 303 and the DCT module 305, and motion compensation may be performed between the IDCT module 315 and the up-sampling module 317. Likewise, for the enhancement-layer encoding, motion estimation may be performed between the summing unit 319 and the DCT module 321.
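The following single-frame sketch illustrates that base/enhancement structure. Motion estimation and VLC entropy coding are omitted, nearest-neighbor resampling stands in for the down/up-sampling filters, and the quantization parameters are assumed values.

```python
import numpy as np
from scipy.fft import dctn, idctn

def quantize(coeffs, qp):
    return np.round(coeffs / qp)

def dequantize(levels, qp):
    return levels * qp

def encode_two_layers(frame: np.ndarray, qp_base: float, qp_enh: float):
    """Toy two-layer encoding of one frame: a coarsely quantized base layer
    plus a finely quantized residual (qp_enh < qp_base, per the text)."""
    # Base layer: 2x down-sample, transform, coarse quantization.
    down = frame[::2, ::2]
    base_levels = quantize(dctn(down, norm="ortho"), qp_base)

    # Reconstruct the base layer the way a decoder would, then up-sample.
    recon = idctn(dequantize(base_levels, qp_base), norm="ortho")
    recon_up = np.repeat(np.repeat(recon, 2, axis=0), 2, axis=1)

    # Enhancement layer: DCT of the residual, finer quantization.
    residual = frame - recon_up
    enh_levels = quantize(dctn(residual, norm="ortho"), qp_enh)
    return base_levels, enh_levels

frame = np.random.default_rng(0).random((32, 32))
base, enh = encode_two_layers(frame, qp_base=0.05, qp_enh=0.01)
```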
The buffer 470 should be sufficiently large to at least buffer the catch up stream. Accordingly, the minimum buffer size is:

Buffer_CD = CR * I_k * R_t
The buffer 470 receives the catch up bit stream from the front end network interface 430 and paces it out to the scalable decoder 480 at a rate based on a time base such as the Program Clock Reference (PCR), which is included with the bit stream. The buffer 470 may also be used to perform PCR-based dejittering so that the received PCRs accurately reflect the original time base of the program. If the buffer 470 is used to dejitter, its capacity needs to be the larger of the catch up buffer size specified above and the dejitter capacity. The processor 450 can be used to recognize an end of stream marker (if employed) and initiate a multicast join request to receive the multicast stream.
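A minimal sketch of PCR-based pacing follows; the packet iterable and decoder callback are hypothetical stand-ins, and the dejittering step itself is not shown.

```python
import time

PCR_HZ = 27_000_000  # the MPEG-2 Program Clock Reference runs at 27 MHz

def pace_out(packets, write_to_decoder):
    """Release buffered (pcr, payload) pairs to the decoder on the time
    base established by the first PCR observed."""
    first_pcr = start = None
    for pcr, payload in packets:
        if first_pcr is None:
            first_pcr, start = pcr, time.monotonic()
        due = start + (pcr - first_pcr) / PCR_HZ  # target release time
        delay = due - time.monotonic()
        if delay > 0:
            time.sleep(delay)
        write_to_decoder(payload)
```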
In the implementations described above the catch up stream and the normal multicast stream are both provided to the streaming server 208 in a single scalably encoded program stream. In other implementations, however, the catch up stream and the normal multicast stream are provided to the streaming server 208 as two separate streams in parallel with one another. While the two streams may be independent of one another, to simplify processing they may be constrained to have identical Group of Pictures (GOP) structures and timestamps to facilitate switching between the catch up and live streams. Alternatively, if their timestamps are not the same, the streaming server 208 could be used to recognize that the streams are related and adjust the timestamps on the catch up stream, assuming that their GOP structures still match. In either case, both streams would be buffered in the memory array 101 of the streaming server 208 with the same buffer requirements described above, but replicated for each copy. During the catch up process, when a client device first requests a program, the catch up stream would be unicast to the client device at the target bit rate of the normal multicast stream in the manner described above. The client device would not require a scalable decoder, but would use its standard decoders to decode both the unicast and the multicast streams, typically in accordance with either MPEG-2 or H.264.
In yet another implementation the catch up stream may be generated from a previously encoded stream that is fed through a rate shaper or “smart transcoder.” A smart transcoder uses the existing GOP structure and motion vectors to perform a rapid bit rate reduction without decoding and re-encoding the content. The resulting catch up stream would have the same GOP structure as the normal multicast stream.
The processes described above, including but not limited to those shown in the accompanying figures, may be performed by the headend and client devices described herein.
A method and apparatus have been described for streaming programming content to a client device so that there is minimal delay between the time the client device requests the programming content and the time the programming content can be displayed or otherwise rendered. This is accomplished by transmitting to the client device a reduced bit rate representation of the requested programming as a unicast stream while the client device awaits receipt of the higher bit rate multicast stream.