This disclosure relates in general to changing channels in a digital video environment and in particular, by way of example but not limitation, to reducing the video presentation latency when changing from one video stream to another video stream in a digital unicast network.
Television-based entertainment systems are expanding the programming and services that they offer. In addition to television programming content such as that found on broadcast and traditional cable networks, television service providers are adding on-demand video, as well as other interactive services, features, and applications. The existence of these specific services, features, and applications, as well as the continuing increase in the breadth of available general programming content, drives the adoption of digital network technology for television-based entertainment systems.
Digital technology enables satellite and cable operators to increase the number and kinds of services that they offer to subscribers and thus their average revenue per subscriber. Unfortunately, although digital technology offers many advantages to subscribers as compared to traditional analog networks, it also has a number of drawbacks. For example, changing channels in a digital television service takes two to three seconds. This channel changing latency annoys and frustrates users of the digital television service.
This and other drawbacks of digital technology lead to higher rates of subscriber churn, which means that a large percentage of subscribers that try digital television service switch back to traditional analog service within a short time period. Switching subscribers from analog to digital service involves expenditures for network operators that range from broad, general marketing costs down to individual incentives and installation expenses. Consequently, reducing subscriber churn can financially benefit satellite and cable operators.
Accordingly, for television-based entertainment systems, there is a need for schemes and techniques to reduce the churn out of digital service and back to traditional analog service that results from subscribers being dissatisfied with the slow channel changing experienced with digital television service.
Fast channel changing in a digital-television-based entertainment network can be implemented, for example, by electing to tune to a channel at an opportune tuning time. In an exemplary implementation, a method includes: receiving a channel change request that indicates a requested new channel from a client device; preparing a broadcast video data stream of the requested new channel that is offset in time behind a current broadcast time for broadcast video data of the requested new channel; and streaming the broadcast video data stream to the client device responsive to the channel change request.
In another exemplary implementation, a system includes: a storage device that retains broadcast video data for multiple channels; a video data extractor that accesses the retained broadcast video data and retrieves an intra frame of broadcast video data that is in the past for a requested channel of the multiple channels; and a video data distributor that receives the retrieved intra frame of broadcast video data and transmits the retrieved intra frame of broadcast video data. The system may also include a video data booster that accesses the retained broadcast video data and retrieves a broadcast video data stream that follows the retrieved intra frame of broadcast video data, wherein the video data distributor further receives the retrieved broadcast video data stream and transmits the retrieved broadcast video data stream.
Other method, system, and arrangement implementations are described herein.
The same numbers are used throughout the drawings to reference like and/or corresponding aspects, features, and components.
Headend 104 includes at least one data center 108 that records the broadcast video that is received via transmission media 106 or any other media. The recording can be effectuated while the broadcast video is in a compressed data format, for example, in order to facilitate the ongoing storage of such broadcast video over days, weeks, or even indefinitely. The compression format may comport with a Moving Picture Experts Group (MPEG) algorithm, such as MPEG-2, MPEG-4, and so forth. Other compression technologies may alternatively be employed, such as Microsoft Windows® Media, Advanced Simple Profile (ASP), Cinepak, and so forth.
Headend 104 and a hub 114 may communicate across a network 112. Network 112 can be a fiber ring and may operate under a packet-based protocol, such as an Internet protocol (IP), IP over asynchronous transfer mode (ATM), and so forth. Packets can therefore be communicated between headend 104 and hub 114. Hub 114 may include a cable modem termination system (CMTS) 110B for terminating communications from downstream cable modems. If hub 114 (or another un-illustrated hub) does not include CMTS 110B, headend 104 may include a CMTS 110A for terminating the cable modem communications. Although only one hub 114 is illustrated in architecture 100, headend 104 may provide broadcast video to multiple ones of such hubs 114 via network 112. Headend 104 thus distributes broadcast video over network 112 to one or more hubs 114.
Hub 114 distributes the broadcast video over fiber lines 116 to one or more fiber nodes 118A, 118B . . . 118N. Each fiber node 118 outputs one or more coaxial lines 120, and each such coaxial line 120 includes coaxial line drops to multiple subscriber sites 122A, 122B . . . 122N. Subscriber sites 122A, 122B . . . 122N include client devices 124A, 124B . . . 124N, respectively. Subscriber sites 122 may be homes, businesses, and so forth. Each subscriber site 122 may have multiple such client devices 124 that are each directly or indirectly interfacing with one or more of coaxial lines 120. Client devices 124 may be computers, set-top boxes of varying capabilities, hand-held/portable electronic devices, digital televisions, and so forth. Each client device 124 may include an integrated video screen or may be coupled to a video screen. An exemplary implementation of a client device 124 is described below with reference to
Analog portion 206 typically includes some number of 6 MHz analog channels. DV portion 208 also includes some number of 6 MHz channels, but these are dedicated to DV. Each of these 6 MHz channels can carry multiple DV channels in a compressed format, such as eight (8) regular definition video channels. Although analog downstream communications do typically occupy a predominant fraction of downstream portion 204, spectrum 200 is not necessarily illustrated to scale.
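The channel capacity implied by this multiplexing is a simple product. The sketch below is purely illustrative and not part of the described system; the default of eight DV channels per 6 MHz slot is taken from the example above.

```python
def digital_channel_capacity(num_6mhz_slots: int, dv_channels_per_slot: int = 8) -> int:
    """Total compressed DV channels carried by the DV portion, assuming each
    6 MHz slot multiplexes the same number of regular definition channels
    (eight in the example above)."""
    return num_6mhz_slots * dv_channels_per_slot
```

For instance, ten 6 MHz slots dedicated to DV would carry eighty regular definition channels under this assumption.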
On-demand DV portion 210 is dedicated to providing video in a digital format on request. Hence, this resource can be dynamically allocated among multiple client devices 124. High speed data portion 212 includes data that is transmitted to client devices 124, such as data that is forwarded to client devices 124 in response to previous requests by cable modems thereof using upstream portion 202. Such data may include information that originated from the Internet or similar sources. Other distributions/allocations of spectrum 200 may alternatively be employed. Regardless, it should be understood that the term “digital network” may refer to a digital portion of a combination digital and analog network, depending on the spectrum allocation.
In order for a subscriber to have access to the video, features, and other services provided through the digitally-allocated portion of spectrum 200, the subscriber needs to have subscribed to digital services. The subscriber then uses a client device 124 that is capable of interpreting, decoding, and displaying digital video. The digital video usually provides a picture that is superior to that of analog video, and the digital services are often convenient, informative, and otherwise enjoyable. Nevertheless, a large percentage of new digital subscribers churn out of the digital service because of one or more of the drawbacks of digital service. One such drawback is the lag time when changing to a digital channel, whether the change is from an analog channel or from another digital channel.
Specifically, changing television channels on a digital network takes longer than changing channels on a traditional analog network. When a viewer of analog television is “surfing” through analog channels, the viewer can switch to a new analog channel from a previous analog channel (or a previous digital channel) without experiencing a delay that is sufficiently long so as to be annoying or perhaps even detectable to the viewer. In fact, the delay is usually less than 250 milliseconds in an analog network. However, when a viewer of digital television is “surfing” through digital channels, the delay between when a new digital channel is requested and when the video of the new digital channel is displayed is detectable. Furthermore, the delay is sufficiently long so as to be annoying and even frustrating to the viewer.
When digital video data is transmitted as an MPEG stream, for example, the data is communicated as a series of frames. These frames are either intra frames (I frames) or non-intra frames (non-I frames), with non-I frames including predicted frames (P frames) and bi-directional frames (B frames). I frames are individual stand-alone images that may be decoded without reference to other images (either previous or subsequent). P frames are predicted forward in time; in other words, P frames only depend on a previous image. B frames, on the other hand, can be predicted forward and/or reverse in time.
Because only I frames stand alone in the data stream as reference frames, decoding of an MPEG or similarly constituted data stream needs to start at an I frame. I frames in MPEG-2 data streams for a standard definition digital television channel can arrive as infrequently as every two seconds. Assuming that channel change requests arrive on average somewhere in the middle between two I frames, the average delay time due to waiting for an I frame 306 is approximately one (1) second.
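The one-second average cited above follows from the uniform-arrival assumption: a request landing at random between two I frames waits, on average, half the I-frame interval. A minimal sketch of that expectation, assuming uniformly distributed request times:

```python
def average_i_frame_wait(i_frame_interval_s: float) -> float:
    """Expected wait for the next I frame, assuming channel change requests
    arrive uniformly at random between two consecutive I frames."""
    return i_frame_interval_s / 2.0
```

With I frames arriving every two seconds, as in an MPEG-2 standard definition stream, the expected wait is one second; the worst case is the full interval.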
After an I frame is acquired, succeeding (non-I) frames are needed to continue the video presentation. These succeeding frames are applied to a decoding buffer until the decoding buffer is full. More particularly for an MPEG-based decoding process, decoding is not commenced in a broadcast environment until there are a sufficient number of frames in the decoding buffer to ensure that the buffer will not be emptied by the decoding process faster than it is being replenished. Hence, there is an additional delay corresponding to a buffer fill time 308. A typical buffer fill time 308 can last 500-750 milliseconds. These four (4) delay periods 302, 304, 306, and 308 of tuning time 300 can total approximately 2-3 seconds, which is a noticeable and annoyingly lengthy time period when channel “surfing”.
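The 2-3 second figure is the sum of the four sequential delay periods. The values below are hypothetical, chosen only to be consistent with the ranges in the text (an average one-second I-frame wait, a 500-750 millisecond buffer fill):

```python
def conventional_tuning_time_ms(analog_tune_ms: int, channel_overhead_ms: int,
                                i_frame_wait_ms: int, buffer_fill_ms: int) -> int:
    """Sum the four sequential delay periods (302, 304, 306, 308) that make
    up a conventional digital tuning time, all in milliseconds."""
    return analog_tune_ms + channel_overhead_ms + i_frame_wait_ms + buffer_fill_ms
```

With, say, 500 ms of analog tuning, 250 ms of channel overhead, a 1000 ms average I-frame wait, and a 750 ms buffer fill, the total lands at 2.5 seconds, squarely in the annoying range.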
There are also similar delays in television-based entertainment networks that utilize MPEG macroblocks for the I, P, and B units of the video data. In such networks, I macroblocks, P macroblocks, and B macroblocks are analogous to the I frames, P frames, and B frames. The various macroblocks are amalgamated to form images of the video. In fact, in a conventional digital channel changing environment for a cable network, the amalgamation is visible as the I macroblocks for an image are received, decoded, and displayed on a screen. The display of the decoded I macroblocks is reminiscent of a waterfall inasmuch as the decoded I macroblocks appear first toward the top portion of the screen and gradually fill in the remainder of the screen, generally from the top to the bottom.
Network 404 may include one or more other nodes that are upstream of client device 124 in addition to headend 104. For example, hubs 114 (of
Network interfaces 402 and 406 may vary depending on the architecture of network 404. In an exemplary cable network implementation, network interface 402 includes a CMTS (such as CMTS 110A) if there is no other intervening CMTS 110 in network 404, and network interface 406 includes a cable modem. Network interface 402 and/or network interface 406 may also include components for interacting with an IP network, a DSL network, and so forth. These components may include a receiver, a transmitter, a transceiver, etc. that are adapted to interact with the appropriate network.
In an exemplary described implementation, broadcast video distribution from headend 104 to client device 124 is effectuated generally as follows. A point to point IP session is established between headend 104 and client device 124. Broadcast video data 432 for a specific channel is streamed to client device 124 across network 404. Thus, each client device 124 receives its own designated broadcast video data stream according to its corresponding requested channel. As a consequence, each fiber node 118 (of
Using point to point IP sessions eliminates the analog tune time, as well as the channel overhead delay, because there is no analog tuning to a designated frequency channel. Client devices 124 are “tuned” to an IP data source such that the digital “tuning” between channels occurs in the IP domain at headend 104. When changing from a first channel to a second channel, an IP switch (not shown) at headend 104 notes that an IP address of client device 124 is now designated to receive a broadcast video data stream that corresponds to the second channel. Although the analog channel tuning time delay is eliminated, a new delay is introduced as a result of the two-way communication between client device 124 and headend 104. This new delay is described further below.
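The headend-side "tuning" described here amounts to re-pointing a client address at a different stream. The class below is an illustrative sketch of that bookkeeping only; the names and structure are hypothetical and no real IP switch works at this level of abstraction.

```python
class IpChannelSwitch:
    """Sketch of headend-side IP 'tuning': a mapping from a client device's
    IP address to the broadcast video stream it is designated to receive."""

    def __init__(self):
        self.routes = {}  # client IP -> currently designated channel

    def change_channel(self, client_ip: str, channel: int) -> None:
        """Re-designate the client's address to the new channel's stream."""
        self.routes[client_ip] = channel

    def stream_for(self, client_ip: str):
        """Return the channel currently streamed to this client, if any."""
        return self.routes.get(client_ip)
```

Because the change is a table update rather than a frequency retune, the analog tune time and channel overhead delay drop out of the channel change entirely.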
Client device 124 includes a channel change input handler 428, a video decoder 424, and network interface 406. Video decoder 424 includes a buffer 426 for storing received broadcast video data prior to decoding. Channel change input handler 428 receives a channel change input from a user (not shown) that orders a change to a requested channel. The channel change input may be received from a remote control, a keyboard, a personal digital assistant (PDA) or similar, a touch-sensitive screen, integrated keys, and so forth.
Channel change input handler 428 may be realized as executable instructions and/or hardware, software, firmware, or some combination thereof. Channel change input handler 428 constructs a channel change request 430 in packet form that includes an indicator of the requested channel. Channel change request 430 is provided from channel change input handler 428 to network interface 406 of client device 124 for transmission over network 404.
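A channel change request 430 in packet form needs only an indicator of the requested channel plus enough context to identify the requester. The JSON encoding and field names below are purely illustrative assumptions, not the patent's wire format:

```python
import json


def build_channel_change_request(client_id: str, requested_channel: int) -> bytes:
    """Construct a channel change request payload containing an indicator of
    the requested channel, as channel change input handler 428 might."""
    return json.dumps({"client": client_id, "channel": requested_channel}).encode()


def parse_channel_change_request(payload: bytes) -> int:
    """Isolate the requested channel from the request, as channel change
    request handler 422 does at the headend."""
    return json.loads(payload.decode())["channel"]
```

A round trip through these two functions mirrors the client-to-headend handoff: the handler at the headend recovers exactly the channel indicator the client placed in the request.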
Network interface 402 of headend 104 receives channel change request 430 via network 404. Network interface 402 provides channel change request 430 to data center 108. Data center 108, in an exemplary implementation, includes a server architecture 408. Server architecture 408 includes a server storage 408A and a server computer 408B. Server storage 408A includes a storage device (not explicitly shown) that comprises mass memory storage, such as a disk-based storage device. Examples of suitable disk-based storage devices/systems include a redundant array of independent/inexpensive disks (RAID), a Fibre Channel storage device, and so forth.
Server storage 408A stores broadcast video data 410. Broadcast video data is broadcast (e.g., from broadcast center 102 (of
Server computer 408B enables access to the retained broadcast video data 410 of server storage 408A. Server computer 408B includes one or more processors 412 and one or more memories 414. Although not shown, server computer 408B may also include other components such as input/output interfaces; a local disk drive; hardware and/or software for encoding, decoding, and otherwise manipulating video data; and so forth. Memory 414 may include a non-volatile memory such as disk drive(s) or flash memory and/or volatile memory such as random access memory (RAM). In an exemplary described implementation, memory 414 includes electronically-executable instructions.
Specifically, memory 414 includes the following electronically-executable instructions: a channel change request handler 422, a video data extractor 416, a video data booster 420, and a video data distributor 418. The electronically-executable instructions of memory 414 may be executed on processor 412 to effectuate functions as described below. In alternative implementations, one or more of channel change request handler 422, video data extractor 416, video data booster 420, and video data distributor 418 may be stored in a memory such that they are hardware encoded for automatic execution and/or for faster execution by a processor 412.
Network interface 402 forwards channel change request 430 to channel change request handler 422. Channel change request handler 422 isolates the requested channel from channel change request 430 and provides the requested channel to video data extractor 416. Video data extractor 416 is responsible, at least partially, for extracting broadcast video data for the requested channel from broadcast video data 410 of server storage 408A. Video data extractor 416 compensates for channel change requests 430 that arrive in between two intra frames by ensuring that the tuning actually takes place at a more opportune time.
In other words, to avoid having to wait for an I frame, the broadcast video data delivery is backed up in time into the past. The delivery of broadcast video data 410 to client device 124 for the requested channel is offset in time behind a current broadcast time of the requested channel. Consequently, the viewer at client device 124 is presented with broadcast video that is prior to a current broadcast time and thus not current, but video presentation lag times during channel “surfing” are reduced.
In exemplary described implementations, I units 502 may correspond to I frames, I macroblocks, and so forth. Non-I units 504 may correspond to P frames, P macroblocks, B frames, B macroblocks, and so forth. Thus, I units 502 may in general be decoded without reference to other units, regardless of the relevant compression algorithm. In other words, an intra unit may refer to any data segment that may be decoded and subsequently displayed without reference to any other data segment, regardless of whether the data segment is compressed in accordance with MPEG in particular or any other coding algorithm in general. Similarly, a complete or intra frame may refer to any data frame that may be decoded and subsequently displayed without reference to any other data frame and that completely fills a designated image area. Such a designated image area may correspond to a full screen, the entirety of any allocated video display space, a full window, and so forth.
I units 502 and non-I units 504 for each digital video channel are received at headend 104 (of
I units 502 arrive from time to time, such as at approximate intervals or every predetermined period, along data stream 500 at headend 104. In between I units 502, multiple non-I units 504 arrive along data stream 500. Usually, channel change requests 430 arrive at headend 104 from client devices 124 at times in between two I units 502. Waiting for the next I unit 502 to arrive before beginning video decoding adds, on average, one second of delay to the digital channel tuning time for an MPEG-2 stream. As video decoders evolve and become more bandwidth efficient, this average delay time due to waiting for the next I unit 502 can stretch to five (5) or more seconds.
However, instead of waiting for the arrival of the next I unit 502, video data extractor 416 (of
In other words, video data extractor 416 accesses server storage 408A to retrieve an I unit 502 of broadcast video data 410 that is in the past with respect to a current broadcast time. Specifically, video data extractor 416 accesses a portion of broadcast video data 410 that corresponds to the requested channel of channel change request 430. Video data extractor 416 seeks backward in time (e.g., to the left of channel change request 430 along data stream 500) to locate and then retrieve the most recently received I unit 502 for the requested channel. This I unit 502 is provided to video data distributor 418.
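This backward seek can be sketched as a scan over a recorded stream. The representation below, a time-ordered list of (timestamp, kind) pairs, is an illustrative assumption and not the storage format of server storage 408A:

```python
def most_recent_i_unit(stream, request_time):
    """Seek backward along a recorded data stream to locate the most recently
    received intra unit at or before the request time, as video data
    extractor 416 does.

    `stream` is a list of (timestamp, kind) pairs in time order, where kind
    is "I" or "non-I". Returns the timestamp of the located I unit, or None
    if no I unit precedes the request."""
    for timestamp, kind in reversed(stream):
        if kind == "I" and timestamp <= request_time:
            return timestamp
    return None
```

Scanning from the newest unit backward means the first qualifying I unit found is necessarily the most recent one, so the offset into the past is kept as small as the stream allows.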
With respect to possible buffer fill time delays, channel changing delays due to a buffer fill time of buffer 426 can be avoided or reduced with video data booster 420. Video data booster 420 receives the requested channel information from channel change request handler 422 or video data extractor 416. Video data booster 420 also receives from video data extractor 416 the location along data stream 500 of the retrieved (e.g., the most-recently-received) I unit 502. Video data booster 420 retrieves a number of immediately-succeeding non-I units 504 from along data stream 500. The number of non-I units 504 are sufficient in size so as to fill buffer 426 of video decoder 424.
Specifically, video data booster 420 accesses stored broadcast video data 410 of server storage 408A at a location that corresponds to the requested channel. Video data booster 420 is aware of the size of buffer 426 of client device 124. Video data booster 420 may be informed of the size requirements of buffer 426 by an operator of headend 104, by client device 124, and so forth. Client device 124 may inform video data booster 420 of this buffer size when client device 124 is connected to network 404, when a point to point session is established, with channel change request 430, and so forth.
Although the physical or allocated size of an actual buffer for video decoder 424 may be of any size, buffer 426 refers to a minimum level or amount of coded broadcast video data that is necessary or preferred to be in reserve when decoding commences. This minimum level or amount may depend on the particular compression/decompression technology employed, and buffer 426 may correspond to any such minimum size or larger. For an exemplary MPEG-2 coding implementation, buffer 426 corresponds to approximately 500 kilobytes. For an exemplary MPEG-4 coding implementation, buffer 426 corresponds to approximately four (4) megabytes. Video data booster 420 thus retrieves non-I units 504, which follow the most-recently-received I unit 502, to a size that is sufficient to fill buffer 426. This retrieval is performed at a boost rate that exceeds the streaming rate for data stream 500. This buffer 426-sized set of non-I units 504 is provided to video data distributor 418.
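The booster's retrieval target is simply enough succeeding non-I units to cover the buffer 426 minimum. A sketch of that accumulation, assuming per-unit sizes in bytes are known (an assumption made for illustration):

```python
def units_to_fill_buffer(unit_sizes, buffer_size):
    """Count how many succeeding non-I units are needed so that their
    combined size fills the decoder buffer, as video data booster 420
    determines. `unit_sizes` lists unit sizes in bytes, in stream order."""
    total = 0
    for count, size in enumerate(unit_sizes, start=1):
        total += size
        if total >= buffer_size:
            return count
    return len(unit_sizes)  # stream shorter than the buffer requirement
```

Against the exemplary MPEG-2 figure above, a 500 kilobyte buffer would be covered by, say, three 200 kilobyte units; the larger exemplary MPEG-4 buffer would require proportionally more.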
Consequently, video data distributor 418 accepts the most-recently-received I unit 502 from video data extractor 416 and the multiple non-I units 504 from video data booster 420. Video data distributor 418 provides the most-recently-received I unit 502 and the multiple non-I units 504 of broadcast video data to network interface 402. Network interface 402 transmits the broadcast video data over network 404 as video data packet(s) 432. Client device 124 receives the video data packet(s) 432 via network 404 at network interface 406.
Video data distributor 418 orchestrates the broadcast video data distribution in any desired order. For example, the most-recently-received I unit 502 and the multiple non-I units 504 may be collected at video data distributor 418 and jointly transmitted. Also, the most-recently-received I unit 502 may be transmitted under the control of video data distributor 418 while video data booster 420 is retrieving the multiple non-I units 504 from broadcast video data 410. Other distributions may alternatively be employed.
It should be noted that the electronically-executed instructions of channel change request handler 422, video data extractor 416, video data booster 420, and video data distributor 418 may be combined or otherwise alternatively organized. For example, the electronically-executed instructions of video data distributor 418 may be incorporated into video data extractor 416 and/or video data booster 420.
After network interface 406 of client device 124 receives the broadcast video data for the requested channel, network interface 406 forwards the most-recently-received I unit 502 and the multiple non-I units 504 that follow thereafter of the broadcast video data to video decoder 424. Video decoder 424 decodes the most-recently-received I unit 502 in preparation for rendering the video image on a screen. Video decoder 424 places the multiple non-I units 504 into buffer 426 for subsequent decoding and video presentation on the screen.
Buffer 426 may be realized as a dedicated and/or specialized memory, as part of a memory that is shared for other purposes, and so forth. Although not shown, client device 124 may also include other components and/or executable instructions, such as an operating system, analog tuners, non-volatile memory storage, RAM, audio/video outputs, one or more specialized and/or general-purpose processors, and so forth.
Channel request transmission delay 602 reflects the time for channel change request 430 to be formulated in client device 124 and transmitted to headend 104 across network 404. Video data retrieval delay 604 reflects the time that elapses while server computer 408B retrieves the most-recently-received I unit 502. I unit transmission delay 606 reflects the time for the most-recently-received I unit 502 to be transmitted from headend 104 to client device 124. These three delays 602, 604, and 606 occupy approximately 20, 100, and 100 milliseconds, respectively. There are therefore approximately 220 milliseconds total that elapse between the channel change input from a viewer and the presentation of an initial image.
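The 220 millisecond figure is the straightforward sum of the three approximate delays just given; the sketch below merely restates that arithmetic with the text's values:

```python
request_tx_ms = 20    # channel request transmission delay 602
retrieval_ms = 100    # video data retrieval delay 604
i_unit_tx_ms = 100    # I unit transmission delay 606

# Elapsed time from the viewer's channel change input to the initial image.
time_to_initial_image_ms = request_tx_ms + retrieval_ms + i_unit_tx_ms
```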
Fast tuning time 600 also includes buffer boost fill delay 608. Buffer boost fill delay 608 reflects the time required (i) to retrieve from broadcast video data 410 the multiple non-I units 504 that are of a size that is sufficient to fill buffer 426 and (ii) to transmit them from headend 104 to client device 124. The impact of either or both of these parts of buffer boost fill delay 608 may be reduced when they are overlapped in time with one or both of delays 604 and 606.
Buffer boost fill delay 608 is approximately 30 milliseconds, but this time period may vary significantly depending on the available bandwidth. Hence, the entire fast tuning time 600 is approximately 250 milliseconds. Furthermore, even a short buffer boost fill delay 608 may be essentially eliminated if the burst of broadcast video data, after the initial I unit 502, is transmitted at a rate of data delivery that is guaranteed to exceed the playout speed of the video.
In other words, the multiple non-I units 504 may be relatively quickly sent to client device 124 by transmitting them at a rate that exceeds a typical broadcast video data stream consumption rate at client device 124 in order to reduce or eliminate buffer boost fill delay 608. This relatively quick transmission is enabled by “borrowing” transient excess capacity from other subscribers on the same or a different digital channel.
These four streams 702, 704, 706, and 708 of broadcast video data are each allocated a maximum bandwidth 710. The current bandwidth utilization 714 per stream varies depending on the associated video content at any given time. The difference between maximum (allocated) bandwidth 710 and current bandwidth utilization 714 is transient excess bandwidth 712. This transient excess bandwidth 712, which is otherwise underutilized by a given subscriber at any given moment, may be shared by other subscribers when tuning to a new digital channel. In short, transient excess bandwidth 712 is used to fill buffer 426 with the multiple non-I units 504 that follow the most-recently-received I unit 502 at a rate that exceeds the decoding of the video data units by video decoder 424. Hence, presentation of the broadcast video may commence immediately following, or practically immediately following, receipt of the initial I unit 502, thus potentially eliminating buffer boost fill delay 608.
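The borrowed capacity is just the per-stream gap between allocation and utilization, pooled across streams. A hedged sketch of that accounting, with hypothetical bit rates:

```python
def transient_excess_bandwidth(allocated_bps, utilization_bps):
    """Per-stream transient excess bandwidth 712: the difference between the
    maximum (allocated) bandwidth 710 and the current bandwidth utilization
    714 for each stream."""
    return [alloc - used for alloc, used in zip(allocated_bps, utilization_bps)]


def total_boost_capacity(allocated_bps, utilization_bps):
    """Aggregate excess available to boost another subscriber's buffer fill."""
    return sum(transient_excess_bandwidth(allocated_bps, utilization_bps))
```

For example, four streams each allocated 4 Mbps but currently using 3, 2.5, 3.5, and 3 Mbps leave 4 Mbps of pooled transient excess, which a tuning subscriber can momentarily borrow to fill buffer 426 faster than video decoder 424 drains it.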
Fast digital channel changing may be described in the general context of electronically-executable instructions. Generally, electronically-executable instructions include routines, programs, objects, components, data structures, and the like that perform particular functions or implement particular abstract data types. Fast digital channel changing, as described in certain implementations herein, may be practiced in distributed computing environments where functions are performed by remotely-linked processing devices that are connected through a communications network. Especially in a distributed computing environment, electronically-executable instructions may be located in separate storage media and executed by different processors.
The methods and processes of
At block 802, a channel change input is detected at a time=T at the client device. For example, the client device 124 may receive a command from a subscriber via a remote control to change from a first channel to a second requested channel at a time=T. In response, the client device 124 prepares a channel change request 430. The channel change request 430 includes an indicator of the requested channel and may be in packet form. At block 804, the channel change request is sent to the headend from the client device. For example, the client device 124 may transmit the channel change request 430 to the headend 104 over a network 404, optionally through one or more intermediate upstream nodes such as a fiber node 118 or a hub 114.
At block 806, the channel change request is received at the headend from the client device. For example, the channel change request 430 may be received at a network interface 402 of the headend 104 via the network 404. At block 808, video data of the requested channel is accessed. For example, compressed broadcast video data of broadcast video data 410 that corresponds to the requested channel is located and accessed.
At block 810, an intra unit of video data at a time=(T-X) is retrieved. For example, where “X” equals an amount of temporal distance between the time of receiving a channel change input at the client device 124 and the time of receipt of a most recent past intra unit 502 at the headend 104, the intra unit 502 at time=(T-X) is retrieved from the broadcast video data 410 for the requested channel. In situations where the channel change request 430 transmission time from the client device 124 to the headend 104 is neither negligible nor otherwise discounted, the time=T may be considered to be the time at which the channel change request 430 is received at the headend 104. Thus, the temporal distance “X” along the broadcast video data stream of the requested channel in such situations is somewhat greater to account for the additional elapsed time of the channel change request 430 transmission, and the consequential receipt of additional non-intra units 504 at the headend 104.
At block 812, video data units that follow the located and/or retrieved intra unit are retrieved at a boost rate. For example, a sufficient number of non-intra broadcast video data units 504 are retrieved from the broadcast video data 410 of server storage 408A by server computer 408B at a rate that exceeds the expected decoding and playout speed thereof at the client device 124. These two retrievals of blocks 810 and 812 may be effectively completed as a single retrieval.
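The single combined retrieval of blocks 810 and 812 can be sketched end to end: seek back to the most recent intra unit, then gather succeeding units until the buffer requirement is covered. As before, the (time, kind, size) tuple representation is an illustrative assumption, not the stored format:

```python
def fast_channel_change_retrieval(stream, request_time, buffer_size):
    """Combined retrieval for blocks 810 and 812: locate the most recent
    intra unit at or before the request time, then gather the succeeding
    units whose combined size fills the decoder buffer.

    `stream` is a time-ordered list of (time, kind, size) tuples, where kind
    is "I" or "non-I" and size is in bytes. Returns the units to transmit,
    or None if no intra unit precedes the request."""
    start = None
    for idx in range(len(stream) - 1, -1, -1):
        t, kind, _ = stream[idx]
        if kind == "I" and t <= request_time:
            start = idx
            break
    if start is None:
        return None
    sent = [stream[start]]        # the intra unit itself (block 810)
    filled = 0
    for unit in stream[start + 1:]:  # succeeding units at the boost rate (block 812)
        sent.append(unit)
        filled += unit[2]
        if filled >= buffer_size:
            break
    return sent
```

The returned list begins with the intra unit, so the client can start decoding immediately while the remainder lands in its buffer.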
At block 814, the retrieved intra unit of video data is sent to the client device from the headend. For example, the intra unit 502 of broadcast video data is transmitted from the headend 104 over the network 404 to the client device 124, as part of video data 432. At block 816, the following units of video data are sent to the client device from the headend. For example, the non-intra units 504 of broadcast video data that temporally follow the intra unit 502 in the stream 500 for the requested channel are transmitted from the headend 104 to the client device 124 across the network 404, as part of the video data 432. Although the intra unit 502 of video data is decoded and displayed first at the client device 124, the units 502 and 504 of video data may be transmitted to the client device 124 in any suitable order or organizational grouping.
At block 818, the client device receives and displays the intra unit of video data. For example, the client device 124 may receive the intra unit 502 of broadcast video data as part of the video data 432 via the network 404 at a network interface 406. The network interface 406 provides the intra unit 502 of broadcast video data to a video decoder 424 so that the decoding and subsequent display thereof may begin. At block 820, the client device receives and displays the following units of video data. For example, the client device 124 may receive the non-intra units 504 of broadcast video data that follow the intra unit 502 as part of the video data 432 via the network 404 at the network interface 406. The network interface 406 provides the following non-intra units 504 of broadcast video data to a buffer 426 of the video decoder 424 so that the decoding and subsequent display thereof may begin with reference to the intra unit 502 of broadcast video data.
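The client-side flow of blocks 818 and 820 can be sketched as a decoder that displays the intra unit as soon as it arrives and drains buffered non-intra units in sequence once that reference is available. The class below is a hypothetical sketch, assuming units are represented as dicts carrying "seq" and "kind" fields; it models ordering and buffering only, not actual decoding.

```python
class ClientDecoderSketch:
    """Sketch of the client-side flow: the intra unit is decoded and
    displayed immediately on arrival; non-intra units are held in a
    buffer and released in sequence with reference to the intra unit."""

    def __init__(self):
        self.have_reference = False
        self.next_seq = None
        self.buffer = []     # non-intra units awaiting a reference
        self.displayed = []  # sequence numbers in display order

    def receive(self, unit):
        if unit["kind"] == "intra":
            # The intra unit is self-contained and displays first.
            self.have_reference = True
            self.displayed.append(unit["seq"])
            self.next_seq = unit["seq"] + 1
        else:
            self.buffer.append(unit)
        self._drain()

    def _drain(self):
        if not self.have_reference:
            return  # non-intra units cannot decode without a reference
        self.buffer.sort(key=lambda u: u["seq"])
        while self.buffer and self.buffer[0]["seq"] == self.next_seq:
            self.displayed.append(self.buffer.pop(0)["seq"])
            self.next_seq += 1
```

Note that display order is recovered even if units arrive out of order, consistent with the units being transmitted in any suitable grouping.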
Although systems and methods have been described in language specific to structural and functional features and/or methods, it is to be understood that the invention defined in the appended claims is not necessarily limited to the specific features or methods described. Rather, the specific features and methods are disclosed as exemplary forms of implementing the claimed invention.
Number | Date | Country | |
---|---|---|---|
20040034863 A1 | Feb 2004 | US |