The present invention relates generally to null packet replacement in a broadcast data stream, and more particularly to a method of self-managed queues for null packet replacement that efficiently utilizes and maximizes available bandwidth.
Currently, in satellite digital video delivery systems, a user or subscriber installs a small parabolic reflector and special electronics at the premises. These systems use the direct broadcast satellite “DBS” spectrum to deliver digital video signals to a user. In these systems, all of the available programming content is transmitted directly to all users from specialized satellites in, for example, a geosynchronous earth orbit. Geosynchronous orbit refers to an orbit in which a satellite orbiting the earth remains in a fixed position relative to a point on the earth. A receiver unit located at the user premises decodes the data stream in order to extract the desired programming.
It is important in digital broadcasting to control the rate at which digital data packets are transmitted. In the absence of real content in a broadcast stream, it is generally practiced to insert a null packet in order to maintain a periodic data rate. It is generally known to calculate an insertion timing and then insert a null packet into the coded data thereby maintaining a desired data rate.
However, a significant disadvantage to this method is that the null packet, which contains useless data, takes up valuable bandwidth. In DBS, the availability of bandwidth is limited and with increased users and content, it is important to efficiently utilize all the bandwidth that is available. The insertion of null packets diminishes the effective bandwidth of the broadcast stream.
Another disadvantage of the null packet insertion method is that the rate and duration of null packet insertion are non-deterministic. This means that a receiver receiving the data may not have the buffer capacity to handle the amount of data queued. The result is a highly undesirable interruption in service to the subscriber.
There is a need for a method of maximizing the broadcast stream bandwidth without incurring buffer overflow.
The present invention is a method for maximizing bandwidth usage through the use of self-managed queues that have independent programmable policies that interface with a global scheduler. Each queue manages its current state depending on the buffering capability and the class of data queued. An algorithm is used to maintain maximum bandwidth usage. In this respect, the local state maintained by each queue is available to the global scheduler.
It is an object of the present invention to maximize a broadcast stream bandwidth. It is another object of the present invention to ensure buffer overflow is avoided. It is still another object of the present invention to remove null packets and insert background data to make more efficient use of available bandwidth.
It is a further object of the present invention to provide a method of self-managed queues that have independent programmable policies that interface with a global scheduler. It is still a further object of the present invention to use an algorithm to interface the self-managed queues with the global scheduler, thereby maintaining maximum bandwidth usage.
Other objects and advantages of the present invention will become apparent upon reading the following detailed description and appended claims, and upon reference to the accompanying drawings.
For a more complete understanding of this invention, reference should now be had to the embodiments illustrated in greater detail in the accompanying drawings and described below by way of examples of the invention. In the drawings:
The present invention is a system and method for maximizing and maintaining the bandwidth usage of a broadcast stream. Referring to
A data stream 22, containing null packets, arrives at the global scheduler 12. The global scheduler 12 communicates with the queues 14 to replace the null packets with background data. The background data is typically not related to video, such as conditional access messages, or data that does not have significant temporal bounds.
Each queue 14 is self-managed. The queues manage their current state depending on a buffering capability 18A, 18B at each client A, B for a receiver 20 and a class of data that is queued. It should be noted that one receiver and two clients are shown for example purposes only. It is possible to have a group of receivers and any number of clients. The local state maintained at each queue 14 is available to the global scheduler 12 to assist in maintaining maximum throughput of substantive broadcast data. The receiver 20 has a fixed amount of memory for storage.
The queue 14 has a policy for sending and receiving data that is typically unique to each queue. In addition, the policy for each queue is separate and distinct from the policy the global scheduler uses in servicing the queues. For example, a particular queue may have a policy ensuring that a receiving entity's buffer will not overflow on reception. Suppose a buffer has a maximum size of 50 bytes and is served once per second. The queue may then have a policy of sending data at the maximum bandwidth, up to 50 bytes, to that particular receiver, and then "sleeping" until the next period. It is possible that each queue would have a different policy.
However, the only information the global scheduler 12 is aware of is whether or not the queue is "ready-to-run". A queue 14 having data would initially be deemed "ready-to-run". When the global scheduler 12 removes an amount of data from a queue 14 that is representative of the maximum buffering 18 of the client's receiver 20, the queue declares itself "not-ready-to-run". In such a case, the global scheduler 12 would not consider this particular queue when making its scheduling decisions. Once a timer expires, and the queue 14 has data, the queue once again announces its "ready-to-run" status to the global scheduler 12.
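The self-managed queue behavior described above can be sketched as follows. This is a minimal illustration, not an implementation from the specification: the class and method names, the 50-byte burst limit, and the one-second period are taken from the buffer example above, and the queue exposes only its "ready-to-run" state to the outside.

```python
import time
from collections import deque

class SelfManagedQueue:
    """Illustrative self-managed queue: its private policy caps each service
    burst at the client's buffer size and then "sleeps" for one period."""

    def __init__(self, max_burst_bytes=50, period_s=1.0):
        self.max_burst_bytes = max_burst_bytes  # client buffer capacity
        self.period_s = period_s                # buffer drain interval
        self._packets = deque()
        self._sleep_until = 0.0                 # timer enforcing the period

    def enqueue(self, packet: bytes):
        self._packets.append(packet)

    def ready_to_run(self, now=None) -> bool:
        # The only state exposed to the global scheduler: data is queued
        # and the sleep timer has expired.
        now = time.monotonic() if now is None else now
        return bool(self._packets) and now >= self._sleep_until

    def dequeue_burst(self, now=None) -> list:
        # Hand out at most max_burst_bytes, then become "not-ready-to-run"
        # by arming the sleep timer until the next service period.
        now = time.monotonic() if now is None else now
        burst, sent = [], 0
        while self._packets and sent + len(self._packets[0]) <= self.max_burst_bytes:
            pkt = self._packets.popleft()
            sent += len(pkt)
            burst.append(pkt)
        self._sleep_until = now + self.period_s
        return burst
```

Note that the burst limit and period live entirely inside the queue; the scheduler never sees them, only the boolean status.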
According to the system of the present invention, the global scheduler 12 selects data from the queues 14 to insert into the data stream in place of null packets. The queues 14 managed by the global scheduler have rules that are private to each queue Q1, Q2, . . . QN. These rules may contain limits such as a maximum data burst rate, a minimum inter-packet time, or a time deadline; these are just a few examples of possible rules. The rules in each queue's policy are determined by the client receiver or the processing function running on the receiver. It is the responsibility of each queue to ensure that the receiver does not become overwhelmed by the production of more data than the receiver can accept.
The queue 14 presents itself as "ready-to-run" or "not-ready-to-run" to the global scheduler 12. The global scheduler has its own set of rules for prioritizing the queues 14. The global scheduler 12 knows the average data rate for each queue 14 and has a priority assigned to each queue 14. Rules for the global scheduler policy 16 are limited to the order of service for the queues. The order may be priority based, it may be round-robin based, or it may be any one of many policies too numerous to mention herein, but known to those skilled in the art.
The present invention is advantageous in that the order of service may be changed at the global scheduler 12 without affecting the independent queues 14. For example, if the rules for the global scheduler 12 are strictly priority based, then the scheduler can service Q1 until either no data remains or the queue declares itself "not-ready-to-run" because of an internal rule enforcement. The global scheduler 12 has no idea what the internal rules for Q1 may be. The queue may be "not-ready-to-run" for any number of reasons, none of which matter to the global scheduler. The global scheduler only needs to know that a queue is "not-ready-to-run" and is therefore not available as a source of data.
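The strict-priority example above can be sketched in a few lines. The names here are illustrative (the specification does not define an API): the scheduler scans Q1..QN in priority order and draws from the first queue reporting "ready-to-run", with no visibility into each queue's internal rules.

```python
class ReadyQueue:
    """Minimal stand-in for a self-managed queue: the scheduler sees only
    its ready flag and a dequeue method (names are hypothetical)."""
    def __init__(self, name, packets):
        self.name, self.packets = name, list(packets)

    def ready_to_run(self):
        # Internal rules collapse to a single boolean for the scheduler.
        return bool(self.packets)

    def dequeue(self):
        return self.packets.pop(0)

def strict_priority_select(queues):
    """Order-of-service rule only: take from the first "ready-to-run"
    queue in priority order (Q1 highest)."""
    for q in queues:
        if q.ready_to_run():
            return q.name, q.dequeue()
    return None, None  # nothing ready: the null packet stays in the stream
```

Swapping this function for a round-robin variant changes only the scheduler's policy; the queues themselves are untouched, which is the partitioning the paragraph above describes.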
Referring now to
The status of the data also includes a classification of the data. Classification of the data allows the queue to communicate to the scheduler how to better determine the most efficient use of the queue schedule. For example, in the case of isochronous data, a queue can be programmed to silently overwrite previously queued data. Isochronous data follows a delivery model in which the data is only significant for a predetermined period of time. For example, a stock trader depends on real time data to act on a trading policy. Old data is not only useless, but may in fact be detrimental.
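The silent-overwrite behavior for isochronous data can be sketched as below. This is an illustrative fragment (class and field names are not from the specification): a newly enqueued sample replaces the stale one without any notification, so only current data is ever delivered.

```python
class IsochronousQueue:
    """Illustrative overwrite policy for isochronous data: only the most
    recent sample is significant, so a new enqueue silently replaces any
    previously queued data."""
    def __init__(self):
        self._latest = None

    def enqueue(self, sample):
        self._latest = sample  # stale sample is discarded, never delivered

    def dequeue(self):
        sample, self._latest = self._latest, None
        return sample
```

In the stock-quote example above, the trader would always receive the newest price and never a superseded one.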
In the case where data has lost temporal significance, the queue can be programmed to raise an exception to the scheduler. The scheduler then takes appropriate action, or transfers the exception to another entity with higher control. For example, an exception may be raised so that data which has become "stale" will not be delivered to the client receiver. The higher controlling entity, such as a software program, would be responsible for delivery decisions. In the alternative, a human controller may be responsible for monitoring the broadcast data.
Referring still to
The scheduler makes scheduling decisions based on several factors. These include, but are not limited to, the status of the individual queues, the classification of data in the individual queues, and traffic patterns. A greedy algorithm incorporating these, and other, factors is used to interface the scheduler with the plurality of queues. Because the queues are self-managed, it is not necessary for the global scheduler to consider client receiver overflow. This is handled locally by the queues.
The classification of the data within the queue allows the scheduler to determine how to handle the queue. In this regard the scheduler can handle both critical and non-critical isochronous data. All other classes of data can be considered to use a best effort delivery policy. A "best effort policy" is typically applied when a producer of data has no, or a limited, expectation of when the data will arrive. For example, a weather ticker updated on a fifteen minute period will not be significantly affected by the loss of the data associated with one missed sample period.
According to the present invention, the queues maintain localized knowledge necessary to effectively manage data to a client receiver. The scheduler analyzes the current condition presented by each queue and attempts to optimize its use among the queues based upon its own set of rules for servicing the queues. The global scheduler and the queues are partitioned, thereby allowing the internal rules of a queue to be modified without affecting the design of the scheduler. Conversely, the rules for the scheduler may be modified without affecting the internal rules of each independent queue. Functionality and complexity are completely separated by the system and method of the present invention.
Each self-managed queue has its own data management policy. Each data management policy has a set of predefined rules, unique to that particular queue. The queue applies 204 the set of predefined rules to determine whether its status is “ready-to-run” or “not-ready-to-run”, and then announces 206 its status to the global scheduler. When all of the rules in the set of rules have been met, the queue announces a “ready-to-run” status to the global scheduler. If less than all of the rules have been met, the queue announces a “not-ready-to-run” status.
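The rule evaluation in steps 204 and 206 amounts to a conjunction over the queue's private rule set. A minimal sketch follows; the function name and the sample rules are illustrative, not from the specification.

```python
def queue_status(rules, state):
    """Steps 204/206 in sketch form: apply every rule in the queue's
    private policy and announce "ready-to-run" only if all of them pass."""
    return "ready-to-run" if all(rule(state) for rule in rules) else "not-ready-to-run"
```

For example, a policy might require both that data is queued and that the service-period timer has expired; failing either rule yields a "not-ready-to-run" announcement.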
The global scheduler has its own order of service policy that has rules it applies 208 to determine the order of service for each of the queues having a “ready-to-run” status. The global scheduler need only concern itself with the queues having a “ready-to-run” status. In this regard, the global scheduler can obtain data 210 from the queue and use it to replace 212 the null packets in the broadcast data stream.
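The replacement in step 212 can be sketched as a pass over the stream that substitutes queued background data for null packets. The null-packet marker and packet representation here are hypothetical stand-ins; when no background data remains, the null packet is left in place so the stream's data rate is preserved.

```python
NULL = b"\x00"  # hypothetical marker for a null packet in this sketch

def replace_nulls(stream, background_data):
    """Step 212 in sketch form: walk the broadcast stream, substituting
    background data from "ready-to-run" queues for null packets. Real
    content passes through untouched; unfilled null slots remain, keeping
    the periodic data rate intact."""
    data = list(background_data)
    out = []
    for pkt in stream:
        if pkt == NULL and data:
            out.append(data.pop(0))  # background data rides in the null slot
        else:
            out.append(pkt)          # content, or an unfilled null slot
    return out
```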
It is possible for the data to be assigned 214 a classification. This makes it possible for the global scheduler to determine whether or not to use the data from the “ready-to-run” queue. For example, as discussed above, stale isochronous data, or outdated data will not be used, and the scheduler may identify another “ready-to-run” queue for service.
The invention covers all alternatives, modifications, and equivalents, as may be included within the spirit and scope of the appended claims.