Priority service process for serving data units on a network

Information

  • Patent Application
  • Publication Number
    20030067932
  • Date Filed
    December 13, 2001
  • Date Published
    April 10, 2003
Abstract
Serving data units on a network includes queuing data units from a first application in a first buffer, queuing data units from a second application in a second buffer, moving data units from the second buffer to the first buffer following a predetermined delay, and serving data units from the first buffer.
Description


TECHNICAL FIELD

[0002] This invention relates to serving data units on a network.



BACKGROUND

[0003] Asynchronous transfer mode (ATM) has been selected as the International Telegraph and Telephone Consultative Committee (CCITT) standard for switching and multiplexing on Broadband Integrated Services Digital Networks (B-ISDN). B-ISDN supports services with diversified traffic characteristics and Quality of Service (QoS) requirements, such as data transfer, telephone/videophone, high definition television (HDTV), multimedia conferencing, medical diagnosis and real-time control.


[0004] To efficiently utilize network resources through multiplexing, while providing the required QoS to the multiple applications supported, the service given to a specific application should depend on the QoS requirements of that application. QoS is typically described in terms of some measure of the delay or loss that data units of an application suffer over a transmission path from source to destination. Since the universal data unit of an ATM network is the (fixed-size) cell, QoS requirements for ATM are usually described in terms of some metric of cell delay and cell loss.


[0005] One objective of a priority service scheme is to deliver service that is as close as possible to the target QoS specified by the associated cell delay/loss metric. A priority service scheme can be defined in terms of a cell serving (i.e., transmitting) policy specifying: (a) which of the arriving cells are admitted to network buffer(s) and/or (b) which of the admitted cells is served next from those buffer(s). The former type of priority service scheme is typically referred to as a “space-priority” scheme and impacts on the delivered cell loss metric. The latter type is typically referred to as a “time-priority” (or priority-scheduling) scheme and impacts on the delivered cell delay metric.


[0006] In the absence of priority service, the network load may be set at a potentially very low level in order to provide the most stringent QoS to all applications on the network. Considering that QoS requirements can differ substantially for different applications, a non-priority service scheme may result in severe underutilization of the network's resources. For instance, cell loss probability requirements can range from 10^−2 to 10^−10, and end-to-end cell delay requirements for real-time applications can range from below 25 milliseconds (ms) to 1000 ms.



SUMMARY

[0007] In general, in one aspect, the invention is directed to a priority service scheme for serving data units on a network. This aspect includes queuing data units from a first application in a first buffer, queuing data units from a second application in a second buffer, moving data units from the second buffer to the first buffer following a predetermined delay, and serving data units from the first buffer. This aspect of the invention may include one or more of the following features.


[0008] The data units from the first application may have a higher priority for transmission on the network than the data units from the second application. The data units from the first application and the data units from the second application may be served from the first buffer on a first-come-first-served basis. Data units from the first application that exceed a first time delay may be discarded and data units from the second application that exceed a second time delay may be discarded. The second time delay may exceed the predetermined time delay.


[0009] The data units from the second buffer may be moved to an end of the first buffer after data units from the first application. A time to move the data units from the second buffer to the first buffer may be determined. A circular buffer, a pointer and a timer may be used to determine the time to move the data units from the second buffer to the first buffer. Data units may be served from the second buffer when the first buffer is empty. The data units may include Asynchronous Transfer Mode (ATM) cells and the network may be an ATM network.


[0010] This summary has been provided so that the nature of the invention can be understood quickly. A detailed description of illustrative embodiments of the invention is set forth below.







DESCRIPTION OF THE DRAWINGS

[0011]
FIG. 1 is a block diagram of buffers for serving data units.


[0012]
FIG. 2 is a block diagram of part of a controller for determining a timing at which the data units may be moved between the buffers of FIG. 1.


[0013]
FIG. 3 is a graph showing axes used in a performance analysis of a method for serving the data units from the buffers.


[0014]
FIG. 4 is a graph showing a renewal cycle for cells on the axes of FIG. 3.


[0015]
FIG. 5 is a view of a computer on which the method for serving the data units from the buffers may be implemented.







DETAILED DESCRIPTION

[0016] One factor in ATM network design is providing diversified QoS to applications with distinct characteristics. This is of particular importance in real-time applications, such as streaming audio and video. Buffer management schemes can play a role in providing the necessary diversification through an ATM cell admission and service policy. A flexible priority service policy for two applications (classes) with strict—and in general distinct—deadlines and different deadline violation rates is described herein. The flexible service policy described herein utilizes a buffer management scheme to provide the requisite QoS to the different applications.


[0017] Consider two different classes of traffic (from, e.g., two different computing applications) sharing a transmission link between a transmitter (not shown) and a receiver (not shown). The link is slotted and capable of serving one data unit (in the case of ATM, a cell) per time slot. Let H (for high priority) and L (for low priority) denote the two classes of traffic, and let a superscript H or L denote a quantity associated with the corresponding traffic class. A process 10 for serving ATM cells from a device, such as a router or general-purpose computer, to an ATM network, is as follows:


[0018] (1) H-cells (i.e., cells of an application “H”) join the “Head of the Line” (HoL) service class upon arrival. HoL cells are served according to the HoL priority policy; i.e., no service is provided to other cells unless no HoL cell is present. H-cells that experience a delay of more than TH slots are discarded.


[0019] (2) L-cells (i.e., cells of an application “L”) are served according to D-HoL priority. That is, L-cells join the HoL service class (as fresh arrivals) only after they have waited for D (D≧1) time slots. Since the service policy is assumed to be work-conserving, L-cells may be served before they join the HoL class provided that no HoL class cells are present. L-cells that experience a delay of more than TL slots are discarded. It is assumed that TL≧D.


[0020] (3) Within each service class (HoL or waiting-to-join L-cells), cells are served according to the First-Come First Served (FCFS) policy. In the FCFS policy, the first cell to arrive at the buffer is the first cell to be output from the buffer.


[0021] The cell serving policy of process 10 may be implemented by considering time-stamps associated with each cell arrival. However, such an approach may not be realistic for a high-speed, low-management complexity switching system. A simpler implementation of the cell serving policy of process 10 is shown in FIG. 1.


[0022] Referring to FIG. 1, as H-cells arrive, the H-cells are immediately queued (i.e., stored) in H-buffer 12. If the occupancy of H-buffer 12 exceeds a predetermined threshold (in this embodiment, a threshold that corresponds to TH), newly-arriving H-cells are discarded (e.g., cells for real-time video that arrive past their preset “playout” time may be discarded). Since H-buffer 12 is served under the HoL priority policy, the discarded H-cells are precisely the ones whose deadline TH would be violated. Accordingly, losing those cells will have little impact on overall QoS. L-cells are queued in L-buffer 14 as they arrive. In this embodiment, L-buffer 14 is served only when H-buffer 12 is empty. Otherwise, cells in L-buffer 14 that experience a delay D are moved to the end of the queue of H-buffer 12, provided that the occupancy of H-buffer 12 does not exceed TL−D; if it does, these L-cells are discarded. The L-cells are served from the H-buffer in turn. Again, since H-buffer 12 is served under the HoL priority policy, the discarded L-cells are the ones whose deadline TL would be violated. Accordingly, losing those cells will have little impact on overall QoS.
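By way of illustration only, the buffer discipline of this embodiment can be sketched as a per-slot simulation in Python. The class name TwoClassServer, its parameter names, and the ordering of the per-slot events (promotion of delayed L-cells, admission of fresh arrivals, then service of one cell) are assumptions made for this sketch and are not mandated by the description above.

from collections import deque

class TwoClassServer:
    """Illustrative per-slot model of the H-buffer/L-buffer scheme of FIG. 1.
    Event ordering and the simple TH admission threshold are assumptions."""

    def __init__(self, t_h, t_l, d):
        assert t_l >= d >= 1
        self.t_h, self.t_l, self.d = t_h, t_l, d
        self.h_buf = deque()              # cells awaiting HoL service ("H" or "L")
        self.l_buf = deque()              # arrival slots of L-cells still waiting
        self.slot = 0
        self.served = {"H": 0, "L": 0}
        self.lost = {"H": 0, "L": 0}

    def step(self, h_arrivals, l_arrivals):
        """Advance one slot, given the numbers of fresh H- and L-cell arrivals."""
        # (1) L-cells that have waited D slots move to the tail of the H-buffer,
        #     provided the H-buffer occupancy does not exceed TL - D; otherwise
        #     they are discarded (their deadline TL would be violated).
        while self.l_buf and self.slot - self.l_buf[0] >= self.d:
            self.l_buf.popleft()
            if len(self.h_buf) <= self.t_l - self.d:
                self.h_buf.append("L")
            else:
                self.lost["L"] += 1

        # (2) Fresh H-cells are admitted unless the H-buffer occupancy already
        #     corresponds to a waiting time of TH slots.
        for _ in range(h_arrivals):
            if len(self.h_buf) < self.t_h:
                self.h_buf.append("H")
            else:
                self.lost["H"] += 1

        # (3) Fresh L-cells simply join the L-buffer.
        for _ in range(l_arrivals):
            self.l_buf.append(self.slot)

        # (4) One cell is served per slot: the H-buffer first (HoL priority,
        #     FCFS); the L-buffer is served only when the H-buffer is empty.
        if self.h_buf:
            self.served[self.h_buf.popleft()] += 1
        elif self.l_buf:
            self.l_buf.popleft()
            self.served["L"] += 1

        self.slot += 1

Driving step() with per-slot arrival counts (for example, geometrically distributed batches as in Eq. (1) below) and comparing the lost and served counters gives a quick empirical cross-check of the loss probabilities derived analytically later in this description.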


[0023] The arrangement shown in FIG. 1 implements (deadline-violating) cell discarding through a simple buffer threshold policy, avoiding a more complex time-stamping approach. The capacities of H-buffer 12 and L-buffer 14 are equal to max(TH, TL−D) and min(TL, D·Nmax), respectively, where Nmax is the maximum number of L-cell arrivals per slot. The threshold “d” in FIG. 1 is equal to min(TH, TL−D). If min(TH, TL−D)=TH, then any H-cells in excess of d cells in H-buffer 12 are discarded.
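As a purely illustrative check of these dimensioning rules, the snippet below uses the example parameters that accompany FIG. 4 (TH=3, TL=4, D=2) together with an assumed maximum L-cell batch size Nmax=2; the Nmax value is chosen only for the example.

TH, TL, D = 3, 4, 2          # deadlines and switching delay from the FIG. 4 example
N_MAX = 2                    # assumed maximum L-cell batch size per slot (illustrative)

cap_h = max(TH, TL - D)      # H-buffer capacity: max(3, 2) = 3
cap_l = min(TL, D * N_MAX)   # L-buffer capacity: min(4, 4) = 4
d = min(TH, TL - D)          # discard threshold d: min(3, 2) = 2

print(cap_h, cap_l, d)       # -> 3 4 2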


[0024] There are several ways of identifying the time to move L-cells from L-buffer 14 to H-buffer 12. For example, time-stamping may be used (i.e., examining cell time-stamps). Time-stamp-based sorting in every slot presents a level of complexity that may not be tolerable in a high-speed networking environment. For this reason, alternatives to time-stamp-based implementation approaches may be used. For example, a list of cell arrival times and a clock may be used to control cell movement.


[0025] In this embodiment, a circular buffer (registers) that records the number of cell arrivals per slot is used. Referring to FIGS. 1 and 2, the L-buffer controller may be a microprocessor, microcontroller, or the like that includes (or uses) a pool of D registers 16, a timer T 18, and a pointer P 20. The timer counts from 0 to D−1, increasing its content by one (1) at every slot. After D−1, the timer is reset at the next slot and then continues counting. The current content of the timer indicates the register in which the number of L-cell arrivals during the current slot is registered.


[0026] The pool of registers 16 may be viewed as a circular structure and the timer T may be viewed as a rotating pointer pointing to a register to be used at a current time (FIG. 2) (although this is not necessarily its actual structure). The register visited by timer T contains the number of L-cells that have experienced a delay D in L-buffer 14. These cells, which are at the head of L-buffer 14, are moved to H-buffer 12, and the number of new L-cell arrivals over the current slot is registered in this register.


[0027] The timer T identifies the time that L-cells are moved to H-buffer 12. A mechanism may also be used to determine changes in the content of the registers due to service provided to L-cells that are in L-buffer 14. The pointer P is used for this purpose. This pointer P points to the (non-zero-content) register containing the number of L-cells that are the oldest in L-buffer 14. When service is provided to L-buffer 14—i.e., when the H-buffer 12 is empty—the content of the register pointed to by pointer P is decreased by one. The pointer P moves from a current register to a next non-zero-content register if the content of the current register becomes zero. This occurs if the content of the current register is one and service is provided to an L-cell or if the timer T visits this register, and thus the corresponding L-cells are moved to H-buffer 12. If there is currently no L-cell in L-buffer 14, pointer P equals timer T.
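By way of illustration only, the register/timer/pointer bookkeeping of paragraphs [0025] through [0027] can be sketched in Python as follows. The class and method names, and the exact point within a slot at which the timer advances, are assumptions made for the sketch.

class LBufferController:
    """Bookkeeping sketch for the circular register pool, timer T and pointer P
    of FIG. 2.  Names and the slot-boundary conventions are assumptions."""

    def __init__(self, d):
        assert d >= 1
        self.d = d
        self.reg = [0] * d       # reg[k]: number of L-cell arrivals in one slot
        self.timer = 0           # T: rotates through 0 .. D-1, one step per slot
        self.pointer = 0         # P: register holding the oldest waiting L-cells

    def tick(self, new_l_arrivals):
        """Called once per slot.  Returns how many head-of-L-buffer cells are
        due to be moved to the H-buffer (they have now waited D slots)."""
        due = self.reg[self.timer]             # arrivals registered D slots ago
        self.reg[self.timer] = new_l_arrivals  # register the current slot's arrivals
        if self.pointer == self.timer and due:
            self._advance_pointer()            # the oldest cells have just left
        self.timer = (self.timer + 1) % self.d
        if self._empty():
            self.pointer = self.timer          # no L-cell waiting: P equals T
        return due

    def serve_one(self):
        """Called when the H-buffer is empty and one L-cell is served directly;
        decrements the register holding the oldest waiting L-cells."""
        if self.reg[self.pointer] > 0:
            self.reg[self.pointer] -= 1
            if self.reg[self.pointer] == 0:
                self._advance_pointer()

    def _advance_pointer(self):
        # Move P to the next non-zero-content register, oldest first.
        for k in range(1, self.d + 1):
            nxt = (self.pointer + k) % self.d
            if self.reg[nxt] > 0:
                self.pointer = nxt
                return
        self.pointer = self.timer              # nothing left: P equals T

    def _empty(self):
        return all(v == 0 for v in self.reg)

In operation, tick() would be invoked once per slot, with its return value used to move that many cells from the head of L-buffer 14 to the tail of H-buffer 12 (subject to the TL−D occupancy check of paragraph [0022]), and serve_one() would be invoked whenever the H-buffer is empty and an L-cell is transmitted directly from L-buffer 14.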


[0028] The following evaluates the diversified QoS provided to the two classes of traffic, L and H, by the cell serving policy of process 10. Here, QoS is defined in terms of the induced cell loss probability, noting that cell loss and cell deadline violation are identical events under process 10. The diversity in the QoS requirements for the two applications served under process 10 is represented by differences in cell delay deadlines and cell loss probabilities. By controlling the delay D before the L-cells qualify for service under the HoL priority, it is expected that the induced cell loss probability will be affected substantially. The effectiveness of the D parameter is demonstrated through numerical results. The approach can be modified to yield other QoS measures such as the average cell delay and the tail of the cell delay distribution.


[0029]
FIG. 3 shows three axes, which are used in the performance analysis of process 10. The L-axis 22 is used to describe L-cell arrivals at the time they occur. The H-axis 24 is used to describe H-cell arrivals at the time they occur. The system axis 26 is used to mark the current time. In this example, cell arrival and service completions are assumed to occur at the slot boundaries.


[0030] The H-cell and L-cell arrival processes are assumed to be independent and governed by geometrically distributed (per slot) batches with parameters qH and qL, respectively. The probability that the batch sizes NH and NL are equal to n is given by




P^H(N^H = n) = (q^H)^n (1 − q^H),   P^L(N^L = n) = (q^L)^n (1 − q^L),   n = 0, 1, 2, . . .   (1)



[0031] and the mean H-cell (L-cell) arrival rate is given by
λ^H = q^H / (1 − q^H)   (λ^L = q^L / (1 − q^L))   (2)
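As a quick illustrative check of Eqs. (1) and (2), the Python snippet below samples geometric batch sizes and compares the empirical mean with q/(1 − q). The parameter value is arbitrary and is not taken from the description.

import random

def geometric_batch(q, rng=random):
    """Sample a batch size N with P(N = n) = q**n * (1 - q), n = 0, 1, 2, ..."""
    n = 0
    while rng.random() < q:
        n += 1
    return n

q_h = 0.3                                   # illustrative value only
samples = [geometric_batch(q_h) for _ in range(100_000)]
print(sum(samples) / len(samples))          # empirical mean
print(q_h / (1 - q_h))                      # mean rate from Eq. (2), about 0.4286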


[0032] It is noted that the assumptions concerning the arrival processes considered above are not critical, since they can be changed in any number of ways. For instance, two-state Markov arrival processes may be considered and the resulting system may be analyzed by applying the same approach (but requiring increased computations). The subject cell serving policy is also applicable to arrival processes that are independent and identically distributed with arbitrarily distributed batch sizes. In this case, the maximum batch size affects the magnitude of the numerical complexity.


[0033] The following analysis is based on renewal theory. Let n, n∈N, denote the current time, where N denotes the set of natural numbers. At this time, n, the server is ready to begin the service of the next cell. Consider the following definitions
V_n: A random variable describing the current length (in slots) of the unexamined interval on the H-axis. That is, all H-cell arrivals before time n − V_n have been served, while no H-cell arrival after time n − V_n + 1 has been considered for service by time n.
U_n: A random variable such that U_n + V_n describes the current length of the unexamined interval on the L-axis.


[0034] It is shown that {Un, Vn}n∈N is a Markov process. Let {sk}k∈N denote a sequence of time instants (slot boundaries) at which the system is empty; {sk}k∈N is a renewal sequence. Consider the following definitions.
Y_k: A random variable describing the length of the kth renewal cycle; Y will denote the generic random variable (associated with a renewal cycle).
Y_k^H / Y_k^L: A random variable describing the number of H-cells/L-cells transmitted over the kth renewal cycle; Y^H/Y^L will denote the generic random variable.
L_k^H / L_k^L: A random variable describing the number of H-cells/L-cells lost over the kth renewal cycle; L^H/L^L will denote the generic random variable.


[0035] It can be shown that




Y_k = Y_k^H + Y_k^L + 1,   for Y_k ≥ 1   (3)



[0036] Referring to FIG. 4, a renewal cycle is shown. Cells marked by “x” are the ones that are lost (due to violation of the associated deadline). Transmitted cells are shown on the system axis. In this example, TH=3, TL=4, and D=2. The renewal cycle begins at time t0, at which time no cell is present. The first slot of the renewal cycle is always idle (no transmission occurs). At time slot t4, one L-cell is served and the first L-cell is discarded due to the expiration of its deadline (t4−t1=TL). In fact, at t3, the first two L-cells switch to H-buffer 12, but only one is served before its deadline. At t5, the L-cells that arrived at t3 switch to H-buffer (t5−t3=D). The H-cell that arrived at t5 is served before these L-cells. At t9, the L-cell is served since no H-cell is present. At t10, no cell is present and the renewal cycle ends. In this example, Yk=10, YkH=6, YkL=3, LkH=1, and LkL=3.
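These values are consistent with relation (3): Yk = YkH + YkL + 1 = 6 + 3 + 1 = 10.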


[0037] The evolution of the process {Un, Vn} can be derived for the example shown in FIG. 4. Notice that when Un>D, Un points to the oldest unexamined slot that is to be served under the HoL priority. The associated L-cells have switched priority and are the oldest cells in the system. If Un<D, Vn points to the oldest unexamined slot that is to be served under the HoL priority. If Un=D, the oldest unexamined intervals in both axes have the same priority and the selection is made according to a probabilistic rule (the indicator functions I{H} and I{L} defined below). One possible evolution of {Un, Vn} corresponding to FIG. 4 is shown below:


[0038] t1: (0, 1)


[0039] t2: (0, 2)→(1, 1)


[0040] t3: (1, 2)


[0041] t4: (1, 3)→(2, 2)


[0042] t5: (1, 3)→(2, 2)→(1, 2)→(2, 1)


[0043] t6: (2, 2)


[0044] t7: (1, 3)→(2, 2)→(1, 2)


[0045] t8: (1, 3)


[0046] t9: (1, 3)→(2, 2)→(1, 2)→(2, 1)→(1, 1)→(2, 0)→(1, 0)


[0047] t10: (1, 1)→(2, 0)→(1, 0)→(0, 0)


[0048] 4.2. Derivation of System Equations


[0049] The following definitions will be used in the analysis:
Y^H(i, j): A random variable describing the number of H-cells transmitted over the interval between a time slot at which {U_n, V_n} is in state (i, j) and the end of the renewal cycle which contains this slot.
Y^L(i, j): A random variable defined as Y^H(i, j), but for L-cells.
I^H (I^L): An indicator function assuming the value 1 if an H-cell (L-cell) is present at the slot of the H-axis (L-axis) under current examination (as pointed to by V_n or U_n + V_n); it assumes the value 0 otherwise.
Î^H (Î^L): An indicator function defined as Î^H = 1 − I^H (Î^L = 1 − I^L).
[i, j]*: A function which determines the next state of {U_n, V_n}, taking into consideration possible violation of T^L:


[0050]

[i, j]* = (i, j)           if i + j ≤ T^L,
          (T^L − j, j)     otherwise.

m: Denotes the lowest possible value of the random variable V_n; m = 0 if D ≠ 0 and m = −1 if D = 0.

I{H} (I{L}): An indicator function associated with decisions regarding the slot to be examined next when U_n = D. I{H} = 1 (I{L} = 1) if the oldest unexamined slot on the H-axis (L-axis) is considered next, and I{H} = 0 (I{L} = 0) otherwise. This is a design parameter which can impact on the induced cell losses. In Appendix A, the expected values of these functions (μ^H = P{I{H} = 1}, μ^L = P{I{L} = 1}) are derived under the assumption that all cells (from both classes) that arrived over the slots to be examined when U_n = D are equally likely to be selected.

X̄: Denotes E{X}, where E{·} is the expectation operator.










[0051] In the sequel, recursive equations are derived for the calculation of Ȳ^H and Ȳ^L. Then, similar equations are derived for the calculation of L̄^H and L̄^L. As will be shown later, these quantities yield the cell loss probabilities. Finally, a similar approach for the calculation of the average cell delay and the cell delay tail probabilities is outlined at the end of this section.


[0052] It is easy to observe that no cell is transmitted in the first slot, so process {Un, Vn} actually starts from state (0, 1). Thus,




Y^H = Y^H(0, 1),   Y^L = Y^L(0, 1)   (4)



[0053] A careful consideration of the evolution of the recursions presented below shows that when process {Un, Vn} reaches the state (0, 0), the renewal cycle ends. For this reason,




Y^H(0, 0) = 0,   Y^L(0, 0) = 0   (5)



[0054] to terminate the current cycle. Notice also that Un can exceed D+1 only if Vn=TH.


[0055] Case A. T^L ≥ T^H

Case A.1. T^H > j > 0, min(T^L − j, D + 1) ≥ i ≥ m.

1. i < D:
Y^H(i, j) = I^H + Y^H([i + Î^H, j − 2Î^H + 1]*),
Y^L(i, j) = Y^L([i + Î^H, j − 2Î^H + 1]*).   (6)

2. i = D + 1:
Y^H(D + 1, j) = Y^H([i − Î^L, j − Î^L + 1]*),
Y^L(D + 1, j) = I^L + Y^L([i − Î^L, j − Î^L + 1]*).   (7)

3. i = D:
Y^H(D, j) = {I^H + Y^H([i + Î^H, j − 2Î^H + 1]*)} I{H} + Y^H([i − Î^L, j − Î^L + 1]*) I{L},
Y^L(D, j) = Y^L([i + Î^H, j − 2Î^H + 1]*) I{H} + {I^L + Y^L([i − Î^L, j − Î^L + 1]*)} I{L}.   (8)

Case A.2. j = T^H, T^L − T^H ≥ i ≥ m.

1. i < D:
Y^H(i, T^H) = I^H + Y^H([i + 1, T^H − Î^H]*),
Y^L(i, T^H) = Y^L([i + 1, T^H − Î^H]*).   (9)

2. i ≥ D + 1:
Y^H(i, T^H) = Y^H([i − 2Î^L + 1, T^H]*),
Y^L(i, T^H) = I^L + Y^L([i − 2Î^L + 1, T^H]*).   (10)

3. i = D:
Y^H(D, T^H) = {I^H + Y^H([i + 1, T^H − Î^H]*)} I{H} + Y^H([i − 2Î^L + 1, T^H]*) I{L},
Y^L(D, T^H) = Y^L([i + 1, T^H − Î^H]*) I{H} + {I^L + Y^L([i − 2Î^L + 1, T^H]*)} I{L}.   (11)

Case A.3. j = 0, min(T^L, D + 1) ≥ i ≥ 1.

Y^H(i, 0) = Y^H([i − Î^L, 1 − Î^L]*),
Y^L(i, 0) = I^L + Y^L([i − Î^L, 1 − Î^L]*).   (12)


[0056] Case B. TL<TH


[0057] The equations under this case are derived similarly and are presented in Appendix B. By applying the expectation operator to the above equations, the following systems of linear equations are obtained; details are presented in Appendix C:
Ȳ^H(i, j) = a^H(i, j) + Σ_{(i′, j′) ∈ R0} b^H(i, j, i′, j′) Ȳ^H(i′, j′),
Ȳ^L(i, j) = a^L(i, j) + Σ_{(i′, j′) ∈ R0} b^L(i, j, i′, j′) Ȳ^L(i′, j′).   (13)


[0058] where R0 = {(i, j): m ≤ i ≤ T^L, 0 ≤ j ≤ T^H}. It should be noted that these systems of linear equations are extremely sparse: only two to four coefficients are non-zero per equation. Thus, they can be solved efficiently by using an iterative approach. The computation complexity is of the order of D·T^H. For T^H and D < 100, it takes less than a couple of hours to solve these equations on a SUN SPARC20 workstation. From the solution of these equations, Ȳ^H and Ȳ^L are obtained from (see (4))




Ȳ^H = Ȳ^H(0, 1),   Ȳ^L = Ȳ^L(0, 1)   (14)
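The sparse fixed-point structure noted above lends itself to simple successive substitution. The generic Python sketch below is offered for illustration only; it assumes that the constants a(i, j) and the few non-zero coefficients b(i, j, i′, j′) per equation have already been tabulated (for example, as dictionaries keyed by state), which is outside the scope of this excerpt.

def solve_fixed_point(a, b, tol=1e-12, max_iter=100_000):
    """Iteratively solve x(s) = a(s) + sum over s' of b(s, s') * x(s').

    `a` maps each state (i, j) to its constant term; `b` maps each state to a
    small dict {next_state: coefficient} holding only the non-zero coefficients.
    """
    x = {s: 0.0 for s in a}
    for _ in range(max_iter):
        delta = 0.0
        for s in x:
            new = a[s] + sum(c * x[s2] for s2, c in b.get(s, {}).items())
            delta = max(delta, abs(new - x[s]))
            x[s] = new                     # in-place update (Gauss-Seidel style)
        if delta < tol:
            break
    return x

Solving the two systems in (13) in this way yields Ȳ^H(i, j) and Ȳ^L(i, j) for all states in R0, from which (14) gives Ȳ^H and Ȳ^L.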



[0059] The expected value of the number of H-cells (L-cells) lost over a renewal cycle, L̄^H (L̄^L), is derived by following a similar approach. The following quantities need to be defined first.
L^H(i, j) (L^L(i, j)): A random variable describing the number of H-cells (L-cells) discarded over the interval between a time slot at which {U_n, V_n} is in state (i, j) and the end of the renewal cycle which contains this slot.
S{i, j}: An indicator function assuming the value 1 if there is a possibility to discard an L-cell as a result of the service to be provided in the current slot (due to resulting violation of its deadline T^L):


[0060]

S{i, j} = 1   if i + j = T^L,
          0   otherwise.












[0061] The equations for the derivation of L^H and L^L are similar to those for the derivation of Y^H and Y^L and are given below. Notice again that




L^H = L^H(0, 1),   L^L = L^L(0, 1)   (15)



[0062] and




L^H(0, 0) = 0,   L^L(0, 0) = 0   (16)



[0063] Two cases need to be considered: TL≧TH and TL<TH.


[0064] Case A. T^L ≥ T^H

Case A.1. T^H > j > 0, min(T^L − j, D + 1) ≥ i ≥ m.

1. i < D:
L^H(i, j) = L^H([i + Î^H, j − 2Î^H + 1]*),
L^L(i, j) = I^H S{i, j} N^L + L^L([i + Î^H, j − 2Î^H + 1]*).   (17)

2. i = D + 1:
L^H(D + 1, j) = L^H([i − Î^L, j − Î^L + 1]*),
L^L(D + 1, j) = I^L S{D, j} N^L + L^L([i − Î^L, j − Î^L + 1]*).   (18)

3. i = D:
L^H(D, j) = L^H([i + Î^H, j − 2Î^H + 1]*) I{H} + L^H([i − Î^L, j − Î^L + 1]*) I{L},
L^L(D, j) = {I^H S{D, j} N^L + L^L([i + Î^H, j − 2Î^H + 1]*)} I{H} + {I^L S{D, j} N^L + L^L([i − Î^L, j − Î^L + 1]*)} I{L}.   (19)

Case A.2. j = T^H, T^L − T^H ≥ i ≥ m.

1. i < D:
L^H(i, T^H) = I^H N^H + L^H([i + 1, T^H − Î^H]*),
L^L(i, T^H) = I^H S{i, T^H} N^L + L^L([i + 1, T^H − Î^H]*).   (20)

2. i ≥ D + 1:
L^H(i, T^H) = I^L N^H + L^H([i − 2Î^L + 1, T^H]*),
L^L(i, T^H) = I^L S{i, T^H} N^L + L^L([i − 2Î^L + 1, T^H]*).   (21)

3. i = D:
L^H(D, T^H) = {I^H N^H + L^H([i + 1, T^H − Î^H]*)} I{H} + {I^L N^H + L^H([i − 2Î^L + 1, T^H]*)} I{L},
L^L(D, T^H) = {I^H S{D, T^H} N^L + L^L([i + 1, T^H − Î^H]*)} I{H} + {I^L S{D, T^H} N^L + L^L([i − 2Î^L + 1, T^H]*)} I{L}.   (22)

Case A.3. j = 0, min(T^L, D + 1) ≥ i ≥ 1.

L^H(i, 0) = L^H([i − Î^L, 1 − Î^L]*),
L^L(i, 0) = I^L S{i, 0} N^L + L^L([i − Î^L, 1 − Î^L]*).   (23)


[0065] Case B. TL<TH


[0066] The equations under this case are derived similarly (see also Case B in the derivation of Y^H and Y^L) and are not presented due to space considerations.


[0067] By applying the expectation operator to both sides of the equations, the following systems of linear equations are obtained:
L̄^H(i, j) = a′^H(i, j) + Σ_{(i′, j′) ∈ R0} b^H(i, j, i′, j′) L̄^H(i′, j′),
L̄^L(i, j) = a′^L(i, j) + Σ_{(i′, j′) ∈ R0} b^L(i, j, i′, j′) L̄^L(i′, j′).   (24)


[0068] where R0 and the coefficients b^H(i, j, i′, j′) and b^L(i, j, i′, j′) are identical to those in system (13), and the constants a′^H(i, j) and a′^L(i, j) are derived in the same manner as the corresponding constants in system (13). Finally,




L̄^H = L̄^H(0, 1),   L̄^L = L̄^L(0, 1)   (25)



[0069] The cell loss probabilities P_loss^H for H-cells and P_loss^L for L-cells are obtained from the following expressions:
P_loss^H = L̄^H / (Ȳ^H + L̄^H),   P_loss^L = L̄^L / (Ȳ^L + L̄^L).   (26)


[0070] Notice that Ȳ^H + L̄^H is the average number of H-cells over a renewal cycle, which is also given by λ^H Ȳ. Similarly, Ȳ^L + L̄^L is the average number of L-cells over a renewal cycle, which is also given by λ^L Ȳ.


[0071] By invoking renewal theory, the rate of service provided to the H-cell and L-cell streams—denoted by λ_s^H and λ_s^L, respectively—is given by
λ_s^H = Ȳ^H / (Ȳ^H + Ȳ^L + 1),   λ_s^L = Ȳ^L / (Ȳ^H + Ȳ^L + 1).   (27)


[0072] Alternatively, the cell loss probabilities—given by (26)—can be obtained as
P_loss^H = (λ^H − λ_s^H) / λ^H,   P_loss^L = (λ^L − λ_s^L) / λ^L.   (28)


[0073] Notice that computation of P_loss^H and P_loss^L from (27) and (28) does not require computation of L̄^H and L̄^L. It should be noted, however, that Eq. (28) can potentially introduce significant numerical error, especially if the cell loss rates are very low. For this reason, results have been obtained herein by invoking Eq. (26).
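The numerical-robustness point can be illustrated with the following small Python example; the values used are hypothetical and are not computed results from this description.

# Hypothetical per-cycle averages; not results computed in this description.
Y_H, L_H, Y_L = 1.0e6, 2.0, 5.0e5   # mean transmitted H, lost H, transmitted L
Y_bar = Y_H + Y_L + 1               # mean cycle length, from Eq. (3)
lam_H = (Y_H + L_H) / Y_bar         # mean H-cell arrival rate (paragraph [0070])

# Eq. (26): a direct ratio of lost to offered cells; well conditioned.
p_loss_26 = L_H / (Y_H + L_H)

# Eq. (28): a difference of two nearly equal rates; any error in lam_s_H
# (e.g., from the iterative solution of (13)) is amplified when losses are rare.
lam_s_H = Y_H / (Y_H + Y_L + 1)
p_loss_28 = (lam_H - lam_s_H) / lam_H

print(p_loss_26, p_loss_28)         # both approximately 2.0e-06 here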


[0074] As stated earlier, other measures of the QoS can be derived by following the above approach. The calculation of the average delay of the successfully transmitted cells and of the tail of the delay probability distribution is outlined below. Equations similar to those presented for L^H(i, j) and L^L(i, j) can be derived, where the associated quantities of interest—instead of the discarded H-cells in L^H(i, j) and L-cells in L^L(i, j)—are defined below.
C^H(i, j) (C^L(i, j)): A random variable describing the cumulative delay of successfully transmitted H-cells (L-cells) over the interval between a time slot at which {U_n, V_n} is in state (i, j) and the end of the renewal cycle which contains this slot.
B_h^H(i, j) (B_l^L(i, j)): A random variable describing the number of H-cells (L-cells) which have experienced a delay less than or equal to h (l) over the interval between a time slot at which {U_n, V_n} is in state (i, j) and the end of the renewal cycle which contains this slot.


[0075] Then,




C^H = C^H(0, 1),   C^L = C^L(0, 1),   B_h^H = B_h^H(0, 1),   B_l^L = B_l^L(0, 1)   (29)



[0076] The average delays for H-cells and L-cells are given by
D̄^H = C̄^H / Ȳ^H,   D̄^L = C̄^L / Ȳ^L.   (30)


[0077] The tail of the delay probability distribution is given by
P^H(D^H > h) = 1 − B̄_h^H / (Ȳ^H + L̄^H) = 1 − B̄_h^H / (λ^H Ȳ),
P^L(D^L > l) = 1 − B̄_l^L / (Ȳ^L + L̄^L) = 1 − B̄_l^L / (λ^L Ȳ).   (31)


[0078] To derive the quantities in (31), equations similar to those associated with L^H(i, j) or L^L(i, j) can be derived by replacing the first of the two right-hand side terms in those equations—counting discarded cells—by functions that count the total delay of the currently transmitted cell (in determining C^H(i, j) or C^L(i, j)) or count the number of cells transmitted over the current slot which experienced a delay of less than or equal to h or l slots (in determining B_h^H(i, j) or B_l^L(i, j)). These functions—denoted by F{i, j}^H (F{i, j}^L) and G_h^H(i, j) (G_l^L(i, j)), respectively—are given by the following:
F{i, j}^H = j if an H-cell is served, 0 otherwise.
F{i, j}^L = i + j if an L-cell is served, 0 otherwise.
G_h^H(i, j) = 1 if an H-cell is served and j ≤ h, 0 otherwise.
G_l^L(i, j) = 1 if an L-cell is served and i + j ≤ l, 0 otherwise.
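Expressed as code, these counting functions are straightforward. In the illustrative Python sketch below, served is assumed to take the value "H", "L", or None for the current slot; this representation is an assumption of the sketch.

def f_h(i, j, served):        # total delay contributed by a transmitted H-cell
    return j if served == "H" else 0

def f_l(i, j, served):        # total delay contributed by a transmitted L-cell
    return i + j if served == "L" else 0

def g_h(i, j, served, h):     # counts an H-cell transmitted with delay <= h
    return 1 if served == "H" and j <= h else 0

def g_l(i, j, served, l):     # counts an L-cell transmitted with delay <= l
    return 1 if served == "L" and i + j <= l else 0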


[0079]
FIG. 5 shows a server (computer) 30 on which process 10 may be executed. Server 30 includes a processor 32, a memory 34, and a storage medium 36 (e.g., a hard disk) (see view 38). Storage medium 36 stores machine-executable instructions 40, which are executed by processor 32 out of memory 34 to perform process 10 on incoming data units, such as ATM cells, to serve them to a network.


[0080] Process 10, however, is not limited to use with the hardware and software of FIG. 5; it may find applicability in any computing or processing environment. Process 10 may be implemented in hardware, software, or a combination of the two. Process 10 may be implemented in one or more computer programs executing on programmable computers or other machines that each include a processor and a storage medium readable by the processor (including volatile and non-volatile memory and/or storage elements).


[0081] Each such program may be implemented in a high level procedural or object-oriented programming language to communicate with a computer system. However, the programs can be implemented in assembly or machine language. The language may be a compiled or an interpreted language.


[0082] Each computer program may be stored on an article of manufacture, such as a storage medium or device (e.g., CD-ROM (compact disc read-only memory), hard disk, or magnetic diskette), that is readable by a general or special purpose programmable machine for configuring and operating the machine when the storage medium or device is read by the machine to perform process 10. Process 10 may also be implemented as a machine-readable storage medium, configured with a computer program, where, upon execution, instructions in the program cause the machine to operate in accordance with process 10.


[0083] The invention is not limited to the specific embodiments described herein. For example, the invention can be used with multiple applications, not just the two L and H applications described above. The invention is not limited to use with ATM cells or to use with ATM networks. Any type of data unit or data packet may be used. The invention is not limited to use with the hardware and software described herein or to use in a B-ISDN context, but rather may be applied to any type of network. The invention is particularly applicable to real-time applications, such as voice and video interactive communications; however, it may be used with any type of computer application.


[0084] Other embodiments not specifically described herein are also within the scope of the following claims.


Claims
  • 1. A method of serving data units on a network, comprising: queuing data units from a first application in a first buffer; queuing data units from a second application in a second buffer; moving data units from the second buffer to the first buffer following a predetermined delay; and serving data units from the first buffer.
  • 2. The method of claim 1, wherein the data units from the first application have a higher priority for transmission on the network than the data units from the second application.
  • 3. The method of claim 1, wherein the data units from the first application and the data units from the second application are served from the first buffer on a first-come-first-served basis.
  • 4. The method of claim 1, further comprising: discarding data units from the first application that exceed a first time delay; and discarding data units from the second application that exceed a second time delay.
  • 5. The method of claim 4, wherein the second time delay exceeds the predetermined time delay.
  • 6. The method of claim 1, wherein the data units from the second buffer are moved to an end of the first buffer after data units from the first application.
  • 7. The method of claim 1, further comprising: determining a time to move the data units from the second buffer to the first buffer.
  • 8. The method of claim 7, wherein a circular buffer, a pointer and a timer are used to determine the time to move the data units from the second buffer to the first buffer.
  • 9. The method of claim 1, further comprising: serving data units from the second buffer when the first buffer is empty.
  • 10. The method of claim 1, wherein the data units comprise Asynchronous Transfer Mode (ATM) cells and the network comprises an ATM network.
  • 11. A computer program stored on a computer-readable medium for serving data units on a network, the computer program comprising instructions to: queue data units from a first application in a first buffer; queue data units from a second application in a second buffer; move data units from the second buffer to the first buffer following a predetermined delay; and serve data units from the first buffer.
  • 12. The computer program of claim 11, wherein the data units from the first application have a higher priority for transmission on the network than the data units from the second application.
  • 13. The computer program of claim 11, wherein the data units from the first application and the data units from the second application are served from the first buffer on a first-come-first-served basis.
  • 14. The computer program of claim 11, further comprising instructions to: discard data units from the first application that exceed a first time delay; and discard data units from the second application that exceed a second time delay.
  • 15. The computer program of claim 14, wherein the second time delay exceeds the predetermined time delay.
  • 16. The computer program of claim 11, wherein the data units from the second buffer are moved to an end of the first buffer after data units from the first application.
  • 17. The computer program of claim 11, further comprising instructions to: determine a time to move the data units from the second buffer to the first buffer.
  • 18. The computer program of claim 17, wherein a circular buffer, a pointer and a timer are used to determine the time to move the data units from the second buffer to the first buffer.
  • 19. The computer program of claim 11, further comprising instructions to: serve data units from the second buffer when the first buffer is empty.
  • 20. The computer program of claim 11, wherein the data units comprise Asynchronous Transfer Mode (ATM) cells and the network comprises an ATM network.
  • 21. An apparatus for serving data units on a network, comprising: a first buffer to queue data units from a first application; a second buffer to queue data units from a second application; and a controller to (i) move data units from the second buffer to the first buffer following a predetermined delay, and (ii) serve data units from the first buffer.
  • 22. The apparatus of claim 21, wherein the data units from the first application have a higher priority for transmission on the network than the data units from the second application.
  • 23. The apparatus of claim 21, wherein the data units from the first application and the data units from the second application are served from the first buffer on a first-come-first-served basis.
  • 24. The apparatus of claim 21, wherein the controller discards data units from the first application that exceed a first time delay, and discards data units from the second application that exceed a second time delay.
  • 25. The apparatus of claim 24, wherein the second time delay exceeds the predetermined time delay.
  • 26. The apparatus of claim 21, wherein the data units from the second buffer are moved to an end of the first buffer after data units from the first application.
  • 27. The apparatus of claim 21, wherein the controller determines a time to move the data units from the second buffer to the first buffer.
  • 28. The apparatus of claim 27, wherein the controller uses a circular buffer, a pointer and a timer to determine the time to move the data units from the second buffer to the first buffer.
  • 29. The apparatus of claim 21, wherein the controller serves data units from the second buffer when the first buffer is empty.
  • 30. The apparatus of claim 21, wherein the data units comprise Asynchronous Transfer Mode (ATM) cells and the network comprises an ATM network.
CROSS-REFERENCE TO RELATED APPLICATION

[0001] This application claims priority to U.S. Provisional Patent Application No. 60/295,601, filed Jun. 4, 2001, entitled “Non-Copying Buffer Handling For Porting A Protocol Stack To Drivers”, the contents of which are hereby incorporated by reference into this application as if set forth herein in full.

Provisional Applications (1)
Number Date Country
60295601 Jun 2001 US