Packet Sending Method, Device, and System

Information

  • Patent Application
    20200336435
  • Publication Number
    20200336435
  • Date Filed
    June 30, 2020
  • Date Published
    October 22, 2020
Abstract
A packet sending method, implemented by a network device, comprises receiving a packet, identifying that a flow to which the packet belongs is delay-sensitive traffic for a reserved resource, where the reserved resource includes a quantity of packets that can be sent for the flow in one time window, and arranging, based on the quantity of packets that can be sent for the flow in one time window, and an accumulated quantity of packets that have been sent in the time window, the packet in a specific time window for sending.
Description
TECHNICAL FIELD

Embodiments of this application relate to the network transmission field, and in particular, to a packet sending method, device, and system.


BACKGROUND

A delay-sensitive network is usually a communications network applied in a special field such as industrial control. Usually, such a network has an upper limit requirement for an end-to-end delay of specific traffic from a transmit end to a receive end. If a time at which a packet arrives at a destination is later than a committed time, the packet may become invalid due to loss of timeliness.


Usually, for delay-sensitive traffic, a specific resource needs to be reserved for the traffic at nodes, ports, and the like on an end-to-end path in a network in which the traffic is located, to avoid unpredictable congestion in a transmission process of the traffic and an additional queuing delay.


In other approaches, a global clock synchronization method is currently used to send delay-sensitive traffic. In the method, first, a strict synchronization requirement is imposed on clocks of all nodes in an entire network, and second, a unified time window pattern is maintained for the entire network. An ingress queue is statically configured for each port in each time window on all the nodes in the entire network. After receiving a packet, a network device adds the packet to a queue of a corresponding egress port based on a time window in which a global clock is currently located. The queue to which the packet is added is opened in a next time window, and the packet is scheduled and sent.


To meet an upper limit requirement for a delay of the delay-sensitive traffic, the foregoing method has two constraints during use. Constraint 1 is that a packet sent by an upstream network device in a time window N needs to be received in the time window N of a next network device. Constraint 2 is that a packet received by a current network device in a time window N needs to enter a queue in the time window N. In this way, in a process in which a packet is sent from a source end to a destination end, on each network device that the packet passes through, it can be ensured that a packet received by a network device in an Nth time window is definitely sent by the network device in an (N+1)th time window, and is received in an (N+1)th time window of a next node. Then a maximum value of an end-to-end delay of the packet is (K+1)×T, where K is a hop count of the packet in the network, and T is a globally unified time window width. In this way, a committed transmission delay is available.


According to the foregoing method, enqueue timing and dequeue timing of the packet are shown in FIG. 1. It can be learned from FIG. 1 that, because there may be a transmission delay between network devices and a packet processing delay within a network device, the packet sent by the source end undergoes a transmission delay and a packet processing delay before it arrives at a queue of the next network device, near the end of the time window shown in FIG. 1. Therefore, the source end needs to send the packet no later than a specific time point; otherwise, it cannot be ensured that the packet enters the queue in the same time window.


In other words, to ensure that the delay-sensitive traffic has a committed upper delay limit, the source end can use only a very small part of time in each time window to send the delay-sensitive traffic. For example, it is assumed that an entire-network unified time window is 30 microseconds (μs), a distance between network nodes is 1 km, a packet processing delay of a network node is 20 μs, and a link transmission rate is 10 Gigabits per second (Gbps). According to the foregoing method, a bandwidth available for the delay-sensitive traffic is only approximately 1.67 Gbps (a 1-kilometer (km) optical fiber has a 5-μs transmission delay, and the 5-μs transmission delay and the 20-μs packet processing delay are unavailable in the 30-μs time window; therefore, the available bandwidth is (30 μs−20 μs−5 μs)/30 μs×10 Gbps≈1.67 Gbps). It can be learned that, in other approaches, the bandwidth available for the delay-sensitive traffic is relatively low, and resource utilization is low.
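For illustration only (not part of the claimed method), the following Python sketch reproduces the background arithmetic above: the usable share of each time window, the resulting available bandwidth, and the (K+1)×T delay bound. The variable names and the 5-hop example are assumptions added for this sketch.

```python
# Illustrative arithmetic only; names and the 5-hop example are assumptions.
WINDOW_US = 30.0              # globally unified time window width T, in microseconds
PROCESSING_DELAY_US = 20.0    # per-node packet processing delay
FIBER_DELAY_US = 5.0          # ~5 us propagation delay for 1 km of fiber
LINK_RATE_GBPS = 10.0

# Share of each window the source can actually use, and the resulting bandwidth.
usable_us = WINDOW_US - PROCESSING_DELAY_US - FIBER_DELAY_US
available_gbps = usable_us / WINDOW_US * LINK_RATE_GBPS
print(f"available bandwidth ~= {available_gbps:.2f} Gbps")    # ~1.67 Gbps

# Worst-case end-to-end delay bound of this approach: (K + 1) x T.
K_HOPS = 5                    # assumed hop count, for illustration
print(f"delay bound = {(K_HOPS + 1) * WINDOW_US:.0f} us")     # 180 us
```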


SUMMARY

Embodiments of this application provide a packet sending method, device, and system, and a storage medium, to increase bandwidth used for delay-sensitive traffic and improve bandwidth utilization.


According to a first aspect, an embodiment of this application provides a packet sending method. The method is applied to a network device in a transmission system. In the method, the network device receives a packet, and identifies that a flow to which the packet belongs is delay-sensitive traffic for a reserved resource, where the reserved resource includes a quantity of packets that can be sent for the flow in one time window. Then the network device arranges, based on the quantity of packets that can be sent for the flow in one time window and an accumulated packet sending status in the time window (for example, a quantity of packets already in a queue used to send the flow), the packet in a specific time window for sending.


In this embodiment of this application, an output time window of a packet is dynamically determined based on real-time information (for example, an accumulated packet sending status in one time window), but not determined fully based on a static configuration. In this manner, flexibility of a packet sending process can be increased. A transmission device only needs to ensure that a quantity of packets sent in each time window meets a quantity of packets that can be sent in one time window, but does not need to constrain a sending time of each packet. In other words, after a quantity of packets sent in one time window reaches a quantity of packets that can be sent, a next packet is arranged by the network device in a next time window for sending. Therefore, in this manner, a packet sent by an upstream device at any time point in one time window may be arranged by the network device in a proper time window for sending. This avoids a constraint on a time at which the upstream device sends a packet, increases available bandwidth for delay-sensitive traffic, and reduces resource waste of bandwidth.


In a possible implementation, queue resource reservation information and traffic resource reservation information used to send the flow are preconfigured in the network device. The queue resource reservation information includes a queue used to send the flow, and enqueue timing and output timing of the queue, where the enqueue timing is used to define an ingress queue of each time window, and the output timing is used to define an open/closed state of each queue in each time window. The traffic resource reservation information records a current ingress queue of the flow, and a packet count used to represent a quantity of packets in the current ingress queue. The accumulated packet sending status in the time window is the quantity of packets in the current ingress queue.


Correspondingly, that the network device arranges, based on the quantity of packets that can be sent for the flow in one time window and an accumulated quantity of packets that have been sent in the time window, the packet in a specific time window for sending includes determining, by the network device, an arrival time window of the packet, where the arrival time window is a time window of the packet at an egress port of the network device when the packet arrives at the network device, querying for the quantity of packets in the current ingress queue, determining an ingress queue of the flow based on the arrival time window, the enqueue timing of the queue, the quantity of packets that can be sent for the flow in one time window, and the quantity of packets in the current ingress queue, adding the packet to the determined ingress queue of the flow, and opening, in a time window that is defined in the output timing and that is for opening a queue in which the packet is located, the queue in which the packet is located, and sending the packet.


In this embodiment, packets can be sent in each time window based on a required quantity. Therefore, regarding a result, delay-sensitive traffic still has a committed end-to-end delay. In other words, in this embodiment, a total delay of a packet in a transmission process can be controlled by defining the time window-based enqueue timing and output timing, without a need of strictly controlling a delay in each network device by constraining a sending time. This can ensure that delay-sensitive traffic has a committed end-to-end delay, and can also increase available bandwidth for the delay-sensitive traffic and reduce resource waste of bandwidth.


In a possible implementation, the determining, by the network device, an ingress queue of the flow based on the arrival time window, the enqueue timing of the queue, the quantity of packets that can be sent for the flow in one time window, and the quantity of packets in the current ingress queue further includes, if the quantity of packets in the current ingress queue has reached the quantity of packets that can be sent in one time window, determining, by the network device, an ingress queue of a next time window of the arrival time window as the ingress queue of the flow, or if the quantity of packets in the current ingress queue has not reached the quantity of packets that can be sent in one time window, determining an ingress queue of the arrival time window as the ingress queue of the flow.
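A minimal Python sketch of the selection rule described above, assuming an enqueue-timing table that maps a time-window index to its ingress queue; the function and parameter names are illustrative, not part of the claims.

```python
def select_ingress_queue(arrival_window: int,
                         enqueue_timing: dict,
                         packets_in_current_queue: int,
                         packets_per_window: int) -> int:
    """Return the queue a newly arrived packet of the flow should enter.

    enqueue_timing maps a time-window index to the queue that is the
    ingress queue of that window (the "enqueue timing" described above).
    """
    if packets_in_current_queue >= packets_per_window:
        # The per-window quota is used up: enqueue for the next time window.
        return enqueue_timing[arrival_window + 1]
    # Quota not yet reached: enqueue for the arrival window itself.
    return enqueue_timing[arrival_window]
```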


In this embodiment, an ingress queue is dynamically switched, and after a quantity of packets sent in one time window reaches a quantity of packets that can be sent, a next packet is arranged in a next time window for sending, thereby increasing flexibility. This increases available bandwidth for delay-sensitive traffic, and can also ensure that packets sent in a time window meet a requirement for a quantity of packets that can be sent in the time window, and meet a traffic characteristic requirement of the delay-sensitive traffic.


In a possible implementation, in the enqueue timing, a time window further has an alternative ingress queue, and an alternative ingress queue of a time window is an ingress queue of a next time window.


Correspondingly, the determining, by the network device, an ingress queue of the flow based on the arrival time window, the enqueue timing of the queue, the quantity of packets that can be sent for the flow in one time window, and the quantity of packets in the current ingress queue further includes, if the quantity of packets in the current ingress queue has reached the quantity of packets that can be sent in one time window, determining, by the network device, an alternative ingress queue of the arrival time window as the ingress queue of the flow, or if the quantity of packets in the current ingress queue has not reached the quantity of packets that can be sent in one time window, determining an ingress queue of the arrival time window as the ingress queue of the flow.


In this embodiment, the alternative ingress queue is set, and an ingress queue of a next time window of the arrival time window does not need to be queried for in a packet sending process, thereby improving processing efficiency.


In a possible implementation, in the output timing, an ingress queue of an Mth time window is in an open state in an (M+1)th time window, and is in a closed state in another time window, where M is an integer greater than or equal to 1.


In this embodiment, the output timing is controlled to ensure that a packet received by the network device in a local Nth time window is sent in a local (N+1)th (or (N+2)th) time window. This can ensure that a maximum value of an end-to-end delay is ((End-to-end hop count + 1) × (Time window size)) + (Time window boundary difference on the path). Therefore, in this embodiment of this application, available bandwidth for delay-sensitive traffic can be increased, and an end-to-end delay of delay-sensitive traffic can also be better ensured.


In a possible implementation, the method further includes, after the ingress queue of the flow is determined, updating, by the network device based on the determined ingress queue, the ingress queue recorded in the traffic resource reservation information, and restoring, by the network device, the packet count recorded in the traffic resource reservation information to an initial value each time the network device updates the recorded ingress queue, and performing accumulation on the packet count each time the network device adds a packet to the updated ingress queue.


In this embodiment, dynamic information such as the packet count is recorded, and directly applied to a packet sending process. This can reduce calculation in the packet sending process, and improve processing efficiency.


In a possible implementation, the network device reserves a resource for the flow in advance, and the traffic resource reservation information is configured in the resource reservation process.


In this embodiment, a delay of delay-sensitive traffic in a transmission process can be reduced by reserving the resource.


In a possible implementation, queue resource reservation information and traffic resource reservation information used to send the flow are preconfigured in the network device. The queue resource reservation information includes a queue in a one-to-one correspondence with the flow, and a dequeue gating configured for the queue, where the dequeue gating is used to control a quantity of packets sent in each time window. The traffic resource reservation information includes the quantity of packets that can be sent for the flow in one time window.


Correspondingly, that the network device arranges, based on the quantity of packets that can be sent for the flow in one time window and an accumulated packet sending status in the time window, the packet in a specific time window for sending further includes adding, by the network device, the packet to a queue corresponding to the flow to which the packet belongs, and extracting, based on the dequeue gating, the packet from the queue corresponding to the flow, and sending the packet, where the dequeue gating is updated based on a time window, and an initial value of the dequeue gating in each time window is a quantity of packets that can be sent for the flow corresponding to the queue in one time window, and decreases progressively based on a quantity of packets sent in each time window.


In this embodiment, a time window is not limited during enqueue, but packet sending in each time window is controlled in a dequeue process. Therefore, no requirement is imposed on a time at which a packet is received, and no constraint is imposed on a sending time of an upstream device. The upstream device can send delay-sensitive traffic nearly in an entire time window, thereby increasing available bandwidth for delay-sensitive traffic and reducing resource waste of bandwidth.


In a possible implementation, the network device monitors a time window update, and each time a time window is updated, obtains, from the traffic resource reservation information, the quantity of packets that can be sent for the flow in one time window, and updates the dequeue gating based on the quantity of packets that can be sent for the flow in one time window.


In this embodiment, the dequeue gating is updated based on a time window and the quantity of packets that can be sent for the flow in one time window. This ensures that packets sent in each time window meet a requirement for a quantity of packets that can be sent in the time window, and meet a traffic characteristic requirement of the delay-sensitive traffic.


In a possible implementation, the dequeue gating is implemented using a token bucket, and the extracting, based on the dequeue gating, the packet from the queue corresponding to the flow, and sending the packet further includes checking, by the network device in real time, a packet in the queue corresponding to the flow and a token in the token bucket, and if there is a packet in the queue corresponding to the flow and there is a token in the token bucket, extracting and sending the packet, until there is no token in the token bucket or there is no packet in the queue corresponding to the flow.
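The following Python fragment is a sketch, under stated assumptions (one per-flow queue, a token bucket refilled once per time window to the per-window packet quota, and invented names), of how such dequeue gating could behave; it is not the normative implementation.

```python
from collections import deque

class GatedQueue:
    """Per-flow queue whose dequeue is gated by a token bucket (a sketch).

    The bucket is refilled to `packets_per_window` at every time-window
    boundary; each transmitted packet consumes one token.  Names and the
    exact refill policy are illustrative assumptions.
    """

    def __init__(self, packets_per_window: int):
        self.packets_per_window = packets_per_window
        self.tokens = packets_per_window
        self.queue = deque()

    def on_window_boundary(self) -> None:
        # The dequeue gating is reset once per time window.
        self.tokens = self.packets_per_window

    def enqueue(self, packet) -> None:
        # Enqueue is not constrained by the time window in this solution.
        self.queue.append(packet)

    def dequeue_ready(self) -> list:
        # Send while there is both a packet and a token, then stop.
        sent = []
        while self.queue and self.tokens > 0:
            sent.append(self.queue.popleft())
            self.tokens -= 1
        return sent
```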


In this embodiment, a quantity of packets sent in each time window can be ensured. This can ensure that an end-to-end delay has a committed upper limit. Therefore, in this embodiment of this application, available bandwidth for delay-sensitive traffic can be increased, and it can also be ensured that the delay-sensitive traffic has a committed end-to-end delay.


In a possible implementation, the network device reserves the resource for the flow in advance, and the traffic resource reservation information and the queue resource reservation information are configured in the resource reservation process.


In this embodiment, a queue resource is allocated based on a flow, but not allocated based on a time window. Therefore, time windows in an entire network do not need to be aligned. This solution may be deployed on a device not supporting time alignment, thereby extending applicability.


According to a second aspect, an embodiment of this application provides a packet sending management method. The method is applied to a network device that can transmit delay-sensitive traffic. In the method, the network device receives a packet, and identifies that a flow to which the packet belongs is delay-sensitive traffic for a reserved resource. The network device obtains an arrival time window of the packet and traffic resource reservation information of the flow. The traffic resource reservation information records a quantity of packets that can be sent for the flow in one time window, and a quantity of packets in a current ingress queue. The network device determines an ingress queue of the flow based on the arrival time window, the quantity of packets that can be sent for the flow in one time window, and the quantity of packets in the current ingress queue, adds the packet to the ingress queue, and sends the packet through queue scheduling.


In this embodiment of this application, the network device determines the ingress queue of the flow based on the arrival time window of the packet, the quantity of packets that can be sent for the flow in one time window, and the quantity of packets in the current ingress queue, dynamically adds packets to different queues, and performs scheduling and output. In this manner, flexibility of a packet sending process can be increased. A transmission device needs only to ensure that a quantity of packets sent in each time window meets a quantity of packets that can be sent in one time window, but does not need to constrain a sending time of each packet. In other words, after a quantity of packets sent in one time window reaches a quantity of packets that can be sent, a next packet is arranged by the network device in a next time window for sending. Therefore, in this manner, a packet sent by an upstream device at any time point in one time window may be arranged by the network device in a proper time window for sending. This avoids a constraint on a time at which the upstream device sends a packet, increases available bandwidth for delay-sensitive traffic, and reduces resource waste of bandwidth.


In a possible implementation, queue resource reservation information and traffic resource reservation information used to send the flow are preconfigured in the network device. The queue resource reservation information includes the queue used to send the flow and enqueue timing of the queue, where the enqueue timing is used to define an ingress queue of each time window. That the network device determines an ingress queue of the flow based on the arrival time window, the quantity of packets that can be sent for the flow in one time window, and the quantity of packets in the current ingress queue includes, if the quantity of packets in the current ingress queue has reached the quantity of packets that can be sent in one time window, determining, by the network device, an ingress queue, in the enqueue timing, of a next time window of the arrival time window as the ingress queue of the flow, or if the quantity of packets in the current ingress queue has not reached the quantity of packets that can be sent in one time window, determining an ingress queue, in the enqueue timing, of the arrival time window as the ingress queue of the flow.


In a possible implementation, the queue resource reservation information further includes output timing, where the output timing is used to define an open/closed state of each queue in each time window. The sending the packet through queue scheduling includes opening, by the network device in a time window that is defined in the output timing and that is for opening a queue in which the packet is located, the queue in which the packet is located, and sending the packet.


According to a third aspect, an embodiment of this application provides a network device. The network device includes a receiving module configured to receive a packet, and a processing module configured to identify that a flow to which the packet belongs is delay-sensitive traffic for a reserved resource, where the reserved resource includes a quantity of packets that can be sent for the flow in one time window. Then the processing module arranges, based on the quantity of packets that can be sent for the flow in one time window, and an accumulated quantity of packets that have been sent in the time window, the packet in a specific time window for sending.


In this embodiment of this application, an output time window of a packet is dynamically determined based on real-time information (for example, an accumulated packet sending status in one time window), but not determined fully based on a static configuration. In this manner, flexibility of a packet sending process can be increased. A transmission device needs only to ensure that a quantity of packets sent in each time window meets a quantity of packets that can be sent in one time window, but does not need to constrain a sending time of each packet. In other words, after a quantity of packets sent in one time window reaches a quantity of packets that can be sent, a next packet is arranged by the network device in a next time window for sending. Therefore, in this manner, a packet sent by an upstream device at any time point in one time window may be arranged by the network device in a proper time window for sending. This avoids a constraint on a time at which the upstream device sends a packet, increases available bandwidth for delay-sensitive traffic, and reduces resource waste of bandwidth.


In a possible implementation, the network device further includes a first storage module. The first storage module is configured to store preconfigured queue resource reservation information and traffic resource reservation information that are used to send the flow. The queue resource reservation information includes the queue used to send the flow, and enqueue timing and output timing of the queue, where the enqueue timing is used to define an ingress queue of each time window, and the output timing is used to define an open/closed state of each queue in each time window. The traffic resource reservation information records a current ingress queue of the flow, and a packet count used to represent a quantity of packets in the current ingress queue. The accumulated packet sending status in the time window is the quantity of packets in the current ingress queue.


Correspondingly, that the processing module arranges, based on the quantity of packets that can be sent for the flow in one time window and an accumulated quantity of packets that have been sent in the time window, the packet in a specific time window for sending further includes determining, by the processing module, an arrival time window of the packet, where the arrival time window is a time window of the packet at an egress port of the network device when the packet arrives at the network device, querying for the quantity of packets in the current ingress queue, then determining, by the processing module, an ingress queue of the flow based on the arrival time window, the enqueue timing of the queue, the quantity of packets that can be sent for the flow in one time window, and the quantity of packets in the current ingress queue, and adding the packet to the determined ingress queue of the flow, and then opening, in a time window that is defined in the output timing and that is for opening a queue in which the packet is located, the queue in which the packet is located, and sending the packet.


In this embodiment, packets can be sent in each time window based on a required quantity. Therefore, regarding a result, delay-sensitive traffic still has a committed end-to-end delay. In other words, in this embodiment, a total delay of a packet in a transmission process can be controlled by defining the time window-based enqueue timing and output timing, without a need of strictly controlling a delay in each network device by constraining a sending time. This can ensure that delay-sensitive traffic has a committed end-to-end delay, and can also increase available bandwidth for the delay-sensitive traffic and reduce resource waste of bandwidth.


In a possible implementation, the determining, by the processing module, an ingress queue of the flow based on the arrival time window, the enqueue timing of the queue, the quantity of packets that can be sent for the flow in one time window, and the quantity of packets in the current ingress queue further includes, if the quantity of packets in the current ingress queue has reached the quantity of packets that can be sent in one time window, determining, by the processing module, an ingress queue of a next time window of the arrival time window as the ingress queue of the flow, or if the quantity of packets in the current ingress queue has not reached the quantity of packets that can be sent in one time window, determining an ingress queue of the arrival time window as the ingress queue of the flow.


In this embodiment, an ingress queue is dynamically switched, and after a quantity of packets sent in one time window reaches a quantity of packets that can be sent, a next packet is arranged in a next time window for sending, thereby increasing flexibility. This increases available bandwidth for delay-sensitive traffic, and can also ensure that packets sent in a time window meet a requirement for a quantity of packets that can be sent in the time window, and meet a traffic characteristic requirement of the delay-sensitive traffic.


In a possible implementation, the network device further includes a resource reservation module. The resource reservation module is configured to reserve a resource for the flow in advance, and the traffic resource reservation information is configured in the resource reservation process.


In a possible implementation, the network device further includes a second storage module. The second storage module is configured to store preconfigured queue resource reservation information and traffic resource reservation information that are used to send the flow. The queue resource reservation information includes a queue in a one-to-one correspondence with the flow, and a dequeue gating configured for the queue, where the dequeue gating is used to control a quantity of packets sent in each time window. The traffic resource reservation information includes the quantity of packets that can be sent for the flow in one time window.


Correspondingly, that the processing module arranges, based on the quantity of packets that can be sent for the flow in one time window and an accumulated packet sending status in the time window, the packet in a specific time window for sending further includes adding, by the processing module, the packet to a queue corresponding to the flow to which the packet belongs, and extracting, based on the dequeue gating, the packet from the queue corresponding to the flow, and sending the packet, where the dequeue gating is updated based on a time window, and an initial value of the dequeue gating in each time window is a quantity of packets that can be sent for the flow corresponding to the queue in one time window, and decreases progressively based on a quantity of packets sent in each time window.


In this embodiment, a time window is not limited during enqueue, but packet sending in each time window is controlled in a dequeue process. Therefore, no requirement is imposed on a time at which a packet is received, and no constraint is imposed on a sending time of an upstream device. The upstream device can send delay-sensitive traffic nearly in an entire time window, thereby increasing available bandwidth for delay-sensitive traffic and reducing resource waste of bandwidth.


In a possible implementation, the network device further includes a second resource reservation module. The second resource reservation module is configured to reserve a resource for the flow in advance, and the traffic resource reservation information and the queue resource reservation information are configured in the resource reservation process.


In this embodiment, a queue resource is allocated based on a flow, but not allocated based on a time window. Therefore, time windows in an entire network do not need to be aligned. This solution may be deployed on a device not supporting time alignment, thereby extending applicability.


According to a fourth aspect, an embodiment of this application provides a packet sending system. The system includes a network control plane and at least one network device. After accepting a traffic application request sent by a source-end device, the network control plane sends a traffic application success notification to the at least one network device on a path on which the flow is located, where the notification includes information about the flow that applies for resource reservation. The network device is configured to perform resource reservation configuration for the flow based on the information about the flow in the notification, receive a packet, and identify that a flow to which the packet belongs is delay-sensitive traffic for a reserved resource, where the reserved resource includes a quantity of packets that can be sent for the flow in one time window. Then the network device arranges, based on the quantity of packets that can be sent for the flow in one time window, and an accumulated quantity of packets that have been sent in the time window, the packet in a specific time window for sending.


In this embodiment of this application, an output time window of a packet is dynamically determined based on real-time information (for example, an accumulated packet sending status in one time window), but not determined fully based on a static configuration. In this manner, flexibility of a packet sending process can be increased. A transmission device needs only to ensure that a quantity of packets sent in each time window meets a quantity of packets that can be sent in one time window, but does not need to constrain a sending time of each packet. In other words, after a quantity of packets sent in one time window reaches a quantity of packets that can be sent, a next packet is arranged by the network device in a next time window for sending. Therefore, in this manner, a packet sent by an upstream device at any time point in one time window may be arranged by the network device in a proper time window for sending. This avoids a constraint on a time at which the upstream device sends a packet, increases available bandwidth for delay-sensitive traffic, and reduces resource waste of bandwidth.


In a possible implementation, the network device is further configured to perform any method according to the possible implementations of the first aspect.


According to a fifth aspect, an embodiment of this application provides a network device. The network device includes a processor. The processor is coupled to a memory. When executing a program in the memory, the processor implements any method according to the first aspect or the possible implementations of the first aspect.


For an effect of implementations of the network device in this embodiment, refer to descriptions of a corresponding part in the foregoing and descriptions of a related part in the specification. Details are not described herein.


According to a sixth aspect, an embodiment of this application provides a computer-readable storage medium. The computer-readable storage medium stores program code. The program code is used to instruct to perform any method according to the first aspect or the possible implementations of the first aspect.


For an effect of implementations of the computer-readable storage medium, refer to descriptions of a corresponding part in the foregoing and descriptions of a related part in the specification. Details are not described herein.





BRIEF DESCRIPTION OF DRAWINGS


FIG. 1 is a schematic diagram of enqueue timing and dequeue timing of a packet according to an embodiment of this application.



FIG. 2 is a schematic architectural diagram of a packet transmission system according to an embodiment of this application.



FIG. 3 is a flowchart of a traffic resource reservation method according to an embodiment of this application.



FIG. 4 is a schematic diagram of a scheduling priority according to an embodiment of this application.



FIG. 5 is a flowchart of a method for sending a packet by a network device according to an embodiment of this application.



FIG. 6 is a flowchart of another method for sending a packet by a network device according to an embodiment of this application.



FIG. 7 is a flowchart of still another method for sending a packet by a network device according to an embodiment of this application.



FIG. 8 is a flowchart of packet dequeue according to an embodiment of this application.



FIG. 9 is a flowchart of a token bucket update method according to an embodiment of this application.



FIG. 10 is a schematic structural diagram of a network device according to an embodiment of this application.



FIG. 11 is a schematic structural diagram of another network device according to an embodiment of this application.



FIG. 12 is a schematic structural diagram of still another network device according to an embodiment of this application.





DESCRIPTION OF EMBODIMENTS

The following further describes the present disclosure in detail with reference to the accompanying drawings.


Embodiments of this application may be applied to a transmission system that can transmit delay-sensitive traffic. The transmission system may be located in a layer 2 switching network or a layer 3 switching network. FIG. 2 is a schematic structural diagram of a transmission system according to an embodiment of this application. The transmission system may include a source-end device 201, at least one network device 202, a destination device 203, and a network control plane 204.


The source-end device 201 may be a control host in an industrial control network scenario, an industrial sensor, a sensor in an Internet of Things scenario, or the like. The network device 202 may be a switching device, a router, or the like. The destination device 203 may be an executor (for example, a servo motor in an industrial control network scenario, or an information processing center in an Internet of Things scenario). The network control plane 204 may be a controller or a management server.


At an initialization stage of the transmission system, the network control plane 204 sets a uniform time window size (for example, 125 μs is set as one time window), and sends the specified time window size to all network devices 202 in the transmission system. It should be noted that the network control plane 204 may alternatively send the time window size only to a network device 202 that participates in transmission of delay-sensitive traffic. The network control plane 204 may determine, based on a network topology and according to an existing packet forwarding rule, the network device 202 that participates in transmission of delay-sensitive traffic. The network device 202 configures a boundary phase of a time window of each local port based on the time window size specified by the network control plane 204. Through the foregoing initialization, time windows of the network devices 202 in the transmission system are set to time windows that have a same size but whose boundaries may not be aligned. In a subsequent packet transmission process, a current time window may be obtained through calculation or table lookup based on the real time.


In the delay-sensitive network field, a time window is an important concept. The time window is a consecutive period of time. Usually, the network control plane 204 divides a time of an egress port of the network device into a plurality of time windows that periodically cycle, for example, "a time window 1, a time window 2, a time window 3, a time window 1, . . . ". Each time window has a specific data sending capability based on a link rate. For example, for a 10-Gigabit (Gbit) link and a 125-μs time window, 1250 kilobits (Kbit) of data (approximately 100 1.5-kilobyte (KB) packets) may be sent in one time window. Therefore, in a subsequent packet transmission process, packet enqueue and dequeue may be controlled based on a time window.
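For illustration only, the following short Python calculation reproduces the per-window sending capability mentioned above; the constant names are assumptions.

```python
# Per-window sending capability for the example above (assumed constants).
LINK_RATE_BPS = 10e9          # 10 Gbit/s link
WINDOW_S = 125e-6             # 125 us time window
MAX_PACKET_BITS = 1500 * 8    # one 1.5-KB packet

bits_per_window = LINK_RATE_BPS * WINDOW_S                     # 1_250_000 bits = 1250 Kbit
packets_per_window = int(bits_per_window // MAX_PACKET_BITS)   # 104, i.e. roughly 100 packets
```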


When the transmission system is applied in a scenario for transmitting delay-sensitive traffic, before a packet is transmitted, a traffic resource needs to be reserved for the delay-sensitive traffic in network devices on an end-to-end path on which the delay-sensitive traffic is to be transmitted in order to avoid unpredictable congestion of the delay-sensitive traffic and an additional queuing delay. The traffic resource reservation process is shown in FIG. 3.


S301. A source-end device sends a traffic application request to a network control plane.


Because an end-to-end delay of delay-sensitive traffic needs to be ensured, traffic application usually needs to be performed before the delay-sensitive traffic is sent. The traffic application is usually performed based on “several packets per time window” (usually referred to as a traffic characteristic) or a corresponding traffic rate.


The traffic application request carries information about a flow and a traffic characteristic that are required for the application. The information about the flow may include information (for example, a source address, a destination address, a destination port number, a differentiated services code point (DSCP), or a protocol) that can identify the flow.


S302. The network control plane determines, based on a remaining sending capability of a corresponding egress port in each network device on a path on which the flow is located, whether to accept the request.


The corresponding egress port in the network device is a port, in the network device, that is configured to send the flow. The network control plane may determine a transmission path of the flow based on a network topology of a transmission system and according to an existing packet forwarding rule. The transmission path includes a network device transmitting the flow, and an egress port in the network device.


If the traffic characteristic requested in the traffic application request is greater than a current remaining sending capability of any egress port on the path, the request fails, and the network control plane rejects the request and feeds back failure information to the source-end device. If the traffic characteristic requested in the traffic application request is less than or equal to the current remaining sending capability of every egress port on the path, the request succeeds, and the network control plane accepts the request, feeds back success information to the source-end device, and updates the sending capability of the egress port in each network device on the path. For example, for a network with a 125-μs time window and a 10-Gbps link, a flow 1 applies for sending 90 1.5-KB packets in each time window, and the application succeeds. After the application of the flow 1 is completed, a sending capability of approximately 10 packets remains in each time window. Then a flow 2 applies for sending 50 1.5-KB packets in a time window, and the request fails.


A sending capability of a port is a maximum quantity of packets that can be sent by the port in one time window. The sending capability may be obtained through calculation based on link bandwidth, a window size, and a maximum packet size.
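A hedged sketch of the admission decision in S302, with invented function and variable names: each egress port on the path must have enough remaining per-window capacity, and the example numbers from the text are replayed as a check.

```python
def admit_flow(requested_packets_per_window: int, remaining_per_port: list) -> bool:
    """Accept the request only if every egress port on the flow's path still
    has enough per-window sending capability (a sketch of S302)."""
    return all(requested_packets_per_window <= remaining
               for remaining in remaining_per_port)

# Replaying the example from the text: windows of ~100 packets on a 3-hop path.
path_capacity = [100, 100, 100]
assert admit_flow(90, path_capacity)                 # flow 1: 90 packets per window, accepted
path_capacity = [c - 90 for c in path_capacity]      # ~10 packets per window remain
assert not admit_flow(50, path_capacity)             # flow 2: 50 packets per window, rejected
```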


S303. After accepting the traffic application request, the network control plane sends a traffic application success notification to each network device on the path on which the flow is located.


The network control plane may send the notification using a control packet. The notification includes the information about the flow that applies for resource reservation and the traffic characteristic. The information about the flow and the traffic characteristic may be obtained from the traffic application request in step S301.


It should be noted that only one network device is shown in the figure, and the network control plane actually sends traffic application success notifications to all network devices on the path.


S304. Each network device that receives the notification performs traffic resource reservation configuration for the flow.


The traffic resource reservation configuration mainly includes the following steps.


1. Record the information about the flow. Further, the information about the flow may be updated to a flow table of the network device. In a subsequent packet transmission process, the network device may identify the flow based on the flow table. The flow may be identified by parsing a source or destination Internet Protocol (IP) address, a port number, a DSCP, a protocol, or the like of the flow.


2. Configure traffic resource reservation information for the flow, where the traffic resource reservation information includes the traffic characteristic (that is, a quantity of packets that can be sent for the flow in one time window).


After the foregoing resource reservation procedure is completed, the network device may transmit the flow based on the information configured in the resource reservation process.


In the packet transmission process in this embodiment of this application, for a packet of delay-sensitive traffic, the network device arranges, based on the quantity of packets that can be sent for the flow in one time window and an accumulated packet sending status in one time window, the packet in a specific time window for sending.


A difference from other approaches lies in the following. An output time window of a packet is dynamically determined based on real-time information (for example, an accumulated packet sending status in one time window), but not determined fully based on a static configuration. In this manner, flexibility of a packet sending process can be increased. A transmission device needs only to ensure that a quantity of packets sent in each time window meets a quantity of packets that can be sent in one time window, but does not need to constrain a sending time of each packet. In other words, after a quantity of packets sent in one time window reaches a quantity of packets that can be sent, a next packet is arranged by the network device in a next time window for sending. Therefore, according to this solution in which a sending time window of a packet is arranged based on a quantity of packets that can be sent in one time window and an accumulated packet sending status in one time window, a packet sent by an upstream device at any time point in one time window may be arranged by the network device in a proper time window for sending. This avoids a constraint on a time at which the upstream device sends a packet, and the upstream device can send delay-sensitive traffic nearly in an entire time window. In addition, a sending time window of a packet is determined based on a quantity of packets that can be sent in one time window and an accumulated packet sending status in one time window; therefore, it can be ensured that packets sent in each time window meet a requirement for a quantity of packets that can be sent in the time window, and meet a traffic characteristic requirement of delay-sensitive traffic. Moreover, packets can be sent in each time window based on a required quantity. Therefore, regarding a result, delay-sensitive traffic still has a committed end-to-end delay. Therefore, according to the dynamic arrangement mechanism used in this embodiment of this application, available bandwidth for delay-sensitive traffic can be increased, and resource waste of bandwidth can be reduced, on the premise that the delay-sensitive traffic has a committed end-to-end delay and that output traffic of all network devices in the network still meets the traffic characteristic.


The following describes in detail the method for transmitting a packet of delay-sensitive traffic in this embodiment of this application.


For ease of understanding, first, a queue mechanism of a network device is briefly described.


In the network device, a queue used to cache packets is set for each port. After entering the network device, a packet first enters a cache queue of an egress port, then leaves the queue according to a queue scheduling mechanism for sending. To better differentiate traffic levels, the network device usually uses different levels of queue mechanisms. In other words, a plurality of levels of cache queues may be set for one port in the network device. In dequeue scheduling, a high-priority queue is preferentially scheduled. FIG. 4 is a schematic diagram of a scheduling priority. In FIG. 4, a queue 1, a queue 2, and a queue 3 are highest-priority queues, and a queue 4 to a queue 8 are low-priority queues. If the queue 2, and the queue 4 to the queue 8 are all open, the network device preferentially schedules a packet in the queue 2 for sending, and then schedules a packet in the queue 4 to the queue 8. For a network device that can transmit delay-sensitive traffic, the network device reserves in advance, on an egress port used to send delay-sensitive traffic, queues used to send the delay-sensitive traffic. These queues are usually queues with highest priorities, such as the queue 1, the queue 2, and the queue 3 in FIG. 4.


According to the foregoing queue mechanism, a packet sending process of the network device usually includes two processes: an enqueue process of adding a received packet to a queue, and a dequeue process of scheduling the packet from the queue for sending.


For these two processes, this embodiment of this application proposes two packet sending optimization solutions. Solution 1 is mainly related to an improvement on the packet enqueue process. In this solution, packet enqueue may be controlled based on a quantity of packets that can be sent in one time window and an accumulated packet sending status in one time window. The network device may dynamically determine, based on the quantity of packets that can be sent in one time window and the accumulated packet sending status in one time window, an ingress queue of a flow to which a packet belongs, and update a preconfigured ingress queue of the flow.


Solution 2 is mainly related to an improvement on the packet dequeue process. In this solution, packet dequeue may be controlled based on a quantity of packets that can be sent in one time window and an accumulated packet sending status in one time window. The network device may set a dequeue gating based on the quantity of packets that can be sent in one time window, and control a packet dequeue quantity in one time window based on the dequeue gating.


The following describes in detail the two solutions in Embodiment 1 and Embodiment 2.


Embodiment 1

In this embodiment, a network device reserves at least three queues for delay-sensitive traffic in advance, and configures enqueue timing and output timing for these queues based on a time window. The enqueue timing is timing at which each queue becomes an ingress queue. The ingress queue is a queue that a packet can enter. The time window-based enqueue timing is used to define an ingress queue corresponding to each time window. The enqueue timing is used at a packet enqueue stage, and is used to determine a packet ingress queue in each time window. The output timing is timing for opening each queue at a dequeue scheduling stage. The time window-based output timing is used to define an open/closed state of each queue in each time window. The output timing is used at a packet dequeue stage, and is used to control opening/closing of each queue in each time window. The enqueue timing and the output timing may be stored using a table structure, or may be stored using another storage structure (such as an array), as shown in Table 1 and Table 2.











TABLE 1

  Time window sequence number    Ingress queue    Alternative ingress queue
  Time window KN + 1             Queue 2          Queue 3
  Time window KN + 2             Queue 3          Queue 4
  Time window KN + 3             Queue 4          Queue 5
  . . .                          . . .            . . .
  Time window KN + K − 1         Queue K          Queue 1
  Time window KN + K             Queue 1          Queue 2






















TABLE 2

  Time window    Queue 1    Queue 2    Queue 3    . . .    Queue K    Other queues
  KN + 1         open       closed     closed     . . .    closed     open
  KN + 2         closed     open       closed     . . .    closed     open
  KN + 3         closed     closed     open       . . .    closed     open
  . . .
  KN + K         closed     closed     closed     . . .    open       open









Table 1 stores the enqueue timing. Table 2 stores the output timing. In Table 1 and Table 2, N is an integer greater than or equal to 0, K is a quantity of queues for sending delay-sensitive traffic, and a value of K is an integer greater than or equal to 3.


In the enqueue timing shown in Table 1, each time window has an alternative ingress queue, in addition to an ingress queue. In this way, when a quantity of packets of a flow that are in an ingress queue of a time window has reached a quantity of packets that can be sent for the flow in one time window, a subsequent packet of the flow may be placed in the alternative ingress queue of the time window. An alternative ingress queue of a time window may be set as an ingress queue of a next time window. In this way, an extra packet in the time window may be placed in the ingress queue of the next time window.


In the output timing shown in Table 2, an ingress queue of an Mth time window is in an open state in an (M+1)th time window, and is in a closed state in another time window, where M is an integer greater than or equal to 1. In addition, the queue 1 to the queue K are high-priority queues used to send delay-sensitive traffic, and other queues are lower-priority queues used to send other traffic.
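Assuming that queue IDs and window offsets both run from 1 to K within one cycle, the following Python sketch builds data structures equivalent to Table 1 and Table 2; it only illustrates the pattern described above, and all names are invented.

```python
def build_timings(k: int):
    """Build the enqueue timing (Table 1) and output timing (Table 2) for K
    reserved delay-sensitive queues, K >= 3.  Queue IDs run from 1 to K and
    window offsets from 1 to K within one cycle; this is an illustrative
    sketch, not a normative data layout."""
    enqueue_timing = {}   # window offset -> (ingress queue, alternative ingress queue)
    output_timing = {}    # window offset -> the one reserved queue that is open
    for m in range(1, k + 1):
        ingress = m % k + 1            # KN+1 -> queue 2, ..., KN+K -> queue 1
        alternative = (m + 1) % k + 1  # ingress queue of the next window
        enqueue_timing[m] = (ingress, alternative)
        output_timing[m] = m           # queue m is open in window KN+m, closed otherwise
    return enqueue_timing, output_timing

# With k = 3: enqueue_timing == {1: (2, 3), 2: (3, 1), 3: (1, 2)}
#             output_timing  == {1: 1, 2: 2, 3: 3}
```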


Table 1 and Table 2 are merely examples. In an embodiment, alternatively, no alternative ingress queue may be configured in the enqueue timing. When a quantity of packets in an ingress queue of a time window reaches a quantity of packets that can be sent in one time window, an extra packet is directly placed in an ingress queue of a next time window.


It should be noted that different delay-sensitive traffic may share a same queue. When a queue is shared, enqueue timing and output timing of the queue are also shared.


The foregoing process of reserving the queues and configuring the enqueue timing and the output timing may be referred to as queue resource reservation. The queue resource reservation may be completed at any time before the traffic resource reservation shown in FIG. 3 (for example, may be completed at an initialization stage of a transmission system), or may be completed at a first traffic resource reservation stage at which the queue resource needs to be used.


In this embodiment, in the traffic resource reservation information configured in step S304, the network device further configures an ingress queue option used to record an ingress queue of a flow, and a packet count option used to count packets in the ingress queue. A packet count recorded in the packet count option may represent a quantity of packets in the ingress queue of the flow. An initial ingress queue of the flow that is configured in the ingress queue option is an ingress queue of a first time window in the enqueue timing. The ingress queue of the flow and the packet count of the ingress queue are flow state information, and may be updated depending on a real-time state in a packet sending process. The ingress queue option may be updated based on an ingress queue determined in the packet sending process. The traffic resource reservation information may be stored using a table structure, or may be stored using another storage structure (such as an array). A structure of the traffic resource reservation information is shown in Table 3.












TABLE 3

  Flow number    Packet count    Resource reservation information    Ingress queue
  Flow 1         1               One packet per time window          Queue 2
  Flow 2         1               One packet per time window          Queue 2









The network device performs packet transmission based on the information configured in the foregoing queue resource reservation and traffic resource reservation processes. FIG. 5 is a flowchart of a method for sending a packet by a network device according to Embodiment 1.



5a to 5e are a packet enqueue process, and 5f to 5h are a packet dequeue process.



5a. A network device receives a packet from an upstream device.


After receiving the packet, the network device determines an egress port of the packet. A method for determining the egress port may be implemented using other approaches. Details are not described herein.



5b. The network device identifies whether a flow to which the packet belongs is delay-sensitive traffic.


The network device may obtain, through parsing, information in a packet header, such as a source or destination IP, a port, a DSCP, or a protocol number, and searches, using the information obtained through parsing, flow information recorded in a resource reservation process, to identify whether the flow to which the packet belongs is delay-sensitive traffic. This embodiment is described using an example in which the flow to which the packet belongs is delay-sensitive traffic. For a packet of non-delay-sensitive traffic, the network device places the packet in another lower-priority queue. Enqueue and scheduling processes of such a packet are implemented using other approaches. Details are not described in this embodiment of this application.



5c. After identifying that the flow is delay-sensitive traffic, the network device determines an arrival time window of the packet.


The arrival time window is a time window of the packet at the egress port of the network device when the packet arrives at the network device.


In a specific implementation process, the network device may first obtain a current time at which the packet is received. The network device may obtain the current time based on a clock crystal oscillator in a transmission system, or may obtain the current time based on time information included in content of the packet.


The network device performs a calculation or table lookup based on the port number of the egress port determined in 5a and the current time, to obtain the current time window of the egress port. The calculation method or lookup table may be configured during initialization of the transmission system.
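A minimal sketch of such a calculation, assuming the port's configured window boundary phase, the window size, and the number of windows in one cycle are known; the function and parameter names are illustrative.

```python
def current_window(now_us: float, boundary_phase_us: float,
                   window_us: float, num_windows: int) -> int:
    """Map a timestamp to a time-window index for one egress port.

    Subtract the port's configured boundary phase, divide by the window
    size, and wrap around the cyclic window sequence.
    """
    return int((now_us - boundary_phase_us) // window_us) % num_windows
```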



5d. The network device determines an ingress queue of the flow based on the arrival time window of the packet, enqueue timing of a queue, a quantity of packets that can be sent in one time window for the flow to which the packet belongs, and a quantity of packets in a current ingress queue.


The network device may obtain, from a traffic characteristic recorded in traffic resource reservation information, the quantity of packets that can be sent for the flow in one time window, and may obtain the quantity of packets in the current ingress queue from a packet count in the traffic resource reservation information.


Specifically, if the quantity of packets in the current ingress queue has reached the quantity of packets that can be sent in one time window, the network device determines an ingress queue of a next time window of the arrival time window as the ingress queue of the flow, or if the quantity of packets in the current ingress queue has not reached the quantity of packets that can be sent in one time window, the network device determines an ingress queue of the arrival time window as the ingress queue of the flow.


The foregoing determining method is a processing manner in which no alternative ingress queue is set for a time window in the enqueue timing.
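A minimal sketch of the determining manner just described (no alternative ingress queue configured); select_ingress_queue and enqueue_timing are illustrative names, and enqueue_timing is assumed to map a time window index to its ingress queue.

```python
def select_ingress_queue(arrival_window: int,
                         enqueue_timing: dict,
                         packets_per_window: int,
                         packets_in_current_queue: int):
    """Pick the ingress queue of the flow when no alternative ingress queue is set."""
    if packets_in_current_queue >= packets_per_window:
        # The quota of the arrival window is used up: spill to the next window's ingress queue.
        return enqueue_timing[arrival_window + 1]
    # Otherwise the packet may still use the arrival window's own ingress queue.
    return enqueue_timing[arrival_window]

# Example: with one packet per window already counted, the next packet of the
# same arrival window is moved to the next window's queue.
timing = {2: "queue 2", 3: "queue 3"}
assert select_ingress_queue(2, timing, packets_per_window=1, packets_in_current_queue=1) == "queue 3"
```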


An implementation in which an alternative ingress queue is set for a time window in the enqueue timing is as follows.


When setting the enqueue timing, the network device sets an alternative ingress queue of a time window as an ingress queue of a next time window, as shown in Table 1. A process of determining the ingress queue of the flow further includes, if the quantity of packets in the current ingress queue has reached the quantity of packets that can be sent in one time window, determining, by the network device, an alternative ingress queue of the arrival time window as the ingress queue of the flow, or if the quantity of packets in the current ingress queue has not reached the quantity of packets that can be sent in one time window, determining an ingress queue of the arrival time window as the ingress queue of the flow.


After the ingress queue of the flow is determined, the network device may update, based on the determined ingress queue, the ingress queue of the flow that is recorded in the network device. Further, the update is performed only if the determined ingress queue has changed relative to the recorded ingress queue.


Each time the network device updates the ingress queue in the traffic resource reservation information, the network device restores the packet count recorded in the traffic resource reservation information to an initial value. The initial value may be set to 0, and may progressively increase subsequently. A value recorded in a progressively increasing manner is the quantity of packets in the current ingress queue. Alternatively, the initial value may be set to any value, and may progressively decrease subsequently. A difference between the initial value and a value recorded in a progressively decreasing manner is the quantity of packets in the current ingress queue.



5e. The network device adds the packet to the determined ingress queue of the flow.


After adding the packet to the determined ingress queue of the flow, the network device further accumulates the packet count recorded in the traffic resource reservation information. The accumulation herein may be performed in the progressively increasing manner or the progressively decreasing manner. When the initial value of the packet count is set to 0, the accumulation is performed in the progressively increasing manner, or when the initial value of the packet count is set to any value, the accumulation is performed in the progressively decreasing manner.



5f. The network device obtains a time window of an egress port.


For a method for obtaining the time window of the egress port by the network device, refer to step 5c. Details are not described herein again.



5g. The network device determines, based on output timing, a queue in an open state in a current time window.


The current time window herein is the time window obtained in step 5f. Assuming that the current time window is time window 2, the queues open in the output timing shown in Table 2 are queue 2 and another lower-priority queue.



5h. The network device schedules a packet in the queue in the open state, and sends the packet.


In a scheduling process, a packet in a high-priority queue among open queues is preferentially scheduled and sent. For priority-based scheduling, refer to FIG. 4. Details are not described herein again.
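As an illustration of the priority preference among open queues, a scheduler might simply walk the open queues from highest to lowest priority; the sketch below assumes each queue is a collections.deque and is not tied to any particular implementation of FIG. 4.

```python
from collections import deque

def pick_packet(open_queues: list):
    """Return a packet from the highest-priority non-empty open queue, or None."""
    for queue in open_queues:        # open_queues is ordered from high to low priority
        if queue:                    # skip empty queues
            return queue.popleft()
    return None

# Example: the delay-sensitive queue of the current window is served before
# the lower-priority queue.
high, low = deque(["p1"]), deque(["p2"])
assert pick_packet([high, low]) == "p1"
```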


According to the scheduling process from step 5f to step 5h, a queue in which the packet received in step 5a is located is opened in a time window that is defined in the output timing and that is for opening the queue, and then the packet is sent.


In the foregoing solution, the enqueue timing of the queue is based on a time window, and packet dequeue is also scheduled based on a time window. In other words, determining the ingress queue of the flow is equivalent to determining an output time window for the packet added to that ingress queue. Therefore, according to the foregoing solution in which the ingress queue of the flow is dynamically determined, each received packet may be arranged in a specific time window for sending.


In the foregoing embodiment, the enqueue timing and the output timing are defined, and the flow state information is recorded. Therefore, a packet of a time window may be flexibly adjusted to an ingress queue of a next time window. This removes the constraint that a statically configured ingress queue imposes on the sending time of an upstream device. The upstream device can send delay-sensitive traffic in nearly an entire time window, thereby increasing available bandwidth for delay-sensitive traffic and reducing bandwidth waste.


In addition, in the foregoing embodiment, an ingress queue of an Mth time window is opened for sending in an (M+1)th time window. Therefore, according to the foregoing solution, a packet received by the network device in a local Nth time window is sent in a local (N+1)th or (N+2)th time window. This can ensure that a maximum value of the end-to-end delay is (end-to-end hop count + 1) × (time window size) plus the time window boundary difference accumulated on the path. Therefore, in this embodiment of this application, available bandwidth for delay-sensitive traffic can be increased, and it can also be ensured that delay-sensitive traffic has a committed end-to-end delay. In addition, according to the solution provided in this embodiment of this application, time windows in an entire network do not need to be aligned. This solution may be deployed on a device not supporting time alignment, thereby extending applicability.
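Written compactly, with K denoting the end-to-end hop count, T the time window size, and Δ introduced here only as shorthand for the accumulated time window boundary difference on the path, the bound above is:

```latex
D_{\max} = (K + 1) \cdot T + \Delta
```

When Δ = 0, the expression reduces to (K + 1) · T.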


With reference to FIG. 6, the following describes in detail specific implementations of step 5d shown in FIG. 5.



FIG. 6 is a flowchart of a method for determining an ingress queue according to Embodiment 1. The method includes the following steps.


S601. The network device determines whether the arrival time window of the packet and an arrival time window of a previous packet are a same time window. If the two arrival time windows are the same time window, step S602 is performed, or if the two arrival time windows are not the same time window, step S606 is performed.


S602. The network device determines whether the packet count in the traffic resource reservation information is equal to the quantity of packets that can be sent in one time window. If the packet count is equal to the quantity of packets that can be sent, it is determined that the ingress queue of the flow is the alternative ingress queue of the arrival time window, and S603 to S605 are performed, or if the packet count is not equal to the quantity of packets that can be sent, it is determined that the ingress queue of the flow is the ingress queue of the arrival time window, and S604 and S605 are performed.


This embodiment is described using an example in which the initial value of the packet count in the traffic resource reservation information is 0 and the count is increased by 1 each time a packet is added. In this manner, the packet count in the traffic resource reservation information may be directly compared with the traffic characteristic of the flow (namely, the quantity of packets that can be sent for the flow in one time window).


This embodiment is described using an example in which an alternative ingress queue is configured in an enqueue timing table.


S603. The network device updates the ingress queue recorded in the traffic resource reservation information to an alternative ingress queue, in an enqueue timing table, of the arrival time window of the packet, and clears the packet count in the traffic resource reservation information.


S604. Add the packet to the ingress queue based on the ingress queue recorded in the traffic resource reservation information.


It should be noted that, in this embodiment of this application, there are two implementations of adding the packet to the ingress queue determined in S602. In manner 1, the ingress queue recorded in the traffic resource reservation information is updated first, and then enqueue is performed based on the recorded ingress queue; if no update is required (that is, the determined ingress queue remains unchanged relative to the recorded ingress queue), enqueue is performed directly based on the recorded ingress queue. In manner 2, enqueue is performed directly based on the determined ingress queue; in this manner, the operation of updating the ingress queue recorded in the traffic resource reservation information may be performed before or after the enqueue, but the enqueue and the update are performed in sequence.


This embodiment is described using the manner 1 as an example.


S605. The network device increases the packet count in the traffic resource reservation information by 1.


S606. The network device determines whether the arrival time window of the packet is a next time window of an arrival time window of a previous packet. If the arrival time window of the packet is the next time window of the arrival time window of the previous packet, S607 is performed, or if the arrival time window of the packet is not the next time window of the arrival time window of the previous packet, S608 is performed.


If the arrival time window of the packet is the next time window of the arrival time window of the previous packet, it indicates that the packet is the first packet in the arrival time window of the packet. Step S606 is a branch of step S601 when a determining result of step S601 is negative. Therefore, if the arrival time window of the packet is not the next time window of the arrival time window of the previous packet, the arrival time window of the packet is a time window after the next time window of the arrival time window of the previous packet. In this case, the packet is also the first packet in the arrival time window of the packet. In other words, provided that the arrival time window of the packet and the arrival time window of the previous packet are not the same time window, the packet is the first packet in the arrival time window of the packet.


S607. Determine whether the ingress queue in the traffic resource reservation information is the ingress queue, in the enqueue timing, of the arrival time window of the packet. If the ingress queue in the traffic resource reservation information is the ingress queue in the enqueue timing, step S602 is performed, or if the ingress queue in the traffic resource reservation information is not the ingress queue in the enqueue timing, step S608 is performed.


If the ingress queue in the traffic resource reservation information is the ingress queue, in the enqueue timing, of the arrival time window of the packet, it indicates that an alternative ingress queue is used in a previous time window of the arrival time window of the packet. Alternatively, if the ingress queue in the traffic resource reservation information is not the ingress queue, in the enqueue timing, of the arrival time window of the packet, it indicates that no alternative ingress queue is used in a previous time window of the arrival time window of the packet.


S608. The network device updates the ingress queue recorded in the traffic resource reservation information to the ingress queue corresponding to the arrival time window, clears the packet count in the traffic resource reservation information, and then continues to perform S604 and S605.
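A minimal Python sketch of the decision flow of FIG. 6 (S601 to S608), written under the assumptions that the alternative ingress queue of a window is the ingress queue of the next window, that the packet count starts at 0 and increases by 1 per packet (as in the example above), and that enqueue_timing maps every relevant window index to a queue; FlowState, enqueue_packet, and the other names are illustrative, not part of this application.

```python
class FlowState:
    """Per-flow state drawn from the traffic resource reservation information."""
    def __init__(self, packets_per_window: int):
        self.packets_per_window = packets_per_window  # reserved quota per time window
        self.ingress_queue = None                     # currently recorded ingress queue
        self.packet_count = 0                         # packets placed in that queue
        self.last_arrival_window = None               # arrival window of the previous packet

def enqueue_packet(state: FlowState, packet, arrival_window: int, enqueue_timing: dict, queues: dict):
    """Sketch of S601 to S608: pick the ingress queue for one packet and enqueue it."""
    ingress_of_arrival = enqueue_timing[arrival_window]   # ingress queue of the arrival window
    alternative = enqueue_timing[arrival_window + 1]      # alternative queue = next window's queue

    if arrival_window != state.last_arrival_window:                            # S601, negative branch
        next_of_previous = (state.last_arrival_window is not None and
                            arrival_window == state.last_arrival_window + 1)   # S606
        spilled_over = next_of_previous and state.ingress_queue == ingress_of_arrival  # S607
        if not spilled_over:
            # S608: restart bookkeeping from the arrival window's own ingress queue.
            state.ingress_queue = ingress_of_arrival
            state.packet_count = 0

    if state.packet_count == state.packets_per_window:     # S602: quota of the current queue used up
        state.ingress_queue = alternative                  # S603: switch to the alternative queue
        state.packet_count = 0

    queues[state.ingress_queue].append(packet)             # S604 (manner 1: record already updated)
    state.packet_count += 1                                # S605
    state.last_arrival_window = arrival_window
```

Calling enqueue_packet once per received packet reproduces the branching of S601 to S608 under these assumptions.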


Embodiment 2

In this embodiment, a network device also needs to perform queue resource reservation and traffic resource reservation.


A queue resource reservation manner in this embodiment is different from the queue resource reservation manner in Embodiment 1. In this embodiment, the network device configures a queue in a one-to-one correspondence with each piece of delay-sensitive traffic. In other words, in this embodiment, a queue is allocated based on a flow instead of a time window. Therefore, in this embodiment, there is no time window-based enqueue timing or output timing. In this embodiment, the network device further configures a dequeue gating for a queue of each flow. The dequeue gating is used to control a quantity of packets sent in each time window. Therefore, in this embodiment, queue resource reservation information includes a correspondence between the flow and the queue of the flow, and also includes the dequeue gating of the queue of each flow.
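As an illustration only, the per-flow queue and its dequeue gating of this embodiment could be represented as follows; the dictionary layout and the value of one packet per time window are assumptions made for the sketch.

```python
from collections import deque

# One queue per delay-sensitive flow plus its dequeue gating: the gating value
# limits how many packets of the flow may leave in the current time window.
queue_reservation = {
    "Flow 1": {"queue": deque(), "gating": 1},   # reservation: one packet per time window
    "Flow 2": {"queue": deque(), "gating": 1},
}
```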


For a traffic resource reservation process in this embodiment, refer to the embodiment shown in FIG. 3. Details are not described herein again.


In this embodiment, the queue resource reservation may be performed together with the traffic resource reservation, or may be performed before the traffic resource reservation.


The network device performs packet transmission based on the information configured in the foregoing queue resource reservation and traffic resource reservation processes. FIG. 7 is a flowchart of a method for sending a packet by a network device according to Embodiment 2.



Steps 7a to 7c are a packet enqueue process, and step 7d is a packet dequeue process.



7a. A network device receives a packet from an upstream device.



7b. The network device identifies whether a flow to which the packet belongs is delay-sensitive traffic.


For a specific implementation of this step, refer to step 5b in the embodiment shown in FIG. 5. Details are not described herein again.


This embodiment is also described using an example in which the flow to which the packet belongs is delay-sensitive traffic. For a packet of non-delay sensitive traffic, the network device places the packet in another lower-priority queue. Enqueue and scheduling processes of the packet are implemented using other approaches. Details are not described in this embodiment of this application.



7c. After identifying that the flow is delay-sensitive traffic, the network device adds the packet to a queue corresponding to the flow to which the packet belongs.



7d. The network device extracts, based on a dequeue gating, the packet from the queue corresponding to the flow, and sends the packet.


Specifically, the network device checks the queue of the flow and the dequeue gating in real time, and when the dequeue gating is not 0 and the queue is not empty, extracts the packet from the queue and sends the packet. In this embodiment of this application, the dequeue gating is updated based on a time window. The initial value of the dequeue gating in each time window is the quantity of packets that can be sent for the corresponding flow in one time window, and the dequeue gating progressively decreases as packets are sent in that time window.


In a specific implementation, the dequeue gating may be implemented using a token bucket. Updating the dequeue gating is to update a quantity of tokens in the token bucket.


Specifically, when the dequeue gating is implemented using the token bucket, an implementation process of step 7d is shown in FIG. 8.



FIG. 8 is a flowchart of scheduling packet dequeue. The process includes the following steps.


S801. The network device checks, in real time, the queue of each piece of delay-sensitive traffic and the token in its token bucket, and determines whether any queue meets the condition that the queue is not empty and its token bucket contains a token. If such a queue exists, step S802 is performed to extract and send a packet, or if no such queue exists, S803 is performed.


S802. The network device extracts the packet from the queue, sends the packet, and decreases a quantity of tokens in the token bucket by 1, and then the process returns to S801.


S803. The network device schedules a packet in a lower-priority queue, and sends the packet.


Step S803 may be implemented using other approaches. For example, a queue corresponding to non-delay sensitive traffic is scheduled, based on priority-based scheduling or polling-based scheduling, to perform dequeue and packet sending. Details are not described herein.
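A minimal sketch of one pass of the scheduling flow of FIG. 8 (S801 to S803), reusing the queue_reservation layout sketched above; send_packet and schedule_lower_priority stand in for the sending and lower-priority scheduling described in the text, and a real implementation would repeat this check continuously.

```python
def schedule_dequeue(queue_reservation: dict, send_packet, schedule_lower_priority):
    """One pass of the dequeue scheduling of FIG. 8 (S801 to S803)."""
    for entry in queue_reservation.values():
        queue, tokens = entry["queue"], entry["gating"]
        if queue and tokens > 0:             # S801: queue not empty and a token is available
            send_packet(queue.popleft())     # S802: extract and send the packet,
            entry["gating"] = tokens - 1     #        then consume one token
            return
    schedule_lower_priority()                # S803: no eligible delay-sensitive queue
```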


The following describes in detail a dequeue gating update process using an example in which the dequeue gating is a token bucket.



FIG. 9 is a flowchart of a token bucket update method. The method includes the following steps.


S901. The network device obtains a time window of an egress port used to transmit delay-sensitive traffic.


For a method for obtaining the time window of the egress port, refer to step 5c shown in FIG. 5. Details are not described herein again.


S902. The network device determines whether the time window is updated. If the time window is updated, S903 is performed, or if the time window is not updated, the process returns to S901.


Whether the time window is updated is determined relative to the time window that existed when the token bucket was last updated.


S903. The network device obtains a traffic characteristic (namely, a quantity of packets that can be sent for each flow in one time window) in traffic resource reservation information of each flow.


S904. The network device updates a quantity of tokens in a token bucket of a queue of each flow to the quantity of packets that can be sent for the flow in one time window. In this solution, the dequeue gating is updated based on the quantity of packets that can be sent for the flow in one time window, and a value of the dequeue gating implies an accumulated quantity of packets that have been sent in one time window. Therefore, in this embodiment, a packet is also arranged, based on the quantity of packets that can be sent for the flow in one time window and an accumulated packet sending status in one time window, in a specific time window for sending.
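A minimal sketch of the token bucket refresh of FIG. 9 (S901 to S904), reusing the current_time_window helper and the reservations mapping from the earlier sketches; refresh_gating and its parameters are assumptions made for illustration.

```python
def refresh_gating(queue_reservation: dict, reservations: dict,
                   last_window: int, now_ns: int, window_width_ns: int) -> int:
    """Token bucket refresh of FIG. 9 (S901 to S904); returns the observed window index."""
    window = current_time_window(now_ns, window_width_ns)            # S901
    if window != last_window:                                        # S902: the window has changed
        for flow, entry in queue_reservation.items():
            # S903/S904: refill the bucket to the flow's per-window packet quota.
            entry["gating"] = reservations[flow].packets_per_window
    return window
```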


In this solution, a time window is not limited during enqueue, but packet sending in each time window is controlled in a dequeue process. Therefore, no requirement is imposed on a time at which a packet is received, and no constraint is imposed on a sending time of an upstream device. The upstream device can send delay-sensitive traffic nearly in an entire time window, thereby increasing available bandwidth for delay-sensitive traffic and reducing resource waste of bandwidth.


In addition, in the foregoing embodiment, a quantity of packets sent in each time window can be ensured. This can ensure that an end-to-end delay has a committed upper limit. Therefore, in this embodiment of this application, available bandwidth for delay-sensitive traffic can be increased, and it can also be ensured that the delay-sensitive traffic has a committed end-to-end delay. In addition, according to the solution provided in this embodiment of this application, time windows in an entire network do not need to be aligned. This solution may be deployed on a device not supporting time alignment, thereby extending applicability.


The following describes a network device 1000 and a network device 1100 in the embodiments shown in FIG. 1 to FIG. 9 with reference to accompanying drawings. The network device 1000 is applied to the embodiments shown in FIG. 2 to FIG. 6. The network device 1100 is applied to the embodiments shown in FIG. 2 and FIG. 3 and the embodiments shown in FIG. 7 to FIG. 9. The following provides detailed descriptions.



FIG. 10 is a schematic structural diagram of a network device 1000 according to an embodiment of this application. The network device 1000 includes a receiving module 1002 and a processing module 1004.


The receiving module 1002 is configured to receive a packet. For a detailed processing function of the receiving module 1002 or a step that can be performed by the receiving module 1002, refer to the detailed descriptions of 5a in the embodiment shown in FIG. 5.


The processing module 1004 is configured to identify that a flow to which the packet belongs is delay-sensitive traffic for a reserved resource, where the reserved resource includes a quantity of packets that can be sent for the flow in one time window, and arrange, based on the quantity of packets that can be sent for the flow in one time window, and an accumulated quantity of packets that have been sent in the time window, the packet in a specific time window for sending. For a detailed processing function of the processing module 1004 or a step that can be performed by the processing module 1004, refer to the detailed descriptions of 5b to 5h in the embodiment shown in FIG. 5, and the detailed descriptions of S601 to S608 in FIG. 6.


In a specific embodiment, the network device further includes a first storage module 1006. The first storage module 1006 is configured to store preconfigured queue resource reservation information and traffic resource reservation information that are used to send the flow. The queue resource reservation information includes the queue used to send the flow, and enqueue timing and output timing of the queue, where the enqueue timing is used to define an ingress queue of each time window, and the output timing is used to define an open/closed state of each queue in each time window. The traffic resource reservation information records a current ingress queue of the flow, and a packet count used to represent a quantity of packets in the current ingress queue. The accumulated packet sending status in the time window is the quantity of packets in the current ingress queue.


Correspondingly, that the processing module arranges, based on the quantity of packets that can be sent for the flow in one time window and an accumulated quantity of packets that have been sent in the time window, the packet in a specific time window for sending further includes determining, by the processing module, an arrival time window of the packet, where the arrival time window is a time window of the packet at an egress port of the network device when the packet arrives at the network device, querying for the quantity of packets in the current ingress queue, determining an ingress queue of the flow based on the arrival time window, the enqueue timing of the queue, the quantity of packets that can be sent for the flow in one time window, and the quantity of packets in the current ingress queue, adding the packet to the determined ingress queue of the flow, and opening, in a time window that is defined in the output timing and that is for opening a queue in which the packet is located, the queue in which the packet is located, and sending the packet.


For detailed content about the first storage module 1006, refer to the detailed descriptions of step S304 in the embodiment shown in FIG. 3, and the detailed descriptions of Table 1 to Table 3 and corresponding text parts of the tables.


For a detailed processing function of the processing module 1004 or a step that can be performed by the processing module 1004, refer to the detailed descriptions of 5b to 5h in the embodiment shown in FIG. 5, and the detailed descriptions of S601 to S608 in FIG. 6.


In a specific embodiment, the network device further includes a first resource reservation module 1008. The first resource reservation module 1008 is configured to reserve a resource for the flow in advance, and the traffic resource reservation information is configured in the resource reservation process.


For a detailed processing function of the first resource reservation module 1008 or a step that can be performed by the first resource reservation module 1008, refer to the detailed descriptions of step S304 in the embodiment shown in FIG. 3, and the detailed descriptions of Table 1 to Table 3 and corresponding text parts of the tables.



FIG. 11 is a schematic structural diagram of a network device 1100 according to an embodiment of this application. The network device 1100 includes a receiving module 1102 and a processing module 1104.


The receiving module 1102 is configured to receive a packet. For a detailed processing function of the receiving module 1102 or a step that can be performed by the receiving module 1102, refer to the detailed descriptions of 7a in the embodiment shown in FIG. 7.


The processing module 1104 is configured to identify that a flow to which the packet belongs is delay-sensitive traffic for a reserved resource, where the reserved resource includes a quantity of packets that can be sent for the flow in one time window, and arrange, based on the quantity of packets that can be sent for the flow in one time window, and an accumulated quantity of packets that have been sent in the time window, the packet in a specific time window for sending.


For a detailed processing function of the processing module 1104 or a step that can be performed by the processing module 1104, refer to the detailed descriptions of 7b to 7d in the embodiment shown in FIG. 7, and the detailed descriptions of FIG. 8 and FIG. 9.


In a specific embodiment, the network device further includes a second storage module 1106. The second storage module 1106 is configured to store preconfigured queue resource reservation information and traffic resource reservation information that are used to send the flow. The queue resource reservation information includes a queue in a one-to-one correspondence with the flow, and a dequeue gating configured for the queue, where the dequeue gating is used to control a quantity of packets sent in each time window. The traffic resource reservation information includes the quantity of packets that can be sent for the flow in one time window.


Correspondingly, that the processing module 1104 arranges, based on the quantity of packets that can be sent for the flow in one time window and an accumulated packet sending status in the time window, the packet in a specific time window for sending further includes adding, by the processing module 1104, the packet to a queue corresponding to the flow to which the packet belongs, and extracting, based on the dequeue gating, the packet from the queue corresponding to the flow, and sending the packet, where the dequeue gating is updated based on a time window, and an initial value of the dequeue gating in each time window is a quantity of packets that can be sent for the flow corresponding to the queue in one time window, and decreases progressively based on a quantity of packets sent in each time window.


For detailed content about the second storage module 1106, refer to the detailed descriptions of step S304 in the embodiment shown in FIG. 3.


For a detailed processing function of the processing module 1104 or a step that can be performed by the processing module 1104, refer to the detailed descriptions of 7b to 7d in the embodiment shown in FIG. 7, and the detailed descriptions of FIG. 8 and FIG. 9.


In a specific embodiment, the network device further includes a second resource reservation module 1108. The second resource reservation module 1108 is configured to reserve a resource for the flow in advance, and the traffic resource reservation information and the queue resource reservation information are configured in the resource reservation process.


For a detailed processing function of the second resource reservation module 1108 or a step that can be performed by the second resource reservation module 1108, refer to the detailed descriptions of step S304 in the embodiment shown in FIG. 3, and the detailed descriptions in the embodiment shown in FIG. 9.


It should be understood that the described apparatus embodiments are merely examples. For example, the module division is merely logical function division and may be another division in an actual implementation. For example, a plurality of modules, units, or components may be combined, or a module may be further divided into different function modules. For example, functions of the first resource reservation module in the network device in the foregoing embodiments may alternatively be combined with the processing module into one module. In addition, it should be noted that the couplings or communication connections between the modules or devices that are shown or described in the figures may be indirect couplings or communication connections implemented using some interfaces, apparatuses, or units. Alternatively, the couplings or communication connections may be implemented in electrical, mechanical, or other forms.


The modules described as separate parts may be physically separated, or may be physically in a same physical part. A part named as a module may be a hardware unit, may be a software module or a logic unit, or may be a combination of hardware and software. The module may be located in one network element, or may be distributed on a plurality of network elements. Some or all of the units may be selected according to actual requirements to achieve the objectives of the solutions of the embodiments.



FIG. 12 is a possible schematic structural diagram of a network device 1200 according to an embodiment of this application. The network device 1200 may be applied to the embodiments shown in FIG. 2 to FIG. 9. In this embodiment, functions or operation steps of the network device are implemented by a general-purpose computer or one or more processors in a server by executing program code in a memory. In this implementation, the network device 1200 includes a transceiver 1210, a processor 1220, a random access memory 1240, a read-only memory 1250, and a bus 1260.


The processor 1220 is coupled to the transceiver 1210, the random access memory 1240, and the read-only memory 1250 using the bus 1260.


The processor 1220 may be a general-purpose central processing unit (CPU), a microprocessor, an application-specific integrated circuit (ASIC), or one or more integrated circuits configured to control execution of programs of the solutions of the present disclosure.


The bus 1260 may include a path that transmits information among the foregoing components.


The transceiver 1210 is configured to communicate with another device or a communications network, such as Ethernet, a radio access network (RAN), or a wireless local area network (WLAN). In this embodiment of the present disclosure, the transceiver 1210 may be configured to communicate with a network control plane, a source-end device, or another network device.


In a specific implementation, the random access memory 1240 may load application program code that implements the network device in the embodiments shown in FIG. 2 to FIG. 6, and the processor 1220 controls execution of the application program code.


In another specific implementation, the random access memory 1240 may load application program code that implements the network device in the embodiments shown in FIG. 2 and FIG. 3 and the embodiments shown in FIG. 7 to FIG. 9, and the processor 1220 controls execution of the application program code.


When the network device 1200 needs to run, a basic input/output system fixed in the read-only memory 1250, or a bootloader of an embedded system, is used to start the network device 1200 and guide the network device 1200 into a normal running state. After the network device 1200 enters the normal running state, the processor 1220 runs an application program and an operating system in the random access memory 1240 such that the network device 1200 may perform the functions and operations in the embodiments shown in FIG. 2 to FIG. 6, or the functions and operations in the embodiments shown in FIG. 2 and FIG. 3 and the embodiments shown in FIG. 7 to FIG. 9.


The transceiver 1210 interacts with the network control plane, the other network device, or the source-end device under control of the processor 1220. Internal processing of the network device 1200 is performed by the processor 1220.


It should be noted that, in addition to the foregoing conventional manner in which the processor executes program code instructions in the memory, a virtual network device may alternatively be implemented based on a physical server in combination with a network functions virtualization (NFV) technology. The virtual network device may be a virtual switch, a router, or another forwarding device. By reading this application, persons skilled in the art may virtualize, on a physical server in combination with the NFV technology, a plurality of network devices having the foregoing functions. Details are not described herein.


An embodiment of the present disclosure further provides a computer-readable storage medium configured to store a computer software instruction used by the foregoing network device. The computer software instruction includes a program for performing functions of the network device in the embodiments shown in FIG. 2 to FIG. 6.


An embodiment of the present disclosure further provides another computer-readable storage medium configured to store a computer software instruction used by the foregoing network device. The computer software instruction includes a program for performing functions of the network device in the embodiments shown in FIG. 2 and FIG. 3 and the embodiments shown in FIG. 7 to FIG. 9.


Although the present disclosure is described with reference to the embodiments, in a process of implementing the claimed disclosure, persons skilled in the art may understand and implement other variations of the disclosed embodiments by viewing the accompanying drawings, the disclosed content, and the appended claims. In the claims, "comprising" does not exclude another component or another step, and "a" or "one" does not exclude a plurality. A single processor or another unit may implement functions enumerated in the claims. The mere fact that some measures are recited in mutually different dependent claims does not mean that these measures cannot be combined to produce a desirable effect.


Persons skilled in the art should understand that the embodiments of this application may be provided as a method, an apparatus (device), or a computer program product. Therefore, the embodiments of this application may be hardware only embodiments, software only embodiments, or hardware and software combined embodiments. Moreover, this application may use a form of a computer program product that is implemented on one or more computer-usable storage media (including but not limited to a disk memory, a compact disc-read only memory (CD-ROM), an optical memory, and the like) that include computer usable program code. The computer program is stored/distributed in a proper medium and is provided together with other hardware or used as a part of hardware, or may also use another distribution form, such as using the Internet or another wired or wireless telecommunications system.


The present disclosure is described with reference to the flowcharts and/or block diagrams of the method, the apparatus (device), and the computer program product according to the embodiments of the present disclosure. It should be understood that computer program instructions may be used to implement each process and/or each block in the flowcharts and/or the block diagrams and a combination of a process and/or a block in the flowcharts and/or the block diagrams. These computer program instructions may be provided for a general-purpose computer, a special-purpose computer, an embedded processor, or a processor of any other programmable data processing device to generate a machine such that the instructions executed by a computer or a processor of any other programmable data processing device generate an apparatus for implementing a specific function in one or more processes in the flowcharts and/or in one or more blocks in the block diagrams.


These computer program instructions may also be stored in a computer readable memory that can instruct a computer or any other programmable data processing device to work in a specific manner such that the instructions stored in the computer readable memory generate an artifact that includes an instruction apparatus. The instruction apparatus implements a specific function in one or more processes in the flowcharts and/or in one or more blocks in the block diagrams. These computer program instructions may also be loaded onto a computer or another programmable data processing device such that a series of operations and steps are performed on the computer or the other programmable device, thereby generating computer-implemented processing. Therefore, the instructions executed on the computer or the other programmable device provide steps for implementing a specific function in one or more processes in the flowcharts and/or in one or more blocks in the block diagrams.


Although the present disclosure is described with reference to specific features and the embodiments thereof, obviously, various modifications and combinations may be made to them without departing from the spirit and scope of the present disclosure. Correspondingly, the specification and accompanying drawings are merely example descriptions of the present disclosure defined by the appended claims, and are considered to have covered any of or all modifications, variations, combinations or equivalents that cover the scope of the present disclosure. Obviously, persons skilled in the art can make various modifications and variations to the present disclosure without departing from the spirit and scope of the present disclosure. The present disclosure is intended to cover these modifications and variations provided that they fall within the scope of the claims of the present disclosure and their equivalent technologies.

Claims
  • 1. A packet sending method, implemented by a network device, wherein the packet sending method comprises: receiving a packet;identifying that a flow to which the packet belongs is delay-sensitive traffic for a reserved resource, wherein the reserved resource comprises a quantity of packets that can be sent for the flow in a time window; andarranging the packet in a time window to send based on the quantity of packets that can be sent for the flow in the time window, an accumulated quantity of packets that have been sent in the time window, and a quantity of packets already in a queue used to send the flow.
  • 2. The packet sending method of claim 1, further comprising: using queue resource reservation information and traffic resource reservation information to send the flow, wherein the queue resource reservation information and the traffic resource reservation information are preconfigured in the network device, wherein the queue resource reservation information comprises a queue for sending the flow, enqueue timing of the queue, and output timing of the queue, wherein the enqueue timing defines an ingress queue of the time window, wherein the output timing defines an open and closed state of each queue in the time window, wherein the traffic resource reservation information records a current ingress queue of the flow and a packet count that represents a quantity of packets in the current ingress queue, and wherein an accumulated packet sending status in the time window is the quantity of packets in the current ingress queue;determining an arrival time window of the packet, wherein the arrival time window is at an egress port of the network device when the packet arrives at the network device;querying for the quantity of packets in the current ingress queue;determining an ingress queue of the flow based on the arrival time window, the enqueue timing of the queue, the quantity of packets that can be sent for the flow in one time window, and the quantity of packets in the current ingress queue;adding the packet to the ingress queue of the flow;opening the queue in which the packet is located in a time window that is defined in the output timing; andsending the packet.
  • 3. The packet sending method of claim 2, further comprising: determining an ingress queue of a next time window of the arrival time window as the ingress queue of the flow when the quantity of packets in the current ingress queue has reached the quantity of packets that can be sent in the time window, ordetermining an ingress queue of the arrival time window as the ingress queue of the flow when the quantity of packets in the current ingress queue has not reached the quantity of packets that can be sent in the time window.
  • 4. The packet sending method of claim 3, wherein after determining the ingress queue of the flow, the packet sending method further comprises: updating the current ingress queue of the traffic resource reservation information based on the ingress queue of the flow;restoring the packet count to an initial value each time the network device updates the current ingress queue; andaccumulating the packet count each time the network device adds the packet to the ingress queue of the flow.
  • 5. The packet sending method of claim 2, wherein the enqueue timing comprises a time window that has an alternative ingress queue, wherein an alternative ingress queue of the time window is an ingress queue of a next time window, and wherein the packet sending method further comprises: determining an alternative ingress queue of the arrival time window as the ingress queue of the flow when the quantity of packets in the current ingress queue has reached the quantity of packets that can be sent in the time window, ordetermining an ingress queue of the arrival time window as the ingress queue of the flow when the quantity of packets in the current ingress queue has not reached the quantity of packets that can be sent in the time window.
  • 6. The packet sending method of claim 2, wherein an ingress queue of a time window of the output timing is in an open state in a next time window and is in a closed state in another time window.
  • 7. The packet sending method of claim 1, further comprising reserving the reserved resource for the flow in advance, wherein traffic resource reservation information is used to send the flow and is configured in a resource reservation process.
  • 8. The packet sending method of claim 1, further comprising: using queue resource reservation information and traffic resource reservation information to send the flow, wherein the queue resource reservation information and the traffic resource reservation information are preconfigured in the network device, wherein the queue resource reservation information comprises a queue in a one-to-one correspondence with the flow and a dequeue gating configured for the queue, wherein the dequeue gating controls a quantity of packets to send in the time window, and wherein the traffic resource reservation information comprises the quantity of packets that can be sent for the flow in one time window;adding the packet to a queue corresponding to the flow to which the packet belongs; andextracting the packet from the queue corresponding to the flow based on the dequeue gating, wherein the dequeue gating updates based on the time window, wherein an initial value of the dequeue gating in the time window is a quantity of packets that can be sent for the flow corresponding to the queue in the time window, and wherein the initial value decreases progressively based on another quantity of packets sent in the time window; andsending the packet.
  • 9. The packet sending method of claim 8, further comprising: monitoring a time window update;obtaining the quantity of packets that can be sent for the flow in the time window from the traffic resource reservation information; andupdating the dequeue gating based on the quantity of packets that can be sent for the flow in the time window each time the time window updates.
  • 10. The packet sending method of claim 9, wherein the dequeue gating is a token bucket, and wherein the packet sending method further comprises updating a quantity of tokens in the token bucket to the quantity of packets that can be sent for the flow in the time window.
  • 11. The packet sending method of claim 10, further comprising: checking, in real time, whether another packet is in the queue corresponding to the flow and whether a token is in the token bucket; andextracting and sending the other packet when the other packet in the queue corresponds to the flow and the token is in the token bucket until no token is in the token bucket or no packet is in the queue corresponding to the flow.
  • 12. The packet sending method of claim 8, further comprising reserving the reserved resource for the flow in advance, wherein the traffic resource reservation information and the queue resource reservation information are configured in a resource reservation process.
  • 13. A network device, comprising: a receiver configured to receive a packet; anda processor coupled to the receiver and configured to: identify that a flow to which the packet belongs is delay-sensitive traffic for a reserved resource, wherein the reserved resource comprises a quantity of packets that can be sent for the flow in a time window; andarrange the packet in a specific time window for sending based on the quantity of packets that can be sent for the flow in the time window and an accumulated quantity of packets that have been sent in the time window.
  • 14. The network device of claim 13, wherein the network device further comprises a first memory coupled to the processor and configured to store preconfigured queue resource reservation information and traffic resource reservation information that are used to send the flow, wherein the preconfigured queue resource reservation information comprises the queue used to send the flow, enqueue timing of the queue, and output timing of the queue, wherein the enqueue timing defines an ingress queue of the time window, wherein the output timing defines an open and closed state of each queue in the time window, wherein the traffic resource reservation information records a current ingress queue of the flow, and a packet count that represents a quantity of packets in the current ingress queue, wherein an accumulated packet sending status in the time window is the quantity of packets in the current ingress queue, and wherein the processor is further configured to: determine an arrival time window of the packet, wherein the arrival time window is at an egress port of the network device when the packet arrives at the network device;query for the quantity of packets in the current ingress queue;determine an ingress queue of the flow based on the arrival time window, the enqueue timing of the queue, the quantity of packets that can be sent for the flow in one time window, and the quantity of packets in the current ingress queue;add the packet to the ingress queue of the flow;open the queue in which the packet is located in a time window that is defined in the output timing; andsend the packet.
  • 15. The network device of claim 14, wherein the processor is further configured to determine an ingress queue of a next time window of the arrival time window as the ingress queue of the flow when the quantity of packets in the current ingress queue has reached the quantity of packets that can be sent in the time window, or determine an ingress queue of the arrival time window as the ingress queue of the flow when the quantity of packets in the current ingress queue has not reached the quantity of packets that can be sent in the time window.
  • 16. The network device of claim 15, wherein after the processor determines the ingress queue of the flow, the processor is further configured to: update the current ingress queue of the traffic resource reservation information based on the ingress queue of the flow;restore the packet count to an initial value each time the processor updates the current ingress queue; andaccumulate the packet count each time the processor adds the packet to the ingress queue of the flow.
  • 17. The network device of claim 14, wherein the enqueue timing comprises a time window that has an alternative ingress queue, wherein the alternative ingress queue of the time window is an ingress queue of a next time window, wherein the processor is further configured to: determine an alternative ingress queue of the arrival time window as the ingress queue of the flow when the quantity of packets in the current ingress queue has reached the quantity of packets that can be sent in the time window; ordetermine an ingress queue of the arrival time window as the ingress queue of the flow when the quantity of packets in the current ingress queue has not reached the quantity of packets that can be sent in the time window.
  • 18. The network device of claim 14, wherein an ingress queue of a time window of the output timing is in an open state in a next time window and is in a closed state in another time window.
  • 19. The network device of claim 13, wherein the processor is further configured to reserve a resource for the flow in advance, and wherein traffic resource reservation information is used to send the flow and is configured in a resource reservation process.
  • 20. The network device of claim 13, wherein the network device further comprises a second memory coupled to the processor and configured to store preconfigured queue resource reservation information and traffic resource reservation information that are used to send the flow, wherein the preconfigured queue resource reservation information comprises a queue in a one-to-one correspondence with the flow and a dequeue gating configured for the queue, wherein the dequeue gating controls a quantity of packets sent in each time window, wherein the traffic resource reservation information comprises the quantity of packets that can be sent for the flow in one time window, and wherein the processor is further configured to: add the packet to a queue corresponding to the flow to which the packet belongs;extract the packet from the queue corresponding to the flow based on the dequeue gating, wherein the dequeue gating updates based on the time window, wherein an initial value of the dequeue gating in the time window is a quantity of packets that can be sent for the flow corresponding to the queue in the time window, and wherein the initial value decreases progressively based on another quantity of packets sent in the time window; andsend the packet.
CROSS-REFERENCE TO RELATED APPLICATIONS

This application is a continuation of International Patent Application No. PCT/CN2017/120430 filed on Dec. 31, 2017, the disclosure of which is hereby incorporated by reference in its entirety.

Continuations (1)

          Number              Date        Country
Parent    PCT/CN2017/120430   Dec 2017    US
Child     16916580                        US