1. Field of the Invention
Generally, this disclosure relates to network devices. More specifically, it relates to systems and methods associated with queuing and scheduling architecture of network appliances that are capable of supporting wired or wireless clients while using both internal and external packet memory.
2. Description of the Related Art
The Wireless Local Area Network (WLAN) market has experienced rapid growth in recent years, primarily driven by consumer demand for home networking. The next phase of growth will likely come from the commercial segment comprising enterprises, service provider networks in public places (e.g., Hotspots, etc.), multi-tenant multi-dwelling units (MxUs) and small office home office (SOHO) networks. The worldwide market for the commercial segment is expected to grow from 5M units in 2001 to over 33M units in 2006. However, this growth can be realized only if the issues of security, quality of service (QoS) and user experience are effectively addressed in newer products.
Unlike wired networks, as illustrated in
A typical implementation of a network device includes a packet memory to store the packets, and queues to organize the packets into an ordered list. Packets arriving at the device are typically stored in the packet memory while they wait to depart on the desired interface. Packets waiting for departure are organized using queues. Packets can be associated with different queues on the same interface, for example, based on their priority or class of service, and different scheduling mechanisms can be used to decide which queue or packet to serve first.
In typical architectures, queues are either implemented as static first-in, first-out (FIFO) queues, or dynamically organized into linked lists. In the former (i.e., FIFO queues), the packet memory is statically partitioned so that specific locations are associated with specific queues. In the latter (i.e., dynamic queues), packet memory locations are associated with different queues at different times based on demand. In conventional systems today, packet memory is implemented either entirely in internal memory or entirely in external memory, and typically never in a combination of the two. Likewise, queues in conventional systems are typically implemented entirely in either internal or external memory. As used herein, internal memory refers to any type of network device traffic storage that is integral to, e.g., on-chip with, the processing logic of the network device and external memory refers to any type of network device traffic storage that is not integral to, e.g., off-chip to, the processing logic.
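The two queue organizations described above can be contrasted in a brief sketch. This is a hypothetical illustration, not the disclosed implementation; class names, queue names, and slot counts are assumptions introduced here for clarity.

```python
from collections import deque

class StaticFifoMemory:
    """Packet memory statically partitioned: each queue owns fixed slots."""
    def __init__(self, queues, slots_per_queue):
        self.queues = {q: deque(maxlen=slots_per_queue) for q in queues}

    def enqueue(self, queue, packet):
        if len(self.queues[queue]) == self.queues[queue].maxlen:
            return False              # that queue's partition is full; drop
        self.queues[queue].append(packet)
        return True

class DynamicLinkedMemory:
    """Buffers drawn on demand from one shared pool (linked-list style)."""
    def __init__(self, queues, total_slots):
        self.free = total_slots       # shared free-buffer count
        self.queues = {q: deque() for q in queues}

    def enqueue(self, queue, packet):
        if self.free == 0:
            return False              # whole shared pool exhausted
        self.free -= 1
        self.queues[queue].append(packet)
        return True
```

Note the trade-off: the static variant can drop a packet even while other partitions sit empty, while the dynamic variant lets a busy queue consume the entire shared pool.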
The use of internal memory can result in a system with higher bandwidth, but the amount of packet buffer integrated on-chip can be limited by chip size and cost. On the other hand, using external memory can provide a large amount of packet storage, but at a lower memory throughput. Typical systems are built using either external or internal memory: external memory when it needs to handle large burst traffic conditions or mismatched link speed, or internal memory when it only needs to handle more ‘normal’ traffic conditions.
In the above mentioned unified network topology, as illustrated in
Enhanced memory management schemes are presented to extend the flexibility of using either internal or external packet memory within the same network device. In the proposed schemes, the user can choose either static or dynamic schemes, both of which are capable of using both internal and external memory, depending on the deployment scenario and applications. This gives the user flexible choices when building unified wired and wireless networks that are either low-cost or feature-rich, or a combination of both.
A method for buffering packets in a network device, and a network device including processing logic capable of performing the method are presented. The method includes initializing a plurality of output queues, determining to which of the plurality of output queues a packet arriving at the network device is destined, storing the packet in one or more buffers, where the one or more buffers is selected from a packet memory group including an internal packet memory and an external packet memory, and enqueuing the one or more buffers to the destined output queue.
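The steps of the method above can be sketched as follows. This is a minimal illustrative model, not the claimed implementation; the classifier rule, pool sizes, and queue names are all assumptions made for the example.

```python
from collections import deque

class PacketBuffer:
    def __init__(self, location, data=None):
        self.location = location      # "internal" or "external"
        self.data = data

class NetworkDevice:
    def __init__(self, queue_names, internal_buffers, external_buffers):
        # Step 1: initialize a plurality of output queues.
        self.output_queues = {name: deque() for name in queue_names}
        # The "packet memory group": internal and external buffer pools.
        self.internal_pool = [PacketBuffer("internal") for _ in range(internal_buffers)]
        self.external_pool = [PacketBuffer("external") for _ in range(external_buffers)]

    def classify(self, packet):
        # Step 2: determine the destined output queue (hypothetical rule).
        return packet.get("queue", "default")

    def receive(self, packet):
        dest = self.classify(packet)
        # Step 3: select a buffer from the packet memory group.
        pool = self.internal_pool or self.external_pool
        if not pool:
            return None               # no buffer available; packet dropped
        buf = pool.pop()
        buf.data = packet
        # Step 4: enqueue the buffer to the destined output queue.
        self.output_queues[dest].append(buf)
        return buf.location
```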
Aspects and features of the present invention will become apparent to those ordinarily skilled in the art from the following detailed description of certain embodiments of the invention in conjunction with the accompanying drawings, wherein:
Embodiments will now be described in detail with reference to the drawings, which are provided as illustrative examples so as to enable those skilled in the art to practice the embodiments and are not meant to limit the scope of the disclosure. Where aspects of certain embodiments can be partially or fully implemented using known components or steps, only those portions of such known components or steps that are necessary for an understanding of the embodiments will be described, and detailed description of other portions of such known components or steps will be omitted so as not to make the disclosure overly lengthy or unclear. Further, certain embodiments are intended to encompass presently known and future equivalents to the components referred to herein by way of illustration.
In certain embodiments, packet memory can be used to implement a sophisticated, high-performance queuing architecture that can require external packet memory to operate. This requirement typically translates into additional system cost due to the memory chips and circuit board complexity. To reduce this limitation, certain embodiments introduce a novel micro-architecture for the network device that can selectively leverage both on-chip, or internal, packet memory and off-chip, or external, packet memory. While the term packet will be used throughout this disclosure to illustrate certain embodiments, it is intended that such embodiments be only exemplary, and that the teachings are equally applicable to any type of network traffic, such as datagrams, frames, and other similar data, regardless of the communications layer or layers in which the implementing network device is operating. Systems can be built according to certain embodiments without external memory to reduce cost, or with limited external memory to increase performance for a subset of the traffic (e.g., burst traffic, mismatched link speeds, etc.) that might otherwise be dropped if internal memory, when used alone, could not absorb the packets.
According to certain embodiments, there are at least four kinds of memory in the proposed implementation: packet memory, unicast pointer memory, multicast pointer memory, and queues. While the at least four kinds of memory are illustrated herein for completeness, certain embodiments can also be used within a unified device as disclosed in U.S. patent application Ser. No. 11/351,330, filed on Feb. 8, 2006 to Seshan et al. and entitled “Queuing and Scheduling Architecture for a Unified Access Device Supporting Wired and Wireless Clients,” which is fully incorporated herein by reference. Each of the at least four kinds of memory is briefly discussed below.
According to certain embodiments, two broad categories of configurations are disclosed to facilitate the use of internal and external packet memory in a network device: a static configuration and a dynamic configuration, which are not necessarily mutually exclusive of each other. Further, within each of these two broad categories, there are at least three kinds of queues: internal queues, external queues and aggregate queues. For an internal queue, all associated packet memory is internal memory, while for an external queue, all associated packet memory is external memory. However, for an aggregate queue, the associated packet memory can be both internal and external packet memory.
Generally, in a static configuration according to certain embodiments, each output queue can be pre-configured, or designated, for example during the initialization process, to be an internal queue, an external queue or an aggregate queue. Alternatively, to build a static system without external memory, all queues would be programmed to use internal memory. If external memory is available, a user can assign queues to use either internal memory, external memory or both, depending on the operational needs of the network device implementing the static configuration. For example, all of the output queues associated with wired traffic might be assigned to use internal memory, while queues handling wireless traffic could use external memory to facilitate buffering packets because of mismatched link speeds. Further, for handling multicast traffic or mirrored traffic, the aggregate queues could be used.
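The static configuration can be sketched as a per-queue designation fixed at initialization. The designation names, queue names, and the example assignment (wired to internal, wireless to external, multicast to aggregate) are illustrative assumptions following the scenario described above.

```python
from collections import deque

DESIGNATIONS = ("internal", "external", "aggregate")

class StaticQueue:
    """An output queue whose memory designation is fixed at init time."""
    def __init__(self, name, designation):
        assert designation in DESIGNATIONS
        self.name = name
        self.designation = designation
        self.packets = deque()

def init_queues(config):
    """config maps queue name -> designation, set during initialization."""
    return {name: StaticQueue(name, d) for name, d in config.items()}

def allowed_memories(queue):
    """Which packet-memory types this queue's buffers may come from."""
    if queue.designation == "aggregate":
        return {"internal", "external"}   # aggregate queues may use both
    return {queue.designation}

# Example assignment for the deployment scenario described above:
example_config = {
    "wired0": "internal",    # wired traffic: internal memory
    "wifi0": "external",     # wireless traffic: buffer in external memory
    "mcast": "aggregate",    # multicast/mirrored traffic: both
}
```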
Generally, in a dynamic configuration according to certain embodiments, all of the output queues can be configured to dynamically and selectively use and alternate between both internal and external packet memory. For these queues, if both types of packet memory are available, internal memory can be used first. If there is no internal memory available, either because it does not exist or because it is currently full, external memory can be used. In this regard, external memory can serve as an “overflow buffer” during, for example, a burst-traffic condition. During a normal-traffic condition, all packets destined to dynamic queues can use internal memory.
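The internal-first allocation policy described above can be sketched as a simple allocator. Pool sizes here are hypothetical; the point is only the preference order.

```python
class DynamicAllocator:
    """Prefer internal memory; fall back to external as an overflow buffer."""
    def __init__(self, internal_slots, external_slots):
        self.internal_free = internal_slots
        self.external_free = external_slots

    def allocate(self):
        # Use internal memory first whenever any is free.
        if self.internal_free > 0:
            self.internal_free -= 1
            return "internal"
        # Otherwise overflow to external memory (e.g., during a burst).
        if self.external_free > 0:
            self.external_free -= 1
            return "external"
        return None               # both memories exhausted; packet dropped
```

Under normal traffic the internal pool never empties, so every packet lands in internal memory; only a burst that fills the internal pool spills into external memory.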
For certain embodiments of the static configuration, three exemplary implementation schemes are presented. The use of one particular scheme over another depends on system requirements. In the first scheme, each physical port of the implementing network device can have two packet buffers, one internal 310 and one external 320, that are allocated and waiting for incoming packets. Internal packet buffer 310 is allocated from internal free queue 610 and external packet buffer 320 is allocated from external free queue 615. Those packet buffers can serve as temporary storage for incoming packets while they are processed by the ingress pipeline of the network device. During the packet reception phase, each incoming packet can be stored in both the internal and external packet buffer at the same time. The packet can be enqueued to the egress queue once the forwarding decision is made by the ingress pipeline. Alternatively, for systems with only one type of packet memory (e.g., internal or external), all queues (and multicast memory) should be initialized to use that memory type.
In the above-mentioned scheme, each packet is initially stored in both internal memory and external memory. Once the information about the outgoing queue (i.e., internal, external, etc.) is available, then depending on the queue configuration, either the internal or the external buffer can be discarded. Note that even a packet destined to an internal queue is initially stored in both internal and external memory, so for these packets the bandwidth needed to write the packet to external memory is wasted. Hence, the above scheme works best, but not exclusively, if the bandwidth to external memory is not limited as compared to the internal bandwidth.
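The dual-write scheme above can be sketched as follows: a buffer is taken from each free queue, the packet is written to both, and one copy is released once the forwarding decision resolves the queue type. The dict-based buffer representation and field names are assumptions introduced for this sketch.

```python
def receive_dual_write(packet, internal_free, external_free):
    """Write the packet to an internal and an external buffer in parallel;
    after the forwarding decision, discard (recycle) the unused copy.
    Returns (kept_buffer, discarded_buffer)."""
    internal_buf = internal_free.pop()     # from the internal free queue
    external_buf = external_free.pop()     # from the external free queue
    internal_buf["data"] = packet          # stored in both memories
    external_buf["data"] = packet          # at the same time
    # Forwarding decision: assume packet metadata names its queue type.
    if packet["queue_type"] == "internal":
        # External copy's write bandwidth was spent for nothing; recycle it.
        external_free.append(external_buf)
        return internal_buf, external_buf
    internal_free.append(internal_buf)
    return external_buf, internal_buf
```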
In the second exemplary static configuration scheme, if the information about the destination queue is available before the packet data arrives, then the first exemplary scheme can be modified to store the packet directly into an internal or external buffer. In this way, the implementing network device can function even with very limited external memory. But this scheme does not handle the scenario where a burst of packets should be stored in the external memory as efficiently as the first scheme.
To handle the above drawbacks, the following third exemplary static configuration scheme can be used. A transfer queue consisting of a small number of internal packet buffers can be maintained. Each physical port of the implementing network device can have an internal packet buffer that is allocated and waiting for incoming packets. These packet buffers can serve as temporary storage for incoming packets as they are processed by the ingress pipeline of the network device. Based on packet forwarding logic or packet classification, the ingress pipeline can determine the appropriate egress queue. In case the egress queue is configured as an internal queue (i.e., the packets for this queue should be stored in the internal memory), the packet buffer can be directly linked to the egress queue. However, if the packet is destined for an external queue, then the packet buffer is first linked to the transfer queue. The packets are then transferred from the transfer queue to external memory, and the external buffer is then linked to the egress queue.
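The transfer-queue scheme above can be sketched as a two-step flow: ingress either links the internal buffer directly to an internal egress queue, or stages it on the transfer queue for a later copy into external memory. Class, queue, and field names are illustrative assumptions.

```python
from collections import deque

class TransferQueueDevice:
    def __init__(self):
        self.transfer_queue = deque()   # small set of staged internal buffers
        self.egress = {"internal_q": deque(), "external_q": deque()}

    def ingress(self, packet, dest):
        buf = {"memory": "internal", "data": packet}  # arrival buffer is internal
        if dest == "internal_q":
            self.egress[dest].append(buf)             # link directly to egress
        else:
            self.transfer_queue.append((buf, dest))   # stage for external copy

    def drain_transfer_queue(self):
        # Background step: copy each staged packet into an external buffer,
        # then link that external buffer to the destined egress queue.
        while self.transfer_queue:
            buf, dest = self.transfer_queue.popleft()
            ext = {"memory": "external", "data": buf["data"]}
            self.egress[dest].append(ext)
```

Because arriving packets always land in an internal buffer first, a burst destined for external queues is absorbed by the transfer queue rather than being throttled by external memory bandwidth at the moment of arrival.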
For multicast and broadcast packets in each of these three exemplary static configuration schemes, the packet buffer needs to be enqueued to multiple queues. These queues can be configured as internal, external or aggregate queues. If the outgoing queue 630 is configured to use internal memory, then the internal packet buffer 310 will be enqueued to the output queue. If the outgoing queue 630 is configured to use external memory, then external packet memory 320 will be enqueued to the output queue. In this exemplary static configuration, a multicast packet may consume two or more packet buffers if the multicast includes output queues using both internal and external packet memories 310, 320.
An alternative approach for handling multicast and broadcast packets would be through the use of aggregate queues. Here, each multicast group or broadcast group can be designated as internal or external. Thus, multiple multicast or broadcast groups can be mapped to the same aggregate queue. If the group is set to internal, then internal packet buffer 310 can be used; otherwise, external packet memory 320 can be used for the group. In this way, only one copy of the multicast packet would need to be stored. As a simplification of this, all multicast and broadcast groups can be designated as internal or external.
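The aggregate-queue approach above can be sketched as a single stored copy shared by every member queue of the group. The group names, designation table, and reference-count field are assumptions introduced for this sketch.

```python
# Hypothetical per-group designation, set to internal or external.
GROUP_DESIGNATION = {"video_group": "internal", "backup_group": "external"}

def enqueue_multicast(packet, group, member_queues):
    """Store one buffer in the group's designated memory; every member
    queue of the aggregate references that same single copy."""
    memory = GROUP_DESIGNATION[group]
    buf = {"memory": memory, "data": packet, "refcount": len(member_queues)}
    for q in member_queues:
        q.append(buf)             # all queues share the single stored copy
    return buf
```

A reference count (or similar mechanism) would let the buffer be freed only after the last member queue has transmitted it.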
Packets which are mirrored or copied can also be enqueued to multiple queues. The queue designated for forwarding the packet is referred to as a forwarding queue. If these queues are configured as either internal or external, then, as mentioned above, either the internal or external packet buffer 310, 320 is used. An alternate mechanism would be to assign aggregate queues for mirrored or copied packets. Here, the packet can be enqueued using the internal or external buffer based on the forwarding queue configuration.
For dynamic configuration, similar to the static configuration discussed above, multicast or broadcast packets should be enqueued on multiple queues. It is possible that one of the queues can be designated dynamic, while other queues are designated internal. In such a situation, the multicast or broadcast packet can, for example, preferably be stored in the internal memory. However, in cases where all queues are dynamic, the packet can be stored using either an internal or external packet buffer based on system design. For example, a particular implementation could store these packets in external memory if the number of packets for any of the output queues is beyond some configured value. The decision to choose an internal or external buffer could also be based on some predetermined configuration. However, if in a system it is possible that multiple dynamic queues would be full at the same time and the external memory bandwidth would not be sufficient to handle the burst traffic, then the transfer queue mechanism described above for the static configuration can be used. These same implementations can be followed in the case of mirrored packets and/or copied packets.
In certain embodiments, both static and dynamic configurations can be applied to a cell-based packet memory architecture. In a cell-based packet memory architecture, a packet is stored in one or multiple memory cells. A cell can be physically located in either internal or external memory, and a packet can be stored across multiple cells of both memory types. In a static configuration, each port of the network appliance should be allocated multiple cells, large enough to hold a single packet, from either or both internal and external packet memory. A packet can be stored in either internal cells or external cells, depending on the outgoing queue. In a dynamic configuration, if there are not enough free internal memory cells to store an entire packet, external memory can be used so that a single packet need not be stored in a mix of internal and external cells. However, mixed storage can be accomplished using the cell-based packet memory architecture according to certain embodiments.
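The dynamic cell-based rule above can be sketched as a small placement function: compute how many cells the packet needs, and place all of them in internal memory if possible, otherwise all of them in external memory. The cell size and pool sizes are illustrative assumptions.

```python
import math

CELL_SIZE = 64  # bytes per cell (illustrative value, not from the source)

def store_packet(length, internal_free_cells, external_free_cells):
    """Place a packet of `length` bytes entirely in one memory type.
    Returns ("internal" | "external", cells_used), or None if neither
    pool has enough free cells for the whole packet."""
    cells_needed = math.ceil(length / CELL_SIZE)
    if internal_free_cells >= cells_needed:
        return "internal", cells_needed      # preferred: all-internal cells
    if external_free_cells >= cells_needed:
        return "external", cells_needed      # overflow: all-external cells
    return None                              # no single pool can fit it
```

A mixed-storage variant would instead draw cells from both pools for one packet, at the cost of tracking per-cell locations in the packet's cell list.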
Although the present invention has been particularly described with reference to embodiments thereof, it should be readily apparent to those of ordinary skill in the art that various changes, modifications, substitutes and deletions are intended within the form and details thereof, without departing from the spirit and scope of the invention. Accordingly, it will be appreciated that in numerous instances some features of the invention will be employed without a corresponding use of other features. Further, those skilled in the art will understand that variations can be made in the number and arrangement of inventive elements illustrated and described in the above figures. It is intended that the scope of the appended claims include such changes and modifications.
This application claims the benefit of priority from U.S. Provisional Patent Application Ser. No. 60/703,114, filed Jul. 27, 2005 and entitled “Queuing and Scheduling Architecture Using Both Internal and External Packet Memory for Network Switching Devices,” which is fully incorporated herein by reference for all purposes.
Number | Date | Country
---|---|---
60/703,114 | Jul. 27, 2005 | US