This disclosure generally relates to systems and methods for data packet aggregation in packet buffers. In particular, this disclosure relates to systems and methods for aggregating multiple received packets in a single packet buffer to improve effective system bus and system memory throughput.
Network packet size distributions are typically random, making system bus and memory subsystem bandwidth calculations difficult in devices that include a networking subsystem. For long streams of minimum-sized packets, a host CPU may be overwhelmed with interrupts, requiring interrupt moderation techniques to be deployed. For example, when back-to-back 65 byte (B) packets are received via the network subsystem and the system bus supports a 64B transaction size, each 65B transaction is “broken” into two (a 64B transaction and a 1B transaction). The 1B transaction is arbitrated separately, thus incurring the same arbitration latency as the 64B transaction. Moreover, the network subsystem uses only a fraction of the system bus bandwidth, resulting in reduced effective system bus throughput, reduced effective memory throughput, inefficient usage of memory space, and a significant number of system interrupts.
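As a minimal sketch of the splitting described above, the following C fragment computes how a packet divides into bus transactions; the function name and output format are illustrative only and are not part of any particular implementation.

```c
#include <stdio.h>

/* Illustrative only: split a packet of pkt_len bytes into bus transactions
 * of at most bus_size bytes, as in the 65B-packet / 64B-bus example above. */
static void count_transactions(unsigned pkt_len, unsigned bus_size)
{
    unsigned full    = pkt_len / bus_size;   /* fully-utilized transactions */
    unsigned partial = pkt_len % bus_size;   /* leftover bytes, if any      */

    printf("%uB packet on a %uB bus: %u full transaction(s)%s\n",
           pkt_len, bus_size, full,
           partial ? " + 1 partial transaction" : "");
}

int main(void)
{
    count_transactions(65, 64);  /* 1 full transaction + 1 partial (1B) transaction */
    return 0;
}
```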
Various objects, aspects, features, and advantages of the disclosure will become more apparent and better understood by referring to the detailed description taken in conjunction with the accompanying drawings, in which like reference characters identify corresponding elements throughout. In the drawings, like reference numbers generally indicate identical, functionally similar, and/or structurally similar elements.
The details of various embodiments of the methods and systems are set forth in the accompanying drawings and the description below.
For purposes of reading the description of the various embodiments below, the following descriptions of the sections of the specification and their respective contents may be helpful:
Multiple packets may be aggregated within a single packet buffer, and multiple socket buffers (SKBs) may be generated, each pointing to a different starting location within the single buffer. Packets may be read from the packet buffer in transactions of a size that fully utilizes the system bus (e.g., 64B), with only the last transaction being of a non-fully-utilized size (e.g., 1B). Such packet aggregation can improve system bus bandwidth utilization so that a majority of transactions fully utilize the available system bus bandwidth, and thus can reduce the overall system bus latency. Memory write bandwidth utilization can also be improved so that a majority of transactions have the size of a full memory (e.g., DRAM) burst length. Moreover, by aggregating smaller packets in a packet buffer, the operating system can allocate fewer buffers than without aggregation, thereby more efficiently using the available system DRAM space. Furthermore, such packet aggregation can allow the system interrupt to be generated based on the number of consumed packet buffers instead of the number of received packets, thereby greatly reducing the total number of RX interrupts. In some implementations, the receive-side packet aggregation uses standard Operating System (OS) methods and functions for allocating/deallocating memory and handling packets; accordingly, in such implementations, there is no need to modify the OS kernel.
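A minimal sketch of this aggregation concept follows, assuming a hypothetical 2 KB buffer, a simplified stand-in for an SKB, and an interleaved layout in which each packet is preceded by a 4B status record and a 2B alignment pad (the 71B-per-packet example discussed later); the actual structures and offsets are implementation-specific.

```c
#include <stddef.h>
#include <stdint.h>
#include <stdlib.h>

#define PKT_BUF_SIZE 2048  /* predetermined amount of memory, e.g., 2 KB */

/* Hypothetical stand-in for a socket buffer (SKB): it does not own the
 * memory it points to; it only records where its packet starts and how
 * long the packet is within the shared packet buffer. */
struct skb_stub {
    uint8_t *data;   /* start of this packet inside the shared buffer */
    size_t   len;    /* packet length                                  */
};

/* Conceptual layout of one shared packet buffer:
 *
 *   +-------+-----+----------+-------+-----+----------+-----+--------------+
 *   | RSB 0 | pad | packet 0 | RSB 1 | pad | packet 1 | ... | unused space |
 *   +-------+-----+----------+-------+-----+----------+-----+--------------+
 *            SKB 0 points here        SKB 1 points here
 */
int main(void)
{
    uint8_t *packet_buffer = malloc(PKT_BUF_SIZE);
    if (!packet_buffer)
        return 1;

    /* Two illustrative SKB stubs referencing different offsets of the same
     * buffer: 4B RSB + 2B pad before each 65B packet, i.e., a 71B stride. */
    struct skb_stub skb0 = { packet_buffer + 6,      65 };
    struct skb_stub skb1 = { packet_buffer + 71 + 6, 65 };

    (void)skb0; (void)skb1;
    free(packet_buffer);
    return 0;
}
```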
The network interface 120 may store, in the packet buffer 180, multiple packets 184 (e.g., packet “0”, packet “1”, . . . , packet “N−1”, where N denotes the number of packets stored in the packet buffer 180), a total size of which is smaller than the predetermined amount of memory (in other implementations in which packets are counted starting at 1, the packets 184 may be identified as packet “1” through packet “N”).
The network interface 120 may also generate a status record, called a receive status block (RSB) 182, for each received packet 184 stored in the packet buffer 180, and store the generated RSBs in the packet buffer 180. Each RSB 182 for a corresponding received packet may include a predetermined bit (e.g., a “more” or “M” bit) which is set to a first value, e.g., “1”, to indicate the presence of a packet stored at a location in the packet buffer 180 following that packet (indicating “more” packets exist in the buffer), or set to a second value, e.g., “0”, to indicate the absence of such next packet. For example, as shown in the example of
As a new packet arrives, the network interface 120 may determine where to store a new RSB corresponding to the packet by counting bytes to identify the boundary of the data already stored in the packet buffer. Iteratively for each received packet, the network interface 120 may add the packet length identified in each RSB stored in the packet buffer 180 until reaching an RSB having the “M” bit set to the second value, e.g., “0”, and store the received packet in the packet buffer 180 at a start location equal to the added packet lengths. The network interface 120 may set the “M” bit of the previously last RSB to the first value, e.g., “1”, store a new RSB corresponding to the received packet in the packet buffer 180, and set the “M” bit of the new RSB to “0”. In this way, the network interface 120 may store multiple packets 184 in the packet buffer 180, such that the total size of the packets and their corresponding RSBs in the packet buffer 180 is smaller than the predetermined amount of memory, e.g., 2 KB, and the size of the unused space 186 in the packet buffer 180 is the difference between the predetermined amount of memory and that total size.
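A hedged sketch of the placement walk just described follows. It assumes a 4-byte RSB containing only the two fields discussed here (the “M” bit and a packet length) and assumes each RSB immediately precedes its packet in the buffer; neither assumption is prescribed by the disclosure.

```c
#include <stddef.h>
#include <stdint.h>

/* Assumed receive status block layout; only the fields described in the
 * text (the "M"/more bit and the packet length) are modeled here. */
struct rsb {
    uint16_t len;       /* length of the corresponding packet             */
    uint8_t  more;      /* "M" bit: 1 = another packet follows, 0 = last  */
    uint8_t  reserved;
};

#define RSB_SIZE sizeof(struct rsb)   /* 4 bytes in this sketch */

/* Walk the RSBs already stored in the packet buffer, adding up lengths
 * until the RSB with M == 0 is reached. Returns the byte offset at which
 * the next RSB and packet should be placed; *last receives a pointer to
 * the previously last RSB so the caller can set its M bit to 1. */
static size_t next_packet_offset(uint8_t *buf, size_t n_rsbs, struct rsb **last)
{
    size_t offset = 0;
    struct rsb *r = NULL;

    for (size_t i = 0; i < n_rsbs; i++) {
        r = (struct rsb *)(buf + offset);
        offset += RSB_SIZE + r->len;   /* skip this RSB and its packet   */
        if (!r->more)
            break;                     /* reached the last stored packet */
    }

    *last = r;       /* NULL if the buffer was empty */
    return offset;   /* start location for the new RSB + packet */
}
```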
Referring to
With reference to
The socket buffers 160 may include multiple socket buffers (e.g., SKB “0”, SKB “1”, . . . , SKB “N−1”), each of which has location information (e.g., memory address) 162 of a corresponding packet 184 stored in the packet buffer 180. For example, referring to
With the configuration of
Referring to
According to the third embodiment, the receive side packet aggregation system 100 (see
The network driver 15 (see
Referring to
The network driver 15 may increment a value of the reference counter 185 for each packet stored within the contiguous region of memory. The network driver 15 may generate corresponding multiple SKBs, each of which can identify a packet of the multiple packets and a storage location of that packet within the contiguous region of memory. The network driver 15 may process a packet of the multiple packets stored within the contiguous region of memory. The networking protocol stack may decrement the value of the reference counter 185, responsive to processing the packet. The networking protocol stack may determine that the value of the reference counter 185 is equal to a predetermined value (e.g., “0”) and may request the operating system to de-allocate the contiguous region of memory, responsive to the determination.
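A minimal sketch of the reference-counting lifecycle described above, with hypothetical helper names (`pkt_buf_get` / `pkt_buf_put`); a real driver would use the operating system's own allocation and SKB facilities rather than malloc/free, and would make the count atomic if the driver and protocol stack can run concurrently.

```c
#include <stdint.h>
#include <stdlib.h>

/* Hypothetical shared packet buffer with a reference counter: one count
 * per packet (and thus per SKB) stored in the contiguous region. */
struct pkt_buf {
    uint8_t *mem;        /* contiguous region of memory (e.g., 2 KB) */
    unsigned refcount;   /* number of packets not yet processed      */
};

/* Called by the network driver for each packet aggregated in the buffer. */
static void pkt_buf_get(struct pkt_buf *pb)
{
    pb->refcount++;
}

/* Called, conceptually by the networking protocol stack, after a packet
 * has been processed and its SKB released; releases the region once
 * every packet has been consumed. */
static void pkt_buf_put(struct pkt_buf *pb)
{
    if (--pb->refcount == 0) {
        free(pb->mem);   /* or: hand the buffer back for reuse */
        free(pb);
    }
}
```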
At step S1000, the network driver 15 running on the processor 12 may allocate packet buffers 180. Before the initial allocation of a packet buffer occurs, the consumer index 146 and the producer index 148 may be set to the same location of an RX descriptor of the descriptor ring 140. The network driver 15 of the device 10 may request, from the operating system executed by the processor 12 of the device 10, allocation of a predetermined amount of memory of the device 10 for creating a single packet buffer 180. For example, the predetermined amount of memory for a single packet buffer 180 may be 2 KB; however, the predetermined amount of memory according to the present disclosure is not limited thereto. Alternatively, the network driver 15 may request allocation of memory for creating multiple packet buffers 180 based on the predetermined amount of memory. The network interface 120 may receive, from the operating system, an identification (e.g., location information) of the allocated memory. In many implementations, the operating system may be unaware of, or agnostic to, the receive side packet aggregation, and may utilize standard procedures for allocating a single packet buffer. In such implementations, the operating system need not be modified, as the network interface and network driver perform all steps necessary for aggregation.
At step S2000, the receive side packet aggregation system 100 may receive packets and store them in a single packet buffer or multiple packet buffers of the allocated memory. The network interface 120 may store a first received packet (e.g., packet “0” in
The first RSB may include a predetermined bit (e.g., “M” bit indicating “more” bit, see
Packets may be written to the packet buffer 180 using direct memory access 14, in some implementations, or via the system bus 11. The network driver of the operating system may then read the packets directly from memory. For a reduced number of transactions, a portion of the first packet and a portion of the second packet may be provided to the system bus 11, such that the total size of the portions is equal to a transaction size (e.g., 64B) corresponding to a predetermined maximum throughput of the system bus 11. With this packet provision scheme, if there is close to 2 KB of data available in each packet buffer, a majority of transactions over the system bus 11 to write into the memory can fully utilize the system bandwidth. For example, when 65B back-to-back packets are received and the system bus has a 64B transaction size for a maximum throughput, 28 packets will be stored within a 2 KB packet buffer, resulting in 31 fully-utilized transactions (i.e., 31 transactions of 64B size) and 1 non-fully-utilized transaction (i.e., one transaction of 32B size), thereby significantly reducing system inefficiencies. Specifically, in some embodiments, memory write bandwidth utilization can be improved so that a majority of transactions have the size of a full memory (e.g., DRAM) burst length. For example, in the case of 65B packets, a 32-bit (or 4B) wide DRAM interface, and a DRAM burst length of 8, without packet aggregation there will be two fully-utilized transactions (4B*8=32B) and one non-fully-utilized (1B) transaction for each packet, leading to 56 (28*2) fully-utilized transactions and 28 non-fully-utilized transactions for 28 packets. On the other hand, when the packet aggregation system is deployed, there will be 56 (28*2) fully-utilized transactions and only one non-fully-utilized transaction using a 2 KB packet buffer, in implementations in which one packet is 71B long (65B packet+4B RSB+2B prepended to the packet to align IP protocol headers to a 32-bit boundary=71B) and the 2 KB packet buffer is filled up with 28 packets (integer part of 2048B/71B=28 packets). By aggregating smaller packets in a packet buffer, the operating system can allocate fewer buffers than without aggregation, thereby more efficiently using the available system DRAM space. Furthermore, system design constraints of a DRAM controller can be relaxed because the DRAM controller does not need to grant memory access for every single packet.
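The per-buffer packet count used in these examples can be reproduced with simple arithmetic; the sketch below assumes the same 65B packet, 4B RSB, and 2B alignment pad described above.

```c
#include <stdio.h>

int main(void)
{
    const unsigned buf_size = 2048;   /* 2 KB packet buffer              */
    const unsigned pkt_size = 65;     /* received packet                 */
    const unsigned rsb_size = 4;      /* receive status block            */
    const unsigned pad_size = 2;      /* align IP headers to 32 bits     */

    const unsigned per_pkt      = pkt_size + rsb_size + pad_size;  /* 71B */
    const unsigned pkts_per_buf = buf_size / per_pkt;              /* 28  */

    printf("aggregate element: %uB, packets per 2 KB buffer: %u\n",
           per_pkt, pkts_per_buf);
    return 0;
}
```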
In some implementations, to accomplish packet aggregation, the network interface may first create a packet aggregate structure in its local memory, corresponding to the packet buffer 180, for aggregating received packets. The network interface may then move the data from its local memory into the system memory 13 (e.g., via DMA), using optimally sized system bus transactions (e.g., 64B) as discussed above.
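One way such a local-to-system-memory move could look is sketched below, with a hypothetical `bus_write()` primitive (declaration only) standing in for a DMA descriptor or bus transaction; the only point illustrated is that the copy proceeds in bus-sized (e.g., 64B) chunks with at most one smaller trailing transaction per buffer.

```c
#include <stddef.h>
#include <stdint.h>

#define BUS_TXN_SIZE 64   /* transaction size that fully utilizes the bus */

/* Hypothetical primitive: issue one system bus / DMA transaction of
 * `len` bytes from local (network interface) memory to system memory. */
void bus_write(uint64_t sys_addr, const uint8_t *local, size_t len);

/* Move an aggregated packet buffer into system memory using full-sized
 * transactions, with a single partial transaction for any remainder. */
static void flush_aggregate(uint64_t sys_addr, const uint8_t *local, size_t total)
{
    size_t off = 0;

    while (total - off >= BUS_TXN_SIZE) {
        bus_write(sys_addr + off, local + off, BUS_TXN_SIZE);
        off += BUS_TXN_SIZE;
    }
    if (off < total)                                  /* trailing partial chunk */
        bus_write(sys_addr + off, local + off, total - off);
}
```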
At step S3000, when each of one or more packet buffers is filled with one or more packets, the producer index 148 may be incremented to have the memory address of an RX descriptor which has the memory address 142 of the corresponding packet buffer 180. Alternatively, a push timer may handle stalled packets in the case where less than 2 KB is stored and a predetermined period of RX network inactivity has elapsed. That is, the producer index 148 may be incremented after a lapse of the predetermined period of RX network inactivity.
At step S4000, an interrupt to be serviced by the network driver 15 running on the processor 12 may be generated to initiate the processing of packet buffers located by RX descriptors corresponding to a location range defined by the consumer index 146 and the producer index 148 (e.g., RX descriptors (J+1) to (K−2) in
At step S5000, the network driver 15 may start processing of the packet buffers located by the RX descriptors in the location range defined by the two indexes.
At step S6000, the network driver 15 may read the RX descriptors in the location range and process packets from a packet buffer located by the RX descriptor pointed to by the consumer index 146. The process of step S6000 will be described in detail below with reference to
At step S7000, after processing all packets from the packet buffer corresponding to the previous RX descriptor, the consumer index 146 (see
At S6100, the network driver 15 may read a first RSB using the RX descriptor pointed to by the consumer index 146 or read a next RSB using the previously read RSB. The first RSB may be located using the memory address (e.g., 142 or 144 in
At S6200, a socket buffer 160 may be created corresponding to the RSB read at step S6100 to have the memory address of the packet corresponding to the RSB. A value of a counter (e.g., the reference counter 185 in
At step S6300, the network driver 15 may pass or enqueue the SKB created at step S6200 into a networking protocol stack queue. The enqueued SKBs and their corresponding packets will be processed later and asynchronously by the networking protocol stack in the OS kernel (see step S6400 below with reference to
At step S6600, if the previously read RSB has the “M” bit set to the second value, e.g., “0”, the control may be transferred to step S7000 (see
At step S6400, the networking protocol stack may asynchronously process packets using the corresponding SKBs.
At step S6520, after a packet is processed by the networking protocol stack, the SKB corresponding to the packet may be de-allocated asynchronously by the networking protocol stack. At step S6540, the reference counter 185 (see
At step S6560, it may be determined whether the reference counter 185 of the packet buffer 180 equals zero. At step S6580, when it is determined that the reference counter 185 of the packet buffer 180 equals zero, the packet buffer 180 may be de-allocated. The network protocol stack may provide to the operating system an identification that the allocated memory for the packet buffer 180 may be de-allocated. Alternatively, the packet buffer may be reused without de-allocation for storing and processing another set of packets.
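Putting steps S6100 through S6580 together, a hedged sketch of the driver-side consumption loop follows. The structures are minimal stand-ins repeated from the earlier sketches so that this fragment is self-contained, `enqueue_to_stack()` is a hypothetical stand-in for handing an SKB to the networking protocol stack queue, and any alignment padding between an RSB and its packet is omitted for brevity.

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Minimal types; field layouts are assumptions, not the actual formats. */
struct rsb      { uint16_t len; uint8_t more; uint8_t reserved; };
struct skb_stub { uint8_t *data; size_t len; };
struct pkt_buf  { uint8_t *mem; unsigned refcount; };

/* Hypothetical hand-off to the networking protocol stack (step S6300);
 * processing happens later, asynchronously. */
void enqueue_to_stack(struct skb_stub skb);

/* Steps S6100-S6600: walk the RSBs in one packet buffer, create an SKB
 * per packet, count each packet in the reference counter, and stop once
 * the RSB with M == 0 has been handled. */
static void process_packet_buffer(struct pkt_buf *pb)
{
    size_t offset = 0;
    bool more = true;

    while (more) {
        struct rsb *r = (struct rsb *)(pb->mem + offset);     /* S6100 */
        struct skb_stub skb = {
            .data = pb->mem + offset + sizeof(*r),  /* packet follows its RSB */
            .len  = r->len,
        };

        pb->refcount++;           /* S6200: one reference per packet */
        enqueue_to_stack(skb);    /* S6300: hand off for async work  */

        more    = r->more;        /* S6600: M == 0 means last packet */
        offset += sizeof(*r) + r->len;
    }
}
/* Steps S6520-S6580 run later and asynchronously: as each packet is
 * processed, the stack decrements pb->refcount and, when it reaches
 * zero, de-allocates (or reuses) the packet buffer. */
```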
In some embodiments, system bus bandwidth utilization can be improved so that a majority of transactions fully utilize the available system bus bandwidth, and the overall system bus latency can be reduced. In the example of 65B packets and 64B system bus transactions, when using 2 KB packet buffers, the total number of transactions can be 32, where 31 64B transactions fully use the 64B transaction size, and only one transaction, e.g., the 32nd transaction, does not fully use the 64B transaction size.
In some embodiments, memory write bandwidth utilization can be improved so that a majority of transactions have the size of a full memory (e.g., DRAM) burst length. For example, in the case of 65B packets, a 32-bit (or 4B) wide DRAM interface, and a DRAM burst length of 8, without packet aggregation there will be two fully-utilized transactions (4B*8=32B) and one non-fully-utilized (1B) transaction for each packet, leading to 56 (28*2) fully-utilized transactions and 28 non-fully-utilized transactions for 28 packets. On the other hand, when the packet aggregation system is deployed, there will be 56 (28*2) fully-utilized transactions and only one non-fully-utilized transaction using a 2 KB packet buffer, in implementations in which one packet is 71B long (65B packet+4B RSB+2B prepended to the packet to align IP protocol headers to a 32-bit boundary=71B) and the 2 KB packet buffer is filled up with 28 packets (integer part of 2048B/71B=28 packets). Moreover, by aggregating smaller packets in a packet buffer, the operating system can allocate fewer buffers than without aggregation, thereby more efficiently using the available system DRAM space. Furthermore, system design constraints of a DRAM controller can be relaxed because the DRAM controller does not need to grant memory access for every single packet.
In some embodiments, packets can be aggregated in 2 KB collections of memory (e.g., packet buffers), and the system interrupt can be generated based on the number of consumed packet buffers instead of the number of received packets, thereby greatly reducing the total number of RX interrupts. For example, 65B back-to-back packets may result in 1.47M interrupts per second on a 1 Gb/s link if an interrupt is generated for every packet, while the packet aggregation system can reduce the total number of interrupts down to approximately 52,500 (=1.47M/28) interrupts per second because an interrupt is generated for every 28 packets stored in each packet buffer. Furthermore, by using a simple reference counter, interrupt moderation can further reduce the number of interrupts.
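A rough back-of-the-envelope check of the interrupt-rate figures above is sketched below, assuming about 20B of per-packet Ethernet wire overhead (preamble plus inter-frame gap) in addition to the 65B packet; under that assumption the calculation reproduces the roughly 1.47M packets per second on a 1 Gb/s link and the roughly 52,500 interrupts per second with 28 packets per buffer.

```c
#include <stdio.h>

int main(void)
{
    const double link_bps      = 1e9;   /* 1 Gb/s link                        */
    const double pkt_bytes     = 65.0;  /* packet size from the example       */
    const double wire_overhead = 20.0;  /* assumed preamble + inter-frame gap */
    const double pkts_per_buf  = 28.0;  /* packets aggregated per 2 KB buffer */

    double pkts_per_sec   = link_bps / ((pkt_bytes + wire_overhead) * 8.0);
    double irq_per_packet = pkts_per_sec;                 /* one interrupt per packet */
    double irq_per_buffer = pkts_per_sec / pkts_per_buf;  /* one interrupt per buffer */

    printf("packets/s: %.0f, interrupts/s without aggregation: %.0f, with: %.0f\n",
           pkts_per_sec, irq_per_packet, irq_per_buffer);
    return 0;
}
```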
In one aspect, the present disclosure describes a method for enhanced resource utilization. The method includes requesting, by a network driver of a device from an operating system executed by a processor of the device, allocation of a predetermined amount of memory of the device; receiving, by the network driver from the operating system, an identification of the allocated memory; storing, by a network interface, a first received packet in the allocated memory, the first received packet smaller than the predetermined amount of memory; and storing, by the network interface a second received packet in the allocated memory, the second received packet smaller than a difference between the predetermined amount of memory and a size of the first received packet.
In some implementations, the method further includes generating, by the network interface, a first status record for the first received packet and a second status record for the second received packet; and storing, by the network interface, the generated first and second status records in the allocated memory. In some implementations, the first status record further includes a predetermined bit set to a first value to indicate the presence of the second packet in the allocated memory. In other implementations, the first status record further includes a length of the first received packet and the second status record further includes a length of the second received packet.
In some implementations, the method further includes determining, by the network interface, to store a third received packet in the allocated memory; adding, by the network interface, each packet length identified in a status record stored in the allocated memory until reaching a status record having the predetermined bit set to a second value to indicate an absence of any further packets in the allocated memory; and storing the third received packet in the allocated memory at a start location equal to the added packet lengths.
In some implementations, the method further includes incrementing, by the network driver, a value of a reference counter.
In some implementations, the method further includes reading the first packet from the allocated memory; and decrementing the value of the counter, responsive to processing of the first packet.
In some implementations, the method includes determining that the value of the counter indicates that all packets stored in the allocated memory have been read; and providing to the operating system an identification that the allocated memory may be de-allocated.
In some implementations, a system bus of the device has a predetermined maximum throughput. In such implementations, the method further includes providing to the system bus, by the network interface, a portion of the first received packet and a portion of the second packet, the total size of the portions equal to a maximum transaction size corresponding to the predetermined maximum throughput.
In another aspect, the present disclosure describes a system for enhanced resource utilization. The system includes a network interface with access to memory of a device, in communication with an operating system of the device. The network interface is configured for receiving, by a network driver of the device and from the operating system, an identification of a predetermined amount of the memory for a packet buffer, storing a plurality of packets in the allocated memory, a total size of the plurality of packets smaller than or equal to the predetermined amount of memory, generating a status record for each received packet stored in the allocated memory, and storing the generated status records in the allocated memory.
In some implementations, each status record for each received packet comprises a predetermined bit set to a first value to indicate the presence of another packet stored at a location in the allocated memory following said received packet, or set to a second value to indicate the absence of said another packet.
In some implementations, each status record for each received packet further includes an identification of a length of said received packet.
In some implementations, iteratively for each received packet, the network interface is further configured to add the packet length identified in each status record stored in the allocated memory until reaching a status record having the predetermined bit set to the second value, and store said received packet in the allocated memory at a start location equal to the added packet lengths. In some implementations, the system further includes a counter, and wherein the network driver is further configured to increment a value of the counter.
The following IEEE standard(s), including any draft versions of such standard(s), are hereby incorporated herein by reference in their entirety and are made part of the present disclosure for all purposes: IEEE P802.3™; IEEE P802.11n™; and IEEE P802.11ac™. Although this disclosure may reference aspects of these standard(s), the disclosure is in no way limited by these standard(s).
Having discussed specific embodiments of the present solution, it may be helpful to describe aspects of the operating environment as well as associated system components (e.g., hardware elements) in connection with the methods and systems described herein. Referring to
The access points (APs) 806 may be operably coupled to the network hardware 892 via local area network connections. The network hardware 892, which may include a router, gateway, switch, bridge, modem, system controller, appliance, etc., may provide a local area network connection for the communication system. Each of the access points 806 may have an associated antenna or an antenna array to communicate with the wireless communication devices 802 in its area via wireless local area network connections. The wireless communication devices 802 may register with a particular access point 806 to receive services from the communication system (e.g., via a SU-MIMO or MU-MIMO configuration). For direct connections (e.g., point-to-point communications), some wireless communication devices 802 may communicate directly via an allocated channel and communications protocol. Some of the wireless communication devices 802 may be mobile or relatively static with respect to the access point 806.
In some embodiments an access point 806 includes a device or module (including a combination of hardware and software) that allows wireless communication devices 802 to connect to a wired network using Wi-Fi, or other standards. An access point 806 may sometimes be referred to as a wireless access point (WAP). An access point 806 may be configured, designed and/or built for operating in a wireless local area network (WLAN). An access point 806 may connect to a router (e.g., via a wired network) as a standalone device in some embodiments. In other embodiments, an access point can be a component of a router. An access point 806 can provide multiple devices 802 access to a network. An access point 806 may, for example, connect to a wired Ethernet connection and provide wireless connections using radio frequency links for other devices 802 to utilize that wired connection. An access point 806 may be built and/or configured to support a standard for sending and receiving data using one or more radio frequencies. Those standards, and the frequencies they use, may be defined by the IEEE (e.g., IEEE 802.11 standards). An access point may be configured and/or used to support public Internet hotspots, and/or on an internal network to extend the network's Wi-Fi signal range.
In some embodiments, the access points 806 may be used for (e.g., in-home or in-building) wireless networks (e.g., IEEE 802.11, Bluetooth, ZigBee, any other type of radio frequency based network protocol and/or variations thereof). Each of the wireless communication devices 802 may include a built-in radio and/or be coupled to a radio. Such wireless communication devices 802 and/or access points 806 may operate in accordance with the various aspects of the disclosure as presented herein to enhance performance, reduce costs and/or size, and/or enhance broadband applications. Each wireless communication device 802 may have the capacity to function as a client node seeking access to resources (e.g., data, and connection to networked nodes such as servers) via one or more access points 806.
The network connections may include any type and/or form of network and may include any of the following: a point-to-point network, a broadcast network, a telecommunications network, a data communication network, a computer network. The topology of the network may be a bus, star, or ring network topology. The network may be of any such network topology as known to those ordinarily skilled in the art capable of supporting the operations described herein. In some embodiments, different types of data may be transmitted via different protocols. In other embodiments, the same types of data may be transmitted via different protocols.
The communications device(s) 802 and access point(s) 806 may be deployed as and/or executed on any type and form of computing device, such as a computer, network device or appliance capable of communicating on any type and form of network and performing the operations described herein.
The central processing unit 821 is any logic circuitry that responds to and processes instructions fetched from the main memory unit 822. In many embodiments, the central processing unit 821 is provided by a microprocessor unit, such as: those manufactured by Intel Corporation of Mountain View, Calif.; those manufactured by International Business Machines of White Plains, N.Y.; or those manufactured by Advanced Micro Devices of Sunnyvale, Calif. The computing device 800 may be based on any of these processors, or any other processor capable of operating as described herein.
Main memory unit 822 may be one or more memory chips capable of storing data and allowing any storage location to be directly accessed by the microprocessor 821, such as any type or variant of Static random access memory (SRAM), Dynamic random access memory (DRAM), Ferroelectric RAM (FRAM), NAND Flash, NOR Flash and Solid State Drives (SSD). The main memory 822 may be based on any of the above described memory chips, or any other available memory chips capable of operating as described herein. In the embodiment shown in
A wide variety of I/O devices 830a-830n may be present in the computing device 800. Input devices include keyboards, mice, trackpads, trackballs, microphones, dials, touch pads, touch screen, and drawing tablets. Output devices include video displays, speakers, inkjet printers, laser printers, projectors and dye-sublimation printers. The I/O devices may be controlled by an I/O controller 823 as shown in
Referring again to
Furthermore, the computing device 800 may include a network interface 818 to interface to the network 804 through a variety of connections including, but not limited to, standard telephone lines, LAN or WAN links (e.g., 802.11, T1, T3, 56 kb, X.25, SNA, DECNET), broadband connections (e.g., ISDN, Frame Relay, ATM, Gigabit Ethernet, Ethernet-over-SONET), wireless connections, or some combination of any or all of the above. Connections can be established using a variety of communication protocols (e.g., TCP/IP, IPX, SPX, NetBIOS, Ethernet, ARCNET, SONET, SDH, Fiber Distributed Data Interface (FDDI), RS232, IEEE 802.11, IEEE 802.11a, IEEE 802.11b, IEEE 802.11g, IEEE 802.11n, IEEE 802.11ac, IEEE 802.11ad, CDMA, GSM, WiMax and direct asynchronous connections). In one embodiment, the computing device 800 communicates with other computing devices 800′ via any type and/or form of gateway or tunneling protocol such as Secure Socket Layer (SSL) or Transport Layer Security (TLS). The network interface 818 may include a built-in network adapter, network interface card, PCMCIA network card, card bus network adapter, wireless network adapter, USB network adapter, modem or any other device suitable for interfacing the computing device 800 to any type of network capable of communication and performing the operations described herein.
In some embodiments, the computing device 800 may include or be connected to one or more display devices 824a-824n. As such, any of the I/O devices 830a-830n and/or the I/O controller 823 may include any type and/or form of suitable hardware, software, or combination of hardware and software to support, enable or provide for the connection and use of the display device(s) 824a-824n by the computing device 800. For example, the computing device 800 may include any type and/or form of video adapter, video card, driver, and/or library to interface, communicate, connect or otherwise use the display device(s) 824a-824n. In one embodiment, a video adapter may include multiple connectors to interface to the display device(s) 824a-824n. In other embodiments, the computing device 800 may include multiple video adapters, with each video adapter connected to the display device(s) 824a-824n. In some embodiments, any portion of the operating system of the computing device 800 may be configured for using multiple displays 824a-824n. One ordinarily skilled in the art will recognize and appreciate the various ways and embodiments that a computing device 800 may be configured to have one or more display devices 824a-824n.
In further embodiments, an I/O device 830 may be a bridge between the system bus 850 and an external communication bus, such as a USB bus, an Apple Desktop Bus, an RS-232 serial connection, a SCSI bus, a FireWire bus, a FireWire 800 bus, an Ethernet bus, an AppleTalk bus, a Gigabit Ethernet bus, an Asynchronous Transfer Mode bus, a FibreChannel bus, a Serial Attached small computer system interface bus, a USB connection, or a HDMI bus.
A computing device 800 of the sort depicted in
The computer system 800 can be any workstation, telephone, desktop computer, laptop or notebook computer, server, handheld computer, mobile telephone or other portable telecommunications device, media playing device, a gaming system, mobile computing device, or any other type and/or form of computing, telecommunications or media device that is capable of communication. The computer system 800 has sufficient processor power and memory capacity to perform the operations described herein.
In some embodiments, the computing device 800 may have different processors, operating systems, and input devices consistent with the device. For example, in one embodiment, the computing device 800 is a smart phone, mobile device, tablet or personal digital assistant. In still other embodiments, the computing device 800 is an Android-based mobile device, an iPhone smart phone manufactured by Apple Computer of Cupertino, Calif., or a Blackberry or WebOS-based handheld device or smart phone, such as the devices manufactured by Research In Motion Limited. Moreover, the computing device 800 can be any workstation, desktop computer, laptop or notebook computer, server, handheld computer, mobile telephone, any other computer, or other form of computing or telecommunications device that is capable of communication and that has sufficient processor power and memory capacity to perform the operations described herein.
Although the disclosure may reference one or more “users”, such “users” may refer to user-associated devices or stations (STAs), for example, consistent with the terms “user” and “multi-user” typically used in the context of a multi-user multiple-input and multiple-output (MU-MIMO) environment.
Although examples of communications systems described above may include devices and APs operating according to an 802.11 standard, it should be understood that embodiments of the systems and methods described can operate according to other standards and use wireless communications devices other than devices configured as devices and APs. For example, multiple-unit communication interfaces associated with cellular networks, satellite communications, vehicle communication networks, and other non-802.11 wireless networks can utilize the systems and methods described herein to achieve improved overall capacity and/or link quality without departing from the scope of the systems and methods described herein.
It should be noted that certain passages of this disclosure may reference terms such as “first” and “second” in connection with devices, mode of operation, transmit chains, antennas, etc., for purposes of identifying or differentiating one from another or from others. These terms are not intended to merely relate entities (e.g., a first device and a second device) temporally or according to a sequence, although in some cases, these entities may include such a relationship. Nor do these terms limit the number of possible entities (e.g., devices) that may operate within a system or environment.
It should be understood that the systems described above may provide multiple ones of any or each of those components and these components may be provided on either a standalone machine or, in some embodiments, on multiple machines in a distributed system. In addition, the systems and methods described above may be provided as one or more computer-readable programs or executable instructions embodied on or in one or more articles of manufacture. The article of manufacture may be a floppy disk, a hard disk, a CD-ROM, a flash memory card, a PROM, a RAM, a ROM, or a magnetic tape. In general, the computer-readable programs may be implemented in any programming language, such as LISP, PERL, C, C++, C#, PROLOG, or in any byte code language such as JAVA. The software programs or executable instructions may be stored on or in one or more articles of manufacture as object code.
While the foregoing written description of the methods and systems enables one of ordinary skill to make and use what is considered presently to be the best mode thereof, those of ordinary skill will understand and appreciate the existence of variations, combinations, and equivalents of the specific embodiment, method, and examples herein. The present methods and systems should therefore not be limited by the above described embodiments, methods, and examples, but by all embodiments and methods within the scope and spirit of the disclosure.
The present application claims the benefit of and priority to U.S. Provisional Application No. 62/154,205, entitled “Receive Side Packet Aggregation,” filed Apr. 29, 2015, the entirety of which is hereby incorporated by reference.