This disclosure relates to communication protocols. In particular, this disclosure relates to buffer management for wireless communication systems.
Continual development and rapid improvement in wireless communications technology have led the way to increased data rates and extensive wireless functionality across many different environments, including the home and business environments. These developments and improvements have been driven in part by the widespread adoption of digital media, including high definition video, photos, and music. The most recent developments in wireless connectivity promise new functionality and data rates far exceeding the rates that the 802.11n and the 802.11TGac standards provide. These recent developments include the Wireless Gigabit Alliance (WiGig) and 802.11TGad 60 GHz wireless specifications.
The 60 GHz specifications provide data transmission rates of up to 7 Gbps in a single stream, which is more than 10 times faster than the highest data rate that the 802.11n multiple input multiple output (MIMO) standard supports. Another benefit of the 60 GHz specifications is that devices in the 60 GHz ecosystem will have the bandwidth to wirelessly communicate significant amounts of information without performance compromises, thereby eliminating the current need for tangles of cables to physically connect devices. WiGig compliant devices may, as examples, provide wireless docking station capability and wirelessly stream high definition video content directly from a Blu-Ray player to a TV with little or no compression required.
Improvements in buffer management are needed for such wireless communication systems, particularly to improve throughput for video, audio, and other types of streams, and more particularly for those streams that have not been guaranteed a particular Quality of Service (QoS).
This description relates to wireless communication under standards such as the IEEE 802.11 standards or the WiGig standards, including the 60 GHz wireless specification promoted by the Wireless Gigabit Alliance and the IEEE 802.11TGad standard. Accordingly, the discussion below makes reference to Service Periods (SPs), such as those defined by the WiGig standard. During the SPs, a source station will communicate, potentially, with multiple destination stations. The techniques described are not limited to WiGig SPs, however, and instead are applicable to any wireless communication protocol that provides for allocations of channel capacity to stations.
The stations may take many different forms. As examples, the stations may be cell phones, smart phones, laptop computers, personal data assistants, pocket computers, tablet computers, portable email devices, or people or animals equipped with transmitters. Additional examples of stations include televisions, stereo equipment such as amplifiers, pre-amplifiers, and tuners, home media devices such as compact disc (CD)/digital versatile disc (DVD) players, portable MP3 players, high definition (e.g., Blu-Ray™ or DVD audio) media players, or home media servers. Other examples of stations include musical instruments, microphones, climate control systems, intrusion alarms, audio/video surveillance or security equipment, video games, network attached storage, network routers and gateways, pet tracking collars, or other devices.
Stations may be found in virtually any context, including the home, business, public spaces, or automobile. Thus, as additional examples, stations may further include automobile audio head ends or DVD players, satellite music transceivers, noise cancellation systems, voice recognition systems, climate control systems, navigation systems, alarm systems, engine computer systems, or other devices.
As shown in
As noted above, a requesting station may specify the source station for any requested SP allocation using a source station identifier (e.g., a unicast source address), and may specify one or more destination stations. A multiple destination station identifier in the request may specify the multiple destination stations. The multiple destination station identifier may be, as examples, a broadcast identifier or multicast identifier (e.g., an identifier established for a predefined group of stations among all of the stations in the network). In other implementations, the requesting station may specify multiple destination stations with individual identifiers for the destination stations.
For the purposes of illustration,
The home media server 106 (or any other source station) may transmit data in one or more data frames or aggregations of data frames, such as A-MPDU or A-MSDU aggregations. In that regard, the home media server 106 may, for example, organize and aggregate the data frames into media access control (MAC) level protocol data units (MPDUs) carried by Physical (PHY) layer protocol data units (PPDUs). In SP1, the home media server 106 transmits an aggregation 308 of data frames 310, 312, and 314 to the laptop 110.
During SP1 302, the home media server 106 sends the aggregation 308 to the laptop 110. Then, within the required interframe spacing 316, the laptop 110 block acknowledges, with the B/ACK frame 318, receipt of the data frames successfully received. In this example, the B/ACK 318 acknowledges successful receipt of data frames 310 and 314, but indicates reception failure for data frame 312. The home media server 106 retransmits the data frame 312. The laptop 110 now successfully receives the data frame 312 and sends an acknowledgement 320.
During SP2 304, the home media server 106 communicates the data frames 322 to the smartphone 112 and receives the ACK 324 from the smartphone. In SP3 306, the home media server 106 communicates the data frames 326 to the gaming system 114, and receives the ACK 328. Each of the SPs 302-306 is supported within the transmit control logic in the source station by a buffer allocation. The buffer allocation provides memory space in which to store the data that the source station will transmit to the destination station. The transmit control logic dynamically adjusts the buffer allocation to facilitate improved throughput between the source station and the destination stations.
The TCL 400 includes, in this example, the onchip processor 406 that oversees the operation of a transmit (Tx) buffer manager 408, Tx engine 410, receive (Rx) engine 412, and an aggregation queue manager 414. The aggregation queue manager 414 may support hardware accelerated aggregation of frames into A-MPDUs, for example. The Tx engine 410 may include logic that, as examples, receives data for transmission from the DMA controller 418, packages the data into frames, and encodes, modulates, and transmits the frames onto the physical (PHY) layer 426 (e.g., an air interface when the stations are wireless stations). Similarly, the Rx engine 412 may include logic that, as examples, receives signals from the PHY layer 426, demodulates, decodes, and unpacks data in received frames, and passes the received data to the DMA controller 418 for storage in the system memory 404.
The onchip processor 406 may execute control firmware 416 or other program instructions that are stored in a firmware memory or other memory. A direct memory access (DMA) controller 418 provides a high speed and efficient data transfer mechanism between the system memory 404, the Tx engine 410, and the Rx engine 412. The system memory 404 need not be on the SoC, but may instead be off chip and connected to the DMA controller 418 or other logic in the TCL 400 through a bus interface. The bus interface preferably provides a dedicated memory interface so that the TCL 400 can obtain the data needed for transmission to the destination stations without exposure to the variability in the transport layer connection 402. In one implementation, the system memory is 1.5 megabytes in size, but the size may vary widely depending on the implementation.
The Tx buffer manager 408 may dynamically allocate and deallocate memory buffers within the system memory to support specific SPs. In some implementations, the Tx buffer manager 408 creates and manages pointers to track the buffer allocations in the system memory 404, but the management may be accomplished in other ways. The Tx buffer manager 408 may be configured to allocate up to a predetermined maximum buffer allocation for an SP. The predetermined maximum may vary based on characteristics of the SP, the traffic that the SP is expected to support, the destination station for the SP, or based on other factors. As examples, the predetermined maximum buffer allocation may be 128 KB or 256 KB.
The Tx buffer manager 408 may not only create the buffer allocations in the system memory 404, but may also dynamically modify the buffer allocations during SPs to facilitate improvements in throughput. As will be explained in more detail below, the onchip processor 406 may monitor the remaining duration of an SP by, as examples, reading a timing register in a set of status registers 428, by running and monitoring a timer or counter, or in other ways. As the SP approaches its end, the Tx buffer manager 408 may reduce the buffer allocation for the SP, and allocate the freed memory to a subsequent SP that has not yet started. The Tx buffer manager 408 may maintain a predetermined minimum buffer allocation for the current SP. Thus, the host may communicate data to the TCL 400 over the transport layer connection 402 for the subsequent SP in advance of that SP, and moreover has more buffer memory in which to store the data for the subsequent SP than might otherwise be available. As a result, when the subsequent SP begins, additional data is immediately available to transmit in the subsequent SP, leading to increased throughput.
Furthermore, the Tx buffer manager 408 can create and dynamically manage buffer allocations for destination stations that may currently be in a power saving mode. In other words, because the source station knows the SP schedule, the source station knows when data transmission may later begin to any particular destination station. Even when the destination station is currently in power saving mode, the destination station will wake up on schedule to receive data. The Tx buffer manager 408 may therefore allocate and dynamically adjust buffer allocations for stations currently in power saving mode to buffer in advance (or provide additional buffer) for the data that will be sent to the destination station after it awakens.
The BAL 500 determines whether the current SP has ended. If not, the BAL 500 determines the remaining time for transmitting data in the current SP (504). As one example, the BAL 500 may determine the remaining time in microseconds (US) as:
RemDataTimeUS=TSF.RemSpDurUS−SIFS_US−ACK_BA_Time_US;
where TSF.RemSpDurUS is the remaining duration in microseconds of the SP as a whole, SIFS_US is the short interframe spacing time in microseconds, and ACK_BA_Time_US is the time typically needed to receive and process a B/ACK from the destination station, in microseconds.
The BAL 500 may also determine the maximum amount of data that could be transmitted given the remaining time for transmission in the current SP (506). The BAL 500 may determine the maximum amount as:
CurSpBufferKB=ceil (RemDataTimeUS*CurRfThroughput/(1024*8*factor));
where CurSpBufferKB is the maximum amount of data that could be transferred given the remaining SP transmit duration, CurRfThroughput is the current data transmission rate over the RF interface in bits per microsecond, 1/(1024*8) converts to KBs per microsecond, and ‘factor’ is a variable tuning parameter that may be used to increase or decrease the CurSpBufferKB result to accommodate uncertainties or to provide a variable guard around the calculation.
The BAL 500 determines whether CurSpBufferKB is less than the current maximum buffer size allocated to the traffic stream active in the current SP. The current maximum buffer size is shown in
When the current maximum buffer size exceeds the amount of data that could be transmitted, then the BAL 500 dynamically updates the buffer allocation for the current SP (508). In one implementation, the BAL 500 frees a specific amount of memory by reducing the buffer allocation currently given to the SP. For example:
FreedTbmKB=CurTS.MaxTbmKB−CurSpBufferKB;
In other words, the BAL 500 calculates an amount of buffer allocation to free as the excess in the current maximum buffer allocation above the maximum amount of data that could possibly be transmitted. The BAL 500 then reduces the current buffer allocation for the traffic stream in the SP, e.g., to be no larger than the maximum amount of data that could possibly be transmitted given the remaining SP time:
TBM[CurTS].MaxTbmKB=CurSpBufferKB;
The BAL 500 also updates the buffer allocation for a subsequent SP (e.g., the next SP) (510). For example:
TBM[NextTS].MaxTbmKB=Min(FreedTbmKB, NextTS.MaxTbmKB);
In other words, the BAL 500 sets the buffer allocation for the next SP (more specifically, for the next traffic stream TS in the next SP) to the minimum of: 1) the amount of buffer memory freed from the current SP and 2) the maximum buffer size that could be assigned for the next SP (more specifically, the maximum buffer size for the next traffic stream in the next SP). In general, the buffer allocation updates may be made for any subsequent SP or TS, not only the next SP or TS. The BAL 500 may also increment the buffer size for a subsequent SP by the amount of buffer memory freed from the current SP. Therefore, a subsequent SP has buffer memory allocated to it, or has additional buffer memory allocated to it, in the amount of buffer memory freed from the current SP. As a result, the host may begin to transfer data to the TCL 400 for a subsequent SP, or transfer additional data to the TCL 400 for the subsequent SP, in advance of the subsequent SP. Additional data is therefore ready for transmission in the TCL 400 immediately when the subsequent SP starts, leading to increased throughput for the subsequent SP.
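The reallocation in steps (508) and (510) may be sketched as follows. The field names mirror the pseudocode; the struct layout itself is an assumption made for the sketch.

```c
/* Illustrative sketch of the dynamic reallocation described above:
 * shrink the current SP's allocation to what can still be transmitted,
 * and hand the freed memory (capped at its own maximum) to the next SP. */

typedef struct {
    unsigned max_tbm_kb;   /* buffer currently usable by the TS, in KB */
} tbm_entry;               /* one TBM[] slot per traffic stream */

typedef struct {
    unsigned max_tbm_kb;   /* maximum buffer size assignable to the TS, KB */
} traffic_stream;

static unsigned min_u(unsigned a, unsigned b) { return a < b ? a : b; }

void rebalance(tbm_entry *cur, tbm_entry *next,
               const traffic_stream *next_ts, unsigned cur_sp_buffer_kb) {
    if (cur_sp_buffer_kb < cur->max_tbm_kb) {
        /* FreedTbmKB = CurTS.MaxTbmKB - CurSpBufferKB */
        unsigned freed_kb = cur->max_tbm_kb - cur_sp_buffer_kb;
        /* TBM[CurTS].MaxTbmKB = CurSpBufferKB */
        cur->max_tbm_kb = cur_sp_buffer_kb;
        /* TBM[NextTS].MaxTbmKB = Min(FreedTbmKB, NextTS.MaxTbmKB) */
        next->max_tbm_kb = min_u(freed_kb, next_ts->max_tbm_kb);
    }
}
```

For example, if the current SP holds a 128 KB allocation but only 48 KB could still be transmitted, 80 KB is freed and assigned to the next SP (subject to that SP's own maximum).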
With respect again to
The BAL 500 also updates the buffer allocation for the SP which has ended (516). For example, the BAL 500 may set the buffer allocation for the SP which has ended to a minimum level:
TBM[CurTS].MaxTbmKB=CurTS.MinTbmKB;
In preparation for the start of the subsequent SP, the BAL 500 may also set the buffer allocation for the TS active in the next SP that is about to start to a predetermined maximum buffer size, e.g., 128 KB or 256 KB, which may be different or the same as the maximum buffer size for the TS in the SP that has just ended (518):
TBM[NextTS].MaxTbmKB=NextTS.MaxTbmKB;
To summarize:
CurTS.MaxTbmKB: represents the maximum buffer size assignable to the traffic stream active in the current SP.
NextTS.MaxTbmKB: represents the maximum buffer size assignable to the traffic stream active in the next SP.
TBM[CurTS].MaxTbmKB: represents the maximum buffer usable by the currently active TS in the current SP.
TBM[NextTS].MaxTbmKB: represents the maximum buffer usable by the TS that will be active in the next SP.
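The end-of-SP handling in steps (516) and (518) may be sketched as follows. The names mirror the pseudocode summarized above; the struct layouts are assumptions made for the sketch.

```c
/* Illustrative sketch of the end-of-SP reset: the ended SP's allocation
 * drops to its minimum, and the SP about to start is raised to its full
 * predetermined maximum (e.g., 128 KB or 256 KB). */

typedef struct {
    unsigned max_tbm_kb;  /* CurTS/NextTS.MaxTbmKB: maximum assignable, KB */
    unsigned min_tbm_kb;  /* CurTS.MinTbmKB: minimum maintained, KB */
} ts_limits;

typedef struct {
    unsigned max_tbm_kb;  /* TBM[TS].MaxTbmKB: buffer usable by the TS, KB */
} tbm_alloc;

void on_sp_end(tbm_alloc *cur, const ts_limits *cur_ts,
               tbm_alloc *next, const ts_limits *next_ts) {
    /* (516) TBM[CurTS].MaxTbmKB = CurTS.MinTbmKB */
    cur->max_tbm_kb = cur_ts->min_tbm_kb;
    /* (518) TBM[NextTS].MaxTbmKB = NextTS.MaxTbmKB */
    next->max_tbm_kb = next_ts->max_tbm_kb;
}
```

The minimum for the ended SP keeps a small allocation in place for its next scheduled service period, while the full maximum for the starting SP lets the host stage as much data as possible.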
Diagram 604 shows the SP1 buffer allocation 608 for the initial and current SP, SP1, and the SP2 buffer allocation 610 for the subsequent SP, SP2. The buffer allocation 608 shows that SP1 has been allocated, initially, a maximum amount of buffer (e.g., 128 KB), while the buffer allocation 610 shows that the subsequent SP2 has been allocated only some predetermined minimal amount of buffer from the system memory 404. The SP1 transport layer activity 612 shows that the host is using the bus to send data to the TCL 400 in preparation for SP1. Little to no SP2 transport layer activity 614 occurs for SP2 until later, as will be explained. The initial flow of data from the host to the TCL 400 increases the number of SP1 frames 616 in system memory 404. The number of SP2 frames 618 in system memory remains minimal to none until later, as will also be explained.
In
The BAL 500 monitors the remaining duration of SP1. At the time indicated by reference numeral 620 (about 5500 uS in this example), the BAL 500 begins to reduce the buffer allocation for SP1, as shown by the decreasing SP1 buffer allocation 608. The reduction may proceed as described in detail above with respect to
SP2 will not begin until approximately time 6000 uS. Between 5500 uS and 6000 uS, however, note that the host communicates data for SP2 to the TCL 400 over the transport layer connection 402. This communication activity is shown by the SP2 transport layer activity 614, and by the increase in the number of frames in system memory 404 for SP2, as shown by the number of SP2 frames 618. As a result, when SP2 begins, the system memory 404 already has stored more data than it ordinarily would have in advance for SP2. Thus, the Tx engine 410 may send more data more quickly for SP2, resulting in improvements in throughput.
Similar levels of throughput may be achieved using only a 128 KB maximum buffer allocation and the dynamic buffer adjustment described above, compared to a static 256 KB buffer allocation.
The host processor 1004 executes the logic 1010. The logic 1010 may include an operating system, application programs, or other logic. The host processor 1004 is in communication with the TCL 400. As described above, the TCL 400 may handle transmission and reception of data over the physical layer 426. To that end, the TCL 400 receives data for transmission from the host processor 1004 and host memory 1006, and provides received data to the host processor 1004 and host memory 1006. The TCL 400 executes dynamic buffer allocation logic as described above. The TCL 400 may take the form of a dedicated ASIC, SoC, or other circuitry in the station 1000 that interfaces with the host processor 1004 to transmit and receive data over the physical layer 426. As a result, the station 1000 may experience improved throughput for its communications to destination stations. The station 1000 may take many forms, as noted above, and is not limited to a home media server 106.
The dynamic buffer management noted above facilitates increased throughput for video, audio, and other types of streams, whether communicated over a wired or wireless physical medium. The dynamic buffer management may also provide a level of throughput, using a smaller maximum buffer allocation, that is close to or that exceeds the level of throughput using a larger fixed buffer allocation. The dynamic buffer management particularly facilitates throughput increases for those streams that have not been guaranteed a particular Quality of Service (QoS).
The methods, stations, and logic described above may be implemented in many different ways in many different combinations of hardware, software, or both hardware and software. For example, all or parts of the station may include circuitry in one or more controllers, microprocessors, or application specific integrated circuits (ASICs), or may be implemented with discrete logic or components, or a combination of other types of circuitry. All or part of the logic may be implemented as instructions for execution by a processor, controller, or other processing device and may be stored in a machine-readable or computer-readable medium such as flash memory, random access memory (RAM), read only memory (ROM), erasable programmable read only memory (EPROM), or other machine-readable medium such as a compact disc read only memory (CDROM), or magnetic or optical disk. While various embodiments of the invention have been described, it will be apparent to those of ordinary skill in the art that many more embodiments and implementations are possible within the scope of the invention. Accordingly, the invention is not to be restricted except in light of the attached claims and their equivalents.