The present disclosure relates generally to wireless communication networks and more particularly to link adaptation techniques and stochastically selecting a transmit parameter from a set of available transmit parameters.
An infrastructure-based wireless network typically includes a communication network with fixed and wired gateways. Many infrastructure-based wireless networks employ a mobile unit or host which communicates with a fixed base station that is coupled to a wired network. The mobile unit can move geographically while it is communicating over a wireless link to the base station. When the mobile unit moves out of range of one base station, it may connect or "hand over" to a new base station and begin communicating with the wired network through the new base station.
In comparison to infrastructure-based wireless networks, such as cellular networks or satellite networks, ad hoc networks are self-forming networks which can operate in the absence of any fixed infrastructure, and in some cases the ad hoc network is formed entirely of mobile nodes. An ad hoc network typically includes a number of geographically-distributed, potentially mobile units, sometimes referred to as "nodes," which are wirelessly connected to each other by one or more links (e.g., radio frequency communication channels). The nodes can communicate with each other over a wireless medium without the support of an infrastructure-based or wired network. Links or connections between these nodes can change dynamically in an arbitrary manner as existing nodes move within the ad hoc network, as new nodes join or enter the ad hoc network, or as existing nodes leave or exit the ad hoc network.
Wireless media are inherently volatile and channel quality of a wireless communication link between a transmitter node and a receiver node can vary considerably. The channel quality of a wireless link depends on radio propagation parameters such as path loss, shadow fading, multi-path fading, co-channel interference from other transmitting nodes, sensitivity of the receiver node, available transmitter power margin, and the like. To help ensure a certain level of performance for data communications between nodes, quality of service (QoS) procedures are often implemented. QoS procedures take place at several communication layers in a communication protocol stack. For example, at the physical layer, QoS is synonymous with signal-to-interference-and-noise ratio (SINR) or bit error rate (BER) at the receiver of each user. At the data link control (DLC) layer and medium access control (MAC) layer, QoS is usually expressed by packet error rate (PER), minimum achievable data rate and maximum tolerable delay guarantees for users. At higher layers, QoS can be perceived as certain data throughput, delay, delay jitter guarantees, or in terms of fairness in rate allocation. In multi-hop networks, QoS has meaning at the network layer in terms of end-to-end bandwidth/delay guarantees.
To meet QoS requirements, many modern wireless communications systems employ link adaptation techniques, sometimes referred to as adaptive transmission techniques or adaptive modulation and coding (AMC) techniques, to improve throughput or data transmission rates (bits/sec) while maintaining an acceptable bit error rate (BER) at the receiver node regardless of link quality. To implement link adaptation techniques, nodes can dynamically adapt one or more transmit parameters depending on integrity and quality of the channel or link between two nodes. For example, a transmitter node can use transmission feedback information (e.g., received signal quality) to dynamically adapt or select one or more transmit parameters to “match” the channel conditions on the radio link as those conditions change. By exploiting channel information present at the transmitter node, an optimal combination of transmit parameters can be selected so that data throughput (i.e., transmission rates) and system capacity are improved or optimized while acceptable bit error rates (BERs) are achieved. At the physical layer, link adaptation technologies can be used to adjust transmit parameters such as transmission data rate, transmission power, modulation level, symbol rate, coding rate and other signal and protocol parameters to mitigate fluctuation in link quality and to maintain acceptable link quality. A number of different strategies can be used for selecting particular transmit parameters from a set of available transmit parameters.
The accompanying figures, where like reference numerals refer to identical or functionally similar elements throughout the separate views, together with the detailed description below, are incorporated in and form part of the specification, and serve to further illustrate embodiments of concepts that include the claimed invention, and explain various principles and advantages of those embodiments.
Skilled artisans will appreciate that elements in the figures are illustrated for simplicity and clarity and have not necessarily been drawn to scale. For example, the dimensions of some of the elements in the figures may be exaggerated relative to other elements to help to improve understanding of embodiments of the present invention.
The apparatus and method components have been represented where appropriate by conventional symbols in the drawings, showing only those specific details that are pertinent to understanding the embodiments of the present invention so as not to obscure the disclosure with details that will be readily apparent to those of ordinary skill in the art having the benefit of the description herein.
In one implementation, a link adaptation method is provided. A node stores a set of transmit parameters (ν1 . . . νi) and corresponding selection probabilities (P(ν1) . . . P(νi)) for each of the transmit parameters (ν1 . . . νi). Each of the transmit parameters (ν1 . . . νi) is associated with a corresponding selection probability. The node stochastically selects a particular transmit parameter from a set of available transmit parameters based on the selection probabilities (P(ν1) . . . P(νi)) associated with each of the transmit parameters, and then transmits a packet according to the particular transmit parameter. The node then receives transmission feedback information (TFI) which relates to transmission of the packet according to the particular transmit parameter, and uses the transmission feedback information to derive performance statistics related to the transmission of the packet according to the particular transmit parameter. The performance statistics comprise an average performance (μp) associated with transmission of the packet according to the particular transmit parameter and a performance standard deviation (σp) for transmission of the packet according to the particular transmit parameter. The node uses the performance statistics to specify an estimated performance function (EPF) (F) for the particular transmit parameter that reflects estimated performance associated with the particular transmit parameter. The EPF (F) comprises an aggressiveness factor (β) for controlling aggressiveness of the EPF (F). The node determines a normalized performance ($\tilde{F}_i$) that is the ratio of a performance (Fi) for transmit parameter i to the maximum performance (Fmax), and calculates a stretch factor parameter (α) for a target performance ratio (γ). The node then updates a selection probability computation function (SPCF) based on the stretch factor parameter (α) and the measured normalized performance ($\tilde{F}_i$) to generate updated selection probabilities (P(ν1) . . . P(νi)) corresponding to each transmit parameter (ν1 . . . νi) in the set of available transmit parameters (ν1 . . . νi).
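By way of a non-limiting illustration that is not part of the original summary, the following Python sketch shows one possible reading of the loop described above. The names link_adaptation_step, transmit_and_get_tfi and the stats structure are hypothetical, the running statistics are simplified to a mean update, and the stretch factor (α) is assumed to have already been computed for the target ratio (γ) as detailed later in this description.

```python
import random

def link_adaptation_step(params, probs, stats, beta, alpha, transmit_and_get_tfi):
    """One iteration of the method summarized above (hypothetical sketch).

    params               -- list of available transmit parameters (v1 .. vi)
    probs                -- current selection probabilities, one per parameter
    stats                -- dict mapping parameter -> [mean, std, sample_count]
    beta                 -- aggressiveness factor in [0, 1]
    alpha                -- stretch factor derived from the target ratio gamma
    transmit_and_get_tfi -- callable that transmits with the chosen parameter and
                            returns a performance sample derived from the TFI
    """
    # Stochastically select a transmit parameter according to its probability.
    v = random.choices(params, weights=probs, k=1)[0]
    # Transmit and fold the resulting performance sample into that parameter's
    # statistics (a running standard deviation would also be maintained here).
    sample = transmit_and_get_tfi(v)
    mean, std, n = stats[v]
    stats[v] = [(mean * n + sample) / (n + 1), std, n + 1]
    # Estimated performance function (EPF) per parameter: F = mu - sigma*(1 - 2*beta),
    # clamped to a small positive value so the normalization below stays defined.
    F = [max(1e-9, m - s * (1.0 - 2.0 * beta)) for m, s, _ in (stats[p] for p in params)]
    f_norm = [f / max(F) for f in F]
    # Selection probability computation function (SPCF): P(F_i) proportional to
    # f_norm[i]**alpha, normalized so that all probabilities sum to one.
    weights = [f ** alpha for f in f_norm]
    total = sum(weights)
    return [w / total for w in weights]
```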
In the following description, for purposes of illustrating one example application of the methods and apparatus described herein, opportunistic link adaptation techniques will be described. Although this description is provided to illustrate one example of one possible application of the disclosed techniques, similar techniques can be implemented in a wide variety of other applications for stochastically selecting a sample/object/parameter from a set of samples/objects/parameters to optimize performance of a particular system. For example, similar techniques can be used to optimize transmit power, fragmentation sizes, channel bandwidth, channel frequency, acknowledgement policy, access control parameters, error-correction coding schemes, and the like.
Communication Network
As used herein, the term “transmitter/source node 120” is defined to be the source of the transmission, not necessarily the source of a particular packet. As used herein, the term “receiver/destination node 130” is defined to be the destination for the transmission, not necessarily the final destination for a particular packet. The term “receiver/destination node 130” can refer to a neighbor node or a next hop node from the transmitter/source node 120. In some situations, the receiver/destination node 130 can be the actual destination of a packet transmitted from the transmitter/source node 120.
A brief overview of some conceptual components in a node will be provided with reference to
The processor 201 includes one or more microprocessors, microcontrollers, DSPs (digital signal processors), state machines, logic circuitry, or any other device or devices that process information based on operational or programming instructions. Such operational or programming instructions are stored in the program memory 209. The program memory 209 can be an IC (integrated circuit) memory chip containing any form of RAM (random-access memory) and/or ROM (read-only memory), a floppy disk, a CD-ROM (compact disk read-only memory), a hard disk drive, a DVD (digital video disc), a flash memory card or any other medium for storing digital information. One of ordinary skill in the art will recognize that, when the processor 201 has one or more of its functions performed by a state machine or logic circuitry, the memory 209 containing the corresponding operational instructions may be embedded within the state machine or logic circuitry. Although not illustrated in
The antenna 206 comprises any known or developed structure for radiating and receiving electromagnetic energy in the frequency range containing the carrier frequencies.
The transmitter circuitry 203 and the receiver circuitry 205 enable the node 200 to communicate information to other nodes and to receive information from the other nodes. In this regard, the transmitter circuitry 203 and the receiver circuitry 205 include conventional circuitry to enable digital or analog transmissions over a communication channel. The implementations of the transmitter circuitry 203 and the receiver circuitry 205 depend on the implementation of the node 200. For example, the transmitter circuitry 203 and the receiver circuitry 205 can be implemented as a modem, or as conventional transmitting and receiving components of two-way communication devices. When the transmitter circuitry 203 and the receiver circuitry 205 are implemented as a modem, the modem can be internal to the node 200 or operationally connectable to the node 200 (e.g., embodied in a radio frequency (RF) modem implemented on Network Interface Card (NIC) such as a Personal Computer Memory Card International Association (PCMCIA) compliant card). For a communication device, the transmitter circuitry 203 and the receiver circuitry 205 are preferably implemented as part of the device hardware and software architecture in accordance with known techniques.
The receiver circuitry 205 is designed to receive wired or wireless signals within multiple frequency bandwidths. The receiver circuitry 205 may optionally comprise any number of receivers such as a first receiver and a second receiver, or one receiver designed to receive in two or more bandwidths. The transceiver 202 includes at least one set of transmitter circuitry 203. At least one transmitter 203 may be designed to transmit to multiple devices on multiple frequency bands. As with the receiver 205, dual or multiple transmitters 203 may optionally be employed.
Most, if not all, of the functions of the transmitter circuitry 203 and/or the receiver circuitry 205 and/or link adaptation module 215 may be implemented in a processor, such as the processor 201. It will be appreciated, however, that the processor 201, the transmitter circuitry 203, the receiver circuitry 205 and link adaptation module 215 have been artificially partitioned herein to facilitate a better understanding.
As described below, the link adaptation module 215 of the node 200 can dynamically select transmit parameters for transmitting data packets over a wireless communication link 110 to the receiver/destination node 130.
The link adaptation module 310 can be implemented as part of the medium access controller module 340 or as a separate sub-layer or sub-module 310 which communicates with the medium access controller module 340. The link adaptation module 310 can generally be used to select a particular transmit parameter from a set of available transmit parameters as part of a link adaptation method 400 to optimize performance of a transmitter node. As used herein, the term “transmit parameter” can generally refer to link adaptation parameters including physical layer parameters such as a transmission data rate, transmission power level, coding rate, modulation level and transmission power. As used herein, the term “performance” can generally refer to parameters which can be used to characterize performance of a transmission over a wireless link, such as transmission throughput, channel efficiency, transmission time, transmission energy, queuing delay, end-to-end latency, voice/video quality, etc.
Operations of the MAC module 340 and the physical layer module 350 are well-known in the art and will not be described in detail herein. In
The link adaptation module 310 comprises a stochastic transmit parameter selection module (STPSM) 325, a selection probability calculation module (SPCM) 360 that includes a transmission statistics computation module (TSCM) 362, a performance estimator module 365 and a selection probability computation module 370, and a storage block 380 that stores a finite list of available transmit parameters and their corresponding selection probabilities.
In one embodiment, each particular value (νi) of a transmit parameter is associated with a particular selection probability P(νi) and vice-versa. Each selection probability P(νi) is dependent on an estimated performance function (EPF) (Fi) that is being controlled. The sum of all selection probabilities P(νi) is one (1). As used herein, the term "selection probability" P(νi) refers to a fraction that reflects how often a particular transmit parameter (νi) (e.g., a particular data rate) in a set of possible transmit parameters (ν1, . . . , νi) is to be selected relative to all of the other transmit parameters.
In one implementation, illustrated below in Table 1, the storage block 380 can store the list of available transmit parameters (ν1, . . . , νi) and their corresponding selection probabilities (P(ν1), . . . , P(νi)) in a look-up table, where each of the selection probabilities (P(ν1), . . . , P(νi)) is dependent on a corresponding estimated performance function (F1, . . . , Fi). In one implementation, the link adaptation module 310 can initially be pre-loaded with a set of default transmit parameters (ν1, . . . , νi) that are then adaptively re-computed each time a packet is transmitted (or alternatively, each time method 400 executes). Techniques for updating the transmit parameters (ν1, . . . , νi) and their corresponding selection probabilities (P(ν1), . . . , P(νi)) will be described in greater detail below.
As indicated at block 320 and step 420, the link adaptation module 310 operates any time the node prepares to transmit a packet or group of packets (or other unit of data such as a frame, block, or pre-defined transmission burst) over a wireless communication link or "over-the-air (OTA)." When the link adaptation module 310 prepares to send the packet, at step 430, the STPSM 325 stochastically selects one of the transmit parameters from the set of available transmit parameters (ν1, . . . , νi) that is stored at storage block 380, based on the selection probabilities associated with each of the transmit parameters. In one implementation, the STPSM 325 comprises a random number generator (or other source of pseudo-random values) which generates a random number that is compared against the set of selection probabilities; the transmit parameter that corresponds to the generated random number is selected for transmission. This procedure can be efficiently performed using lookup tables that may be preloaded or computed at runtime, as illustrated in the sketch below.
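As a non-authoritative sketch of the selection at step 430 (assuming, purely for illustration, that the transmit parameters and their selection probabilities are available as parallel lists), the comparison of a random number against the cumulative selection probabilities could look as follows in Python:

```python
import bisect
import itertools
import random

def stochastic_select(params, probs):
    """Pick one transmit parameter; probs[i] is the selection probability of
    params[i] (the probabilities are assumed to sum to one)."""
    # Cumulative distribution; in an embedded implementation this table could be
    # precomputed whenever the selection probabilities are updated.
    cumulative = list(itertools.accumulate(probs))
    r = random.random() * cumulative[-1]        # uniform draw in [0, total)
    return params[bisect.bisect_right(cumulative, r)]

# Hypothetical example: three data rates (Mb/s) with unequal selection probabilities.
rates = [6, 12, 24]
selection_probabilities = [0.2, 0.3, 0.5]
chosen_rate = stochastic_select(rates, selection_probabilities)
```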
At step 440, the link adaptation module 310 sends the packet to the MAC module 340 along with an indication that the packet is to be transmitted according to the selected transmit parameter. The MAC module 340 processes the packet to prepare it for transmission over the wireless link in accordance with the selected transmit parameter, and passes the packet to the physical layer module 350, where the packet is prepared for transmission in accordance with the selected transmit parameter. At step 450, the physical layer module 350 then transmits the packet, which is transmitted in accordance with the selected transmit parameter, over the wireless link towards its destination node.
After transmitting the packet according to the selected transmit parameter, at step 460, the physical layer module 350 eventually receives feedback information regarding the transmission (referred to herein as “transmission feedback information”) from other nodes and passes this transmission feedback information up the protocol stack. In one implementation, the transmission feedback information may comprise explicit acknowledgment (ACK) messages sent from the destination node(s) that indicate that the data sent by the source node arrived properly (i.e., that report whether the source node's data payload has been successfully delivered to the destination node) or explicit negative acknowledgment (NACK) messages sent from the destination node which indicate that the destination node has detected a problem with the data payload. In other implementations, the transmission feedback information may take the form of implicit ACK/NACK messages (i.e., where the source node does not receive any response from the destination node, and the source node assumes that there was a problem with the data payload such as a lost packet). Such transmission feedback may occur immediately or be delayed using a variety of delayed-acknowledgement methods. Transmission feedback information is processed at the time it is received in order to keep performance estimation as up-to-date as possible.
The transmission statistics computation module (TSCM) 362 uses/processes the transmission feedback information received over the wireless link to determine or derive (e.g., collect, measure and/or compute) transmission statistics related to the transmission of the packet that was transmitted at (or in accordance with) the selected transmit parameter. The transmission statistics are "indicators of transmission performance" and can include, for example, the mean and standard deviation of the estimated transmission performance function. Actual transmission performance for a particular setting is not directly measurable because the link is live (i.e., being used for communication and probing at the same time). Different transmit parameters are used at different times to gather information about performance using different settings; the overall performance is a function of the usage of a variety of transmit parameters. Only an aggregate performance can be directly measured. Individual performance for each transmit parameter has to be estimated using limited information spread over a period of time. Thus, a transmit parameter that is rarely used will have a lot of uncertainty associated with it. Conversely, a transmit parameter that is used often will have a lower uncertainty: its actual performance will be closer to the average measured performance. Thus, it is beneficial to use an average performance together with an uncertainty factor (such as a standard deviation) in estimating the performance of a link using a particular transmit parameter. In the implementation illustrated in
At step 460, the transmission statistics computation module (TSCM) 362 uses the transmission feedback information to derive performance statistics related to the transmission of the packet that was transmitted at the selected transmit parameter. In one implementation, the transmission statistics computation module (TSCM) 362 processes transmission feedback information received over the wireless link to determine (e.g., collect, measure and/or compute) the performance statistics related to transmission of the packet that was transmitted at the selected transmit parameter. In this particular implementation, the performance statistics are "indicators of transmission performance" and can include, for example, an average performance (μF) and a performance standard deviation (σF). The average performance (μF) is straightforward to compute: it is a function that estimates performance at one particular transmit parameter, and there are as many estimates as there are transmit parameters. The performance standard deviation (σF) is largely dependent on a choice of standard deviation measurement method and a choice of confidence interval.
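The disclosure does not prescribe how these statistics are maintained; the following Python sketch shows one common choice (Welford's online algorithm) for keeping a running average performance (μF) and standard deviation (σF) per transmit parameter as feedback samples arrive. The class name and the notion of a per-packet performance sample are assumptions made for illustration only.

```python
import math

class RunningStats:
    """Incremental mean and standard deviation for one transmit parameter
    (Welford's online algorithm); one instance would be kept per parameter."""

    def __init__(self):
        self.n = 0          # number of performance samples seen so far
        self.mean = 0.0     # running average performance (mu_F)
        self._m2 = 0.0      # running sum of squared deviations

    def update(self, sample):
        """Fold in one per-packet performance sample derived from the TFI."""
        self.n += 1
        delta = sample - self.mean
        self.mean += delta / self.n
        self._m2 += delta * (sample - self.mean)

    @property
    def std(self):
        """Sample standard deviation (sigma_F); zero until two samples exist."""
        return math.sqrt(self._m2 / (self.n - 1)) if self.n > 1 else 0.0
```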
Estimated Performance Function (EPF)
The performance statistics are provided to the performance estimator module 365 of the link adaptation module 310. At step 465, the performance estimator module 365 uses the performance statistics (for the packet(s) that were transmitted at the selected transmit parameter) to create or update the estimated performance function (EPF) for the selected transmit parameter. In this implementation, the EPF is generated based on the average performance (μF) and the performance standard deviation (σF). Stated differently, the average performance (μF) and the performance standard deviation (σF) are used to define the estimated performance function (EPF) (F) that reflects estimated performance associated with the selected transmit parameter. An example of the EPF is shown below in equation (A1).
$F = \mu_F - \sigma_F \cdot (1 - 2\beta)$ (A1)
As noted above, the performance standard deviation (σF) is largely dependent on a choice of standard deviation measurement method and a choice of confidence interval, and this dependency can be used to control the "aggressiveness" of the transmit parameter selection algorithm. Changing the confidence interval or the standard deviation measurement method does not make one transmit parameter more likely to be better than another from a performance standpoint.
Aggressiveness Factor (β)
In the estimated performance function (EPF), an aggressiveness factor (β) is used to change the EPF (F) from a lower end of an arbitrary confidence interval to an upper end.
There are two rationales for measuring the standard deviation (σF) of the performance estimate.
If the objective is to try transmit parameters that are potentially better but have not been investigated or tried often, it makes sense to favor transmit parameters with large means (or averages) and large standard deviations, which indicate that these transmit parameters have not been tried often and that their predicted performance could be good. In this "aggressive" case, a large value of the EPF (F) is the most favorable criterion, and therefore it is beneficial to use a larger aggressiveness factor (β), where an aggressiveness factor (β) of one (1) is the most aggressive setting. If the aggressiveness factor (β) is 1, then the value of the EPF (F) is equal to the sum of the average performance (μF) and the performance standard deviation (σF), and is therefore at the high end of the range of possible values of the EPF (F).
If the objective is to use transmit parameters that are known to perform well, it is beneficial to favor transmit parameters with large means and small standard deviations (which indicate that they are good, based on past experience). In this “conservative” case, a small value of the EPF (F) is the most favorable criterion, and therefore it is beneficial to use a smaller aggressiveness factor (β), where an aggressiveness factor (β) of zero (0) is the most conservative setting. If aggressiveness factor (β) is 0 then the value of the EPF (F) is equal to the difference between the average performance (μF) and the performance standard deviation (σF), and is therefore at the low end of the range of possible values of the EPF (F).
An aggressiveness factor of 0.5 means that the value of the EPF (F) will always be the average performance (μF), which will naturally favor optimistic predictions.
In one implementation, the performance standard deviation (σF) is preferably calculated using a Student's t-distribution, given that the number of samples is often limited by the available bandwidth of the system. In the context of transmit parameter selection, it is preferable to use an aggressive approach for initial trials and a conservative approach for final trials (those that occur close to a packet being dropped and no longer retried). If the overall success rate is fairly low (for example, when the system is being trained or the environment changes too fast), an initially aggressive approach can be subdued and made more conservative. The range of variation of the performance standard deviation (σF) is driven by the confidence level assigned to the t-distribution, since the performance standard deviation (σF) itself is not being estimated; rather, the confidence interval of the average performance (μF) is being estimated.
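A minimal, non-authoritative sketch of how equation (A1) and the Student's t confidence interval described above could be combined is given below; it assumes SciPy is available, treats the interval half-width on the mean as the uncertainty term (σF), and uses a 95% confidence level purely as an illustrative choice.

```python
import math
from scipy.stats import t

def estimated_performance(mean, std, n, beta, confidence=0.95):
    """Estimated performance function of equation (A1), F = mu - sigma*(1 - 2*beta),
    where sigma is taken as the half-width of a Student's t confidence interval on
    the mean. beta = 1 is the most aggressive setting, beta = 0 the most
    conservative, and beta = 0.5 reduces F to the average performance alone."""
    if n < 2:
        return mean                         # not enough samples for an interval
    half_width = t.ppf((1.0 + confidence) / 2.0, df=n - 1) * std / math.sqrt(n)
    return mean - half_width * (1.0 - 2.0 * beta)
```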
In one implementation, the EPF is converted into a selection probability or selection order. The EPFs are actual possible outcomes (i.e., not random variables). The performance estimates are random variables, and therefore a probability of one being larger than the other can be established. This probability, however, is equivocal (equal to 0.5) when there are few measurements. The “approach”, therefore, is a better criterion for selecting an order. Arbitrarily, an EPF that is twice as large should be twice as likely to be selected. This can be generalized to:
$P(F_i) = k^{\alpha} \cdot P(F_j)$ for $F_i = k \cdot F_j$ (B1).
The EPFs (Fi, Fj) are performance estimate functions associated with particular transmit parameters i, j. The performance of a particular transmit parameter is an "estimate" since it cannot be measured when it is only used sporadically. In equation (B1), k represents the ratio of the estimated performance function (EPF) associated with transmit parameter i (Fi) to the estimated performance function (EPF) associated with transmit parameter j (Fj), and the parameter α is a "stretch factor parameter." The stretch factor parameter (α) determines the proportionality relationship between the priority order and the performance estimation. For example, if Fi = 2·Fj and α = 2, then transmit parameter i is selected four times as often as transmit parameter j. The setting of the stretch factor parameter (α) depends on how much penalty is acceptable from a performance perspective, as compared to the maximum performance, in order to "try out" other possibilities. A large stretch factor parameter (α) allows larger performance estimates to be selected to the exclusion of lower performances. A small stretch factor parameter (α) (e.g., close to 0) flattens the spectrum of choice. For an infinitely small stretch factor parameter (α), all transmit parameters are essentially equiprobable. This, of course, is only useful if all possibilities are to be tried evenly to set a baseline.
As will be described below, the stretch factor parameter (α) is ultimately used to update the SPCF at step 470, and the updated SPCF is then used to compute selection probabilities at step 480. Before describing step 470, the SPCF and its derivation, and techniques for calculating stretch factor parameter (α) will now be described.
Selection Probability Computation Function (SPCF)
As noted above, the selection probability computation module 370 includes a SPCF. Derivation of the SPCF will now be described.
The average performance (Fμ) is expressed in equation (B2):

$F_\mu = \sum_i P(F_i) \cdot F_i$ (B2)

It is to be noted that the average performance (Fμ) in equation (B2) is different from the average performance (μF) defined above with respect to equation (A1). Equation (B2) is the average performance (Fμ) of all the transmit parameters taken together, weighted according to their selection probabilities (P(Fi)). In other words, equation (B2) represents the "final" performance as experienced by the user/system.
All performances can be normalized to the maximum performance (Fmax) to define a normalized performance ($\tilde{F}_i$) as shown in equation (B3). The normalized performance ($\tilde{F}_i$) is the ratio of a performance (Fi) to the maximum performance (Fmax):

$\tilde{F}_i = F_i / F_{max}$ (B3)
Based on equation (B1), the probability of each performance can be written as:

$P(F_i) = (F_i / F_{max})^{\alpha} \cdot P(F_{max})$ (B4)

Combining equation (B4) with (B3) provides equation (B5):

$P(F_i) = \tilde{F}_i^{\alpha} \cdot P(F_{max})$ (B5)
Equation (B5) is the SPCF that is used by the selection probability computation module 370 at step 480 to compute the selection probability P(Fi) associated with each performance value relative to the selection probability P(Fmax) of the maximum performance being selected. The SPCF requires that an appropriate value be calculated for the stretch factor parameter (α).
Calculating the Stretch Factor Parameter (α)
Based on equations (B1) and (B4), the sum of the individual selection probabilities P(Fi) is:

$\sum_i P(F_i) = P(F_{max}) \cdot \sum_i \tilde{F}_i^{\alpha}$ (B6)

The sum of all probabilities must be equal to 1, and therefore equation (B6) can be expressed as:

$P(F_{max}) = 1 / \sum_i \tilde{F}_i^{\alpha}$ (B7)
In other words, the selection probability of the maximum performance (P(Fmax)) is inversely proportional to the sum of the normalized performances ($\tilde{F}_i$) raised to the stretch factor parameter (α) power, and the value of the stretch factor parameter (α) can be constrained such that the average performance (Fμ) fulfills certain requirements. For example, the average performance (Fμ) should not drop merely because the system is "trying out" transmit parameters, and should not be more than (1−γ)·100% lower than the maximum performance (Fmax) possible, where γ represents a target performance ratio. This constraint can be expressed in equation (B8) as:
$F_\mu \ge \gamma \cdot F_{max}$ (B8).
When the average performance (Fμ) in equation (B2) is related to the maximum performance (Fmax) and the normalized performance ($\tilde{F}_i$), then equation (B2) can be represented as equation (B9):

$F_\mu = F_{max} \cdot \sum_i P(F_i) \cdot \tilde{F}_i$ (B9)

The relationship in equation (B9) can be rewritten as expressed in equation (B10):

$F_\mu / F_{max} = \sum_i P(F_i) \cdot \tilde{F}_i$ (B10)

Expanding the term $\sum_i P(F_i) \cdot \tilde{F}_i$ of equation (B10) using the selection probability of each performance as defined in equation (B5) results in equation (B11):

$\sum_i P(F_i) \cdot \tilde{F}_i = P(F_{max}) \cdot \sum_i \tilde{F}_i^{\alpha+1}$ (B11)

The combination of equations (B10) and (B11) results in:

$F_\mu / F_{max} = P(F_{max}) \cdot \sum_i \tilde{F}_i^{\alpha+1}$ (B12)

Equation (B8) requires that the ratio of the average performance (Fμ) to the maximum performance (Fmax) be at least the target performance ratio (γ), i.e., $F_\mu / F_{max} \ge \gamma$, and therefore equation (B12) can be expressed as described in equation (B13):

$\gamma \le P(F_{max}) \cdot \sum_i \tilde{F}_i^{\alpha+1}$ (B13)

When equation (B13) is combined with equation (B7), the result is described in equation (B14), which can then be re-written as equation (B15):

$\gamma \le \frac{\sum_i \tilde{F}_i^{\alpha+1}}{\sum_i \tilde{F}_i^{\alpha}}$ (B14)

$\gamma \cdot \sum_i \tilde{F}_i^{\alpha} \le \sum_i \tilde{F}_i^{\alpha+1}$ (B15)
At step 465, the performance estimator module 365 determines a stretch factor parameter (α) that fulfills inequality (B15) for the selected target performance ratio (γ). Different algorithms may be used to solve inequality (B15) to find a value of the stretch factor parameter (α). A root of the corresponding nonlinear equation can be found, for example, using Newton's method, the secant method, Brent's method, and the like. Newton-Girard formulas may be used for integer stretch factor parameter (α) values by using the identities between the symmetric polynomials and the sums of the αth powers of their variables. Although all these well-known methods can be used to find stretch factor parameter (α) values, the optimal embedded implementation is a look-up table. For each quantized value of $\tilde{F}_i$ (between 0 and 1) and quantized value of the stretch factor parameter (α) (somewhere between 1 and 10), a look-up table can be computed for all $\tilde{F}_i^{\alpha}$.
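As an illustrative sketch only (bisection is one of several equally valid root-finding choices, and the function names are hypothetical), the following Python code finds a stretch factor parameter (α) satisfying inequality (B15) for a selected target performance ratio (γ), relying on the fact that the ratio in (B14) does not decrease as α grows:

```python
def perf_ratio(f_norm, alpha):
    """Average-to-maximum performance ratio F_mu / F_max implied by the SPCF for
    normalized performances f_norm (each in (0, 1], with the best entry equal
    to 1). This is the right-hand side of inequality (B14)."""
    return sum(f ** (alpha + 1) for f in f_norm) / sum(f ** alpha for f in f_norm)

def solve_stretch_factor(f_norm, gamma, lo=0.0, hi=32.0, iters=40):
    """Return a stretch factor alpha in [lo, hi] satisfying inequality (B15),
    i.e. perf_ratio(f_norm, alpha) >= gamma. The ratio is non-decreasing in
    alpha, so bisection converges to (approximately) the smallest such alpha;
    larger values simply concentrate selection further on the best parameter."""
    if perf_ratio(f_norm, hi) < gamma:
        return hi                           # target unreachable within the range
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if perf_ratio(f_norm, mid) >= gamma:
            hi = mid
        else:
            lo = mid
    return hi
```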
Computing Selection Probabilities For Each Transmit Parameter
After α has been calculated, at step 470, the selection probability computation module 370 uses the stretch factor parameter (α) and the normalized performance ($\tilde{F}_i$) of the EPF to update the selection probability computation function (SPCF). At step 480, the selection probability computation module 370 can then calculate a selection probability for each of the transmit parameters using the formula from equation (B5), which is reproduced below.
$P(F_i) = \tilde{F}_i^{\alpha} \cdot P(F_{max})$ (B5)
More specifically, the selection probability computation module 370 can use the SPCF of equation (B5) to compute updated selection probabilities (P(F1) . . . P(Fi)) corresponding to each transmit parameter (ν1 . . . νi) in the set of available transmit parameters (ν1 . . . νi), so that each transmit parameter (νi) is associated with a selection probability (P(Fi)). By selecting an appropriate value for the stretch factor parameter (α), the selection probability computation module 370 weights the selection probability P(Fi) for each of the transmit parameters (ν1 . . . νi) to optimize the EPF such that the average performance (Fμ) is guaranteed to be at a certain percentage of the maximum performance (Fmax). As explained above, the SPCF is designed to help ensure that the average performance (Fμ) will converge towards the maximum performance (Fmax) (Fμ ≥ γ·Fmax), which ensures a certain minimum performance.
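Continuing the illustration, and reusing solve_stretch_factor from the sketch above, the selection probabilities of equation (B5), normalized per equation (B7), could be computed as follows; the numeric values in the example are invented:

```python
def update_selection_probabilities(f_norm, alpha):
    """Apply the SPCF of equation (B5): P(F_i) is proportional to
    f_norm[i]**alpha, with P(F_max) fixed by equation (B7) so that the
    probabilities sum to one."""
    weights = [f ** alpha for f in f_norm]
    p_max = 1.0 / sum(weights)              # selection probability of the best entry
    return [w * p_max for w in weights]

# Hypothetical example: three estimated performances and a 90% target ratio.
F = [10.0, 8.0, 5.0]
f_norm = [f / max(F) for f in F]            # [1.0, 0.8, 0.5]
alpha = solve_stretch_factor(f_norm, gamma=0.9)
updated_probs = update_selection_probabilities(f_norm, alpha)
```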
The selection probability computation module 370 then communicates the updated selection probabilities (P(F1) . . . P(Fi)) for each of the transmit parameters to the storage block 380. The storage block 380 stores the selection probabilities associated with each of the transmit parameters. The method 400 then loops back to step 420, where the method 400 repeats on the next packet to be sent.
For purposes of illustrating one example application of the methods and apparatus for opportunistic link adaptation described herein, an example will now be described where the transmit parameter being adapted is a data rate selected from a set of available data rates to optimize data throughput. However, it is to be appreciated that the term "transmit parameter" is not to be construed as being limited to a "data rate." Likewise, it should also be appreciated that although the following description describes techniques for selecting a data rate from a set of available data rates to optimize "throughput," in general the stochastic selection techniques can be implemented in a wide variety of other applications and/or technical fields for selecting a sample/object/parameter from a set of samples/objects/parameters to optimize other types of performance output functions.
In this implementation, the storage block 380 of the link adaptation module 310 stores the set of available data rates (DR1 . . . DRi) and their corresponding selection probabilities (P(DR1) . . . P(DRi)).
As indicated at step 520, the link adaptation module 310 operates as the node prepares to transmit a packet over a wireless communication link.
At step 530, when the link adaptation module 310 prepares to send the packet, the STPSM 325 stochastically selects an appropriate one of the data rates (DR1 . . . DRi) based on the calculated selection probabilities (P(DR1) . . . P(DRi)) assigned to each data rate (DR1 . . . DRi). In one implementation, a random number generator or other pre-computed table can be used to select the DRi.
Thus, a particular data rate value can be selected based on the probability of that particular data rate providing better throughput in comparison to the probabilities of other data rates providing better throughput. The order in which the stochastic parameter selection module 325 selects one of the particular data rates is chosen so that advantageous but "risky" data rates are tried first, and conservative yet "safe" data rates are tried last. This selection is called opportunistic because it allows better data rates to be tried while ensuring that data will be successfully transmitted nonetheless. As will be described below, the stochastic parameter selection module 325 is designed to select data rates such that the link adaptation module 310 focuses or "dwells on" particular ones of the data rates that provide better performance, and only sporadically selects data rates which provide poor performance in order to build up a knowledge database. Among other benefits, throughput is improved and deadlocks during rate selection disappear when there is a completion rate disparity between adjacent data rates.
At step 540, the link adaptation module 310 sends the packet to the MAC module 340 along with an indication that the packet is to be formatted for transmission at the selected data rate. The MAC module 340 processes the packet to prepare it for transmission over the wireless link at the selected data rate, and passes the packet to the physical layer module 350, where the packet is formatted for transmission at the selected data rate. At step 550, the packet is then transmitted over the wireless link towards its destination at the selected data rate.
After transmitting the packet at the selected data rate, at step 555, the physical layer module 350 at a later time receives transmission feedback information from other nodes and passes this transmission feedback information up the protocol stack to the link adaptation module 310.
At step 560, the transmission statistics computation module (TSCM) 362 uses the transmission feedback information to derive throughput statistics related to the transmission of the packet that was transmitted at the selected data rate. In one implementation, the transmission statistics computation module (TSCM) 362 processes transmission feedback information received over the wireless link to determine (e.g., collect, measure and/or compute) the throughput statistics related to transmission of the packet that was transmitted at the selected data rate. In this particular implementation, the throughput statistics are "indicators of transmission throughput" and can include, for example, an average throughput (μTP) and a throughput standard deviation (σTP). The average throughput (μTP) is straightforward to compute: it is a function that estimates the throughput at one particular data rate, and there are as many estimates as there are data rates. The throughput standard deviation (σTP) is largely dependent on a choice of standard deviation measurement method and a choice of confidence interval.
Estimated Throughput Function (ETF)
The throughput statistics are provided to the performance estimator module 365 of the link adaptation module 310. At step 565, the performance estimator module 365 uses the throughput statistics (for the packet(s) that were transmitted at the selected data rate) to generate or update the estimated throughput function (ETF) for the selected data rate. In this implementation, the ETF is generated based on the average throughput (μTP) and the throughput standard deviation (σTP). Stated differently, the average throughput (μTP) and the throughput standard deviation (σTP) are used to define the estimated throughput function (ETF) (TP) that reflects estimated throughput associated with the selected data rate. An example of the ETF is shown below in equation (C1).
$TP = \mu_{TP} - \sigma_{TP} \cdot (1 - 2\beta)$ (C1)
As noted above, the throughput standard deviation (σTP) is largely dependent on a choice of standard deviation measurement method and a choice of confidence interval, and this dependency can be used to control the “aggressiveness” of the data rate selection algorithm. Changing the confidence interval or the standard deviation measurement method does not make one data rate more likely to be better than another from a throughput standpoint.
Aggressiveness Factor (β)
In the estimated throughput function (ETF), an aggressiveness factor (β) is used to change the ETF (TP) from a lower end of an arbitrary confidence interval to an upper end.
There are two rationales for measuring the standard deviation (σTP) of the throughput estimate.
If the objective is to try data rates that are potentially better but have not been investigated or tried much, it makes sense to favor data rates with large means and large standard deviations (which indicates that they have not been tried much and their prediction is good). In this “aggressive” case, a large value of the ETF (TP) is the most favorable criterion, and therefore it is beneficial to use a larger aggressiveness factor (β), where an aggressiveness factor (β) of one (1) is the most aggressive setting. If aggressiveness factor (β) is 1 then the value of the ETF (TP) is equal to the sum of the average throughput (μTP) and the throughput standard deviation (σTP), and is therefore at the high end of the range of possible values of the ETF (TP).
If the objective is to use data rates that are known to perform well, it makes sense to favor data rates with large means and small standard deviations (which indicate that they are good, based on past experience). In this "conservative" case, a small value of the ETF (TP) is the most favorable criterion, and therefore it is beneficial to use a smaller aggressiveness factor (β), where an aggressiveness factor (β) of zero (0) is the most conservative setting. If the aggressiveness factor (β) is 0, then the value of the ETF (TP) is equal to the difference between the average throughput (μTP) and the throughput standard deviation (σTP), and is therefore at the low end of the range of possible values of the ETF (TP).
An aggressiveness factor of 0.5 means that the value of the ETF (TP) will always be the average throughput (μTP), which will naturally favor optimistic predictions.
In one implementation, the throughput standard deviation (σTP) is preferably calculated using a Student's t-distribution, given that the number of samples is often limited by the available bandwidth of the system. In the context of data rate selection, it is preferable to use an aggressive stance for initial trials and a conservative stance for final trials (those that occur close to a packet being dropped and no longer retried). If the overall success rate is fairly low (for example, when the system is being trained or the environment changes too fast), an initially aggressive stance can be subdued and made more conservative. The range of variation of the throughput standard deviation (σTP) is driven by the confidence level assigned to the t-distribution, since the throughput standard deviation (σTP) itself is not being estimated; rather, the confidence interval of the average throughput (μTP) is being estimated.
In one implementation, the ETF is converted into a selection probability or selection order. The ETFs are actual possible outcomes (i.e., not random variables). The throughput estimates are random variables, and therefore a probability of one being larger than the other can be established. This probability, however, is equivocal (equal to 0.5) when there are few measurements. The “stance”, therefore, is a better criterion for selecting an order. Arbitrarily, an ETF that is twice as large should be twice as likely to be selected. This can be generalized to:
$P(TP_i) = k^{\alpha} \cdot P(TP_j)$ for $TP_i = k \cdot TP_j$ (D1).
The ETFs (TPi, TPj) are throughput estimate functions associated with particular data rates i, j. The throughput of a particular data rate is an "estimate" since it cannot be measured when it is only used sporadically. In equation (D1), k represents the ratio of the estimated throughput function (ETF) associated with data rate i (TPi) to the estimated throughput function (ETF) associated with data rate j (TPj), and the parameter α is a "stretch factor parameter." The stretch factor parameter (α) determines the proportionality relationship between the priority order and the throughput estimation. The setting of the stretch factor parameter (α) depends on how much penalty is acceptable from a throughput perspective, as compared to the maximum throughput, in order to "try out" other possibilities. A large stretch factor parameter (α) allows larger throughput estimates to be selected to the exclusion of lower throughputs. A small stretch factor parameter (α) (e.g., close to 0) flattens the spectrum of choice. For an infinitely small stretch factor parameter (α), all data rates are essentially equiprobable. This, of course, is only useful if all possibilities are to be tried evenly to set a baseline.
As will be described below, the stretch factor parameter (α) is ultimately used to update the SPCF at step 570, and the updated SPCF is then used to compute selection probabilities at step 580. Before describing step 570, the SPCF and its derivation, and techniques for calculating stretch factor parameter (α) will now be described.
Selection Probability Computation Function (SPCF)
As noted above, the selection probability computation module 370 includes a SPCF. Derivation of the SPCF will now be described.
The average throughput (TPμ) is expressed in equation (D2):

$TP_\mu = \sum_i P(TP_i) \cdot TP_i$ (D2)

It is to be noted that the average throughput (TPμ) in equation (D2) is different from the average throughput (μTP) defined above with respect to equation (C1). Equation (D2) is the average throughput (TPμ) of all the data rates taken together, weighted according to their selection probabilities P(TPi). In other words, equation (D2) represents the "final" throughput as experienced by the user/system.
All throughputs can be normalized to the maximum throughput (TPmax) to define a normalized throughput ($\widetilde{TP}_i$) as shown in equation (D3). The normalized throughput ($\widetilde{TP}_i$) is the ratio of a throughput (TPi) to the maximum throughput (TPmax):

$\widetilde{TP}_i = TP_i / TP_{max}$ (D3)
Based on equation (D1), the probability of each throughput can be written as:

$P(TP_i) = (TP_i / TP_{max})^{\alpha} \cdot P(TP_{max})$ (D4)

Combining equation (D4) with (D3) provides equation (D5):

$P(TP_i) = \widetilde{TP}_i^{\alpha} \cdot P(TP_{max})$ (D5)
Equation (D5) is the SPCF that is used by the selection probability computation module 370 at step 580 to compute the selection probability P(TPi) associated with each throughput value relative to the selection probability P(TPmax) of the maximum throughput being selected. The SPCF requires that an appropriate value be calculated for the stretch factor parameter (α).
Calculating the Stretch Factor Parameter (α)
Based on equations (D1) and (D4), the sum of the individual selection probabilities P(TPi) is:

$\sum_i P(TP_i) = P(TP_{max}) \cdot \sum_i \widetilde{TP}_i^{\alpha}$ (D6)

The sum of all probabilities must be equal to 1, and therefore equation (D6) can be expressed as:

$P(TP_{max}) = 1 / \sum_i \widetilde{TP}_i^{\alpha}$ (D7)
In other words, the selection probability of the maximum throughput (P(TPmax)) is inversely proportional to the sum of the normalized throughputs ($\widetilde{TP}_i$) raised to the stretch factor parameter (α) power, and the value of the stretch factor parameter (α) can be constrained such that the average throughput (TPμ) fulfills certain requirements. For example, the average throughput (TPμ) should not drop merely because the system is "trying out" data rates, and should not be more than (1−γ)·100% lower than the maximum throughput (TPmax) possible, where γ represents a target performance ratio. This constraint can be expressed in equation (D8) as:
$TP_\mu \ge \gamma \cdot TP_{max}$ (D8).
When the average throughput (TPμ) in equation (D2) is related to the maximum throughput (TPmax) and the normalized throughput ($\widetilde{TP}_i$), then equation (D2) can be represented as equation (D9):

$TP_\mu = TP_{max} \cdot \sum_i P(TP_i) \cdot \widetilde{TP}_i$ (D9)

The relationship in equation (D9) can be rewritten as expressed in equation (D10):

$TP_\mu / TP_{max} = \sum_i P(TP_i) \cdot \widetilde{TP}_i$ (D10)

Expanding the term $\sum_i P(TP_i) \cdot \widetilde{TP}_i$ of equation (D10) using the selection probability of each throughput as defined in equation (D5) results in equation (D11):

$\sum_i P(TP_i) \cdot \widetilde{TP}_i = P(TP_{max}) \cdot \sum_i \widetilde{TP}_i^{\alpha+1}$ (D11)

The combination of equations (D10) and (D11) results in:

$TP_\mu / TP_{max} = P(TP_{max}) \cdot \sum_i \widetilde{TP}_i^{\alpha+1}$ (D12)

Equation (D8) requires that the ratio of the average throughput (TPμ) to the maximum throughput (TPmax) be at least the target performance ratio (γ), i.e., $TP_\mu / TP_{max} \ge \gamma$, and therefore equation (D12) can be expressed as described in equation (D13):

$\gamma \le P(TP_{max}) \cdot \sum_i \widetilde{TP}_i^{\alpha+1}$ (D13)

When equation (D13) is combined with equation (D7), the result is described in equation (D14), which can then be re-written as equation (D15):

$\gamma \le \frac{\sum_i \widetilde{TP}_i^{\alpha+1}}{\sum_i \widetilde{TP}_i^{\alpha}}$ (D14)

$\gamma \cdot \sum_i \widetilde{TP}_i^{\alpha} \le \sum_i \widetilde{TP}_i^{\alpha+1}$ (D15)
At step 565, the performance estimator module 365 determines a stretch factor parameter (α) that fulfills inequality (D15) for the selected target performance ratio (γ). Different algorithms may be used to solve inequality (D15) to find a value of the stretch factor parameter (α). A root of the corresponding nonlinear equation can be found, for example, using Newton's method, the secant method, Brent's method, and the like. Newton-Girard formulas may be used for integer stretch factor parameter (α) values by using the identities between the symmetric polynomials and the sums of the αth powers of their variables. Although all these well-known methods can be used to find stretch factor parameter (α) values, the most appropriate embedded implementation is a look-up table. For each quantized value of $\widetilde{TP}_i$ (between 0 and 1) and quantized value of the stretch factor parameter (α) (somewhere between 1 and 10), a look-up table can be computed for all $\widetilde{TP}_i^{\alpha}$.
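By way of a hypothetical illustration of the look-up table approach mentioned above (the quantization step of 1/64 is an assumed value, not one given in the disclosure):

```python
# Hypothetical embedded-style lookup table for f ** alpha, with the normalized
# value f quantized into 64 steps over (0, 1] and integer stretch factors alpha
# from 1 to 10, as suggested above. The table is filled once at initialization;
# runtime probability updates then reduce to table indexing.
F_STEPS = 64
ALPHA_MAX = 10

POW_TABLE = [[(step / F_STEPS) ** alpha for step in range(1, F_STEPS + 1)]
             for alpha in range(1, ALPHA_MAX + 1)]

def pow_lookup(f_norm_value, alpha):
    """Approximate f_norm_value ** alpha using the precomputed table
    (f_norm_value in (0, 1], alpha an integer in [1, ALPHA_MAX])."""
    step = min(F_STEPS, max(1, round(f_norm_value * F_STEPS)))
    return POW_TABLE[alpha - 1][step - 1]
```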
Computing Selection Probabilities For Each Data Rate
After α has been calculated, at step 570, the selection probability computation module 370 uses the stretch factor parameter (α) and the normalized throughput ($\widetilde{TP}_i$) of the ETF to update the selection probability computation function (SPCF). At step 580, the selection probability computation module 370 can then calculate a selection probability for each of the data rates using the formula from equation (D5), which is reproduced below.
$P(TP_i) = \widetilde{TP}_i^{\alpha} \cdot P(TP_{max})$ (D5)
More specifically, the selection probability computation module 370 can use the SPCF of equation (D5) to compute updated selection probabilities (P(TP1) . . . P(TPi)) corresponding to each data rate (DR1 . . . DRi) in the set of available data rates (DR1 . . . DRi), so that each data rate (DRi) is associated with a selection probability (P(TPi)). By selecting an appropriate value for the stretch factor parameter (α), the selection probability computation module 370 weights the selection probability P(TPi) for each of the data rates (DR1 . . . DRi) to optimize the ETF such that the average throughput (TPμ) is guaranteed to be at a certain percentage of the maximum throughput (TPmax). As explained above, the SPCF is designed to help ensure that the average throughput (TPμ) will converge towards the maximum throughput (TPmax) (TPμ ≥ γ·TPmax), which ensures a certain minimum throughput performance.
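Pulling the data-rate example together, the following hypothetical end-to-end snippet reuses the earlier sketches (stochastic_select, solve_stretch_factor and update_selection_probabilities); all numeric values are invented for illustration:

```python
# End-to-end illustration with invented numbers (not taken from the disclosure),
# reusing solve_stretch_factor, update_selection_probabilities and
# stochastic_select from the earlier sketches.
data_rates = [6, 12, 24, 48]                        # candidate data rates, Mb/s
etf = [5.5, 10.2, 17.8, 12.4]                        # current ETF value per rate

tp_norm = [v / max(etf) for v in etf]               # normalized throughputs (D3)
alpha = solve_stretch_factor(tp_norm, gamma=0.95)   # satisfy inequality (D15)
probs = update_selection_probabilities(tp_norm, alpha)

# probs[i] is the probability that data_rates[i] is used for the next packet:
# the 24 Mb/s rate (largest ETF) receives the biggest share, while the poorer
# rates are still probed occasionally so their estimates stay current.
next_rate = stochastic_select(data_rates, probs)
```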
The selection probability computation module 370 then communicates the updated selection probabilities (P(TP1) . . . P(TPi)) for each of the data rates to the storage block 380. The storage block 380 stores the selection probabilities associated with each of the data rates. The method 500 then loops back to step 520.
In the foregoing description, embodiments have been described as applied to opportunistic link adaptation techniques in which data rates of a set of data rates are stochastically selected to optimize data throughput. However, this application is illustrative of only one possible application, and the disclosed techniques can generally be applied in a wide variety of other contexts for stochastically selecting any type of sample amongst a population of samples. For example, other possible applications that occur in the context of wireless communication networks include, but are not limited to, selection of neighbor nodes for ranging services, and link quality probing or prediction.
In the foregoing specification, specific embodiments have been described. However, one of ordinary skill in the art appreciates that various modifications and changes can be made without departing from the scope of the invention as set forth in the claims below. Accordingly, the specification and figures are to be regarded in an illustrative rather than a restrictive sense, and all such modifications are intended to be included within the scope of the present teachings. The benefits, advantages, solutions to problems, and any element(s) that may cause any benefit, advantage, or solution to occur or become more pronounced are not to be construed as critical, required, or essential features or elements of any or all the claims. The invention is defined solely by the appended claims, including any amendments made during the pendency of this application and all equivalents of those claims as issued.
Moreover, in this document, relational terms such as first and second, top and bottom, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. The terms "comprises," "comprising," "has," "having," "includes," "including," "contains," "containing" or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises, has, includes, or contains a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. An element preceded by "comprises . . . a," "has . . . a," "includes . . . a," or "contains . . . a" does not, without more constraints, preclude the existence of additional identical elements in the process, method, article, or apparatus that comprises, has, includes, or contains the element. The terms "a" and "an" are defined as one or more unless explicitly stated otherwise herein. The terms "substantially", "essentially", "approximately", "about" or any other version thereof are defined as being close to as understood by one of ordinary skill in the art, and in one non-limiting embodiment the term is defined to be within 10%, in another embodiment within 5%, in another embodiment within 1% and in another embodiment within 0.5%. The term "coupled" as used herein is defined as connected, although not necessarily directly and not necessarily mechanically. A device or structure that is "configured" in a certain way is configured in at least that way, but may also be configured in ways that are not listed.
It will be appreciated that some embodiments may be comprised of one or more generic or specialized processors (or “processing devices”) such as microprocessors, digital signal processors, customized processors and field programmable gate arrays (FPGAs) and unique stored program instructions (including both software and firmware) that control the one or more processors to implement, in conjunction with certain non-processor circuits, some, most, or all of the functions of the method and/or apparatus described herein. Alternatively, some or all functions could be implemented by a state machine that has no stored program instructions, or in one or more application specific integrated circuits (ASICs), in which each function or some combinations of certain of the functions are implemented as custom logic. Of course, a combination of the two approaches could be used.
Moreover, an embodiment can be implemented as a computer-readable storage medium having computer readable code stored thereon for programming a computer (e.g., comprising a processor) to perform a method as described and claimed herein. Examples of such computer-readable storage mediums include, but are not limited to, a hard disk, a CD-ROM, an optical storage device, a magnetic storage device, a ROM (Read Only Memory), a PROM (Programmable Read Only Memory), an EPROM (Erasable Programmable Read Only Memory), an EEPROM (Electrically Erasable Programmable Read Only Memory) and a Flash memory. Further, it is expected that one of ordinary skill, notwithstanding possibly significant effort and many design choices motivated by, for example, available time, current technology, and economic considerations, when guided by the concepts and principles disclosed herein will be readily capable of generating such software instructions and programs and ICs with minimal experimentation.
The Abstract of the Disclosure is provided to allow the reader to quickly ascertain the nature of the technical disclosure. It is submitted with the understanding that it will not be used to interpret or limit the scope or meaning of the claims. In addition, in the foregoing Detailed Description, it can be seen that various features are grouped together in various embodiments for the purpose of streamlining the disclosure. This method of disclosure is not to be interpreted as reflecting an intention that the claimed embodiments require more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive subject matter lies in less than all features of a single disclosed embodiment. Thus the following claims are hereby incorporated into the Detailed Description, with each claim standing on its own as a separately claimed subject matter.